# Harry R. Schwartz

Software engineer, nominal scientist, gentleman of the internet. Member, ←Hotline Webring→.

### The Mathematics of Lavender Wands

Published. Tags: art+design, math.

I’ve been staying in a little place in Portland for the last few weeks, and I noticed that there’s a lavender bush starting to bloom in the garden outside. In my family when the lavender blooms we all make lavender wands, so I picked up some ribbon and got to work. The basic notion is:

• Grab a bundle of twenty or so stems,
• Tie the end of the ribbon around the heads,
• Bend the stems down over the ribbon so they cover the buds,
• Weave the ribbon through the stems, making a container around the buds, and
• Tie it off at the bottom of the heads with a bow.
• Optionally wind the ribbon down around the “handle” of the wand for a nice effect.

There’s some cute math to this, though! If you’re weaving 1-over-1-under you’ll need an odd number of stems. Otherwise every stem will have all “overs” or all “unders” and the whole thing will fall apart.

By modifying the number of stems and the weaving pattern (that is, the number of overs and number of unders) we can weave different patterns. For example, if we wanted wider visible blocks of ribbon, we could weave 2-over-2-under. If we still wanted a checkerboard pattern, though, we’d need to change the number of stems. The width of the repeated weaving pattern is 4 stems (2-over-2-under), and we want each block to be offset from the series above by 2 stems. So the number of stems must be 2 greater than a multiple of 4. Mathematically, we’d need the number of stems $n$ to satisfy:

$$n \equiv 2 \pmod{4}$$

Realistically, we’ll also want to be able to repeat our pattern at least three times per go-around. We’ll also want to keep our wands fairly small.
Let’s add the further constraint that:

$$n \ge 3 \cdot 4 = 12$$

so the pattern repeats at least three times per go-around. Here’s the set of possible numbers of stems that will satisfy these constraints (and thus work for this weaving pattern):

$$n \in \{14, 18, 22, 26, \ldots\}$$

For a given weaving pattern and offset, how many stems should we use? Let’s identify the variables and find a general set of constraints. We’ll call the pattern width $p$ and the offset $o$. In the above example, $p = 4$ (from “2-over-2-under”) and $o = 2$ (since we want the pattern on subsequent rows to be shifted over by 2 stems). We want to find the number of stems $n$ satisfying the equations:

$$n \equiv o \pmod{p}, \qquad n \ge 3p$$

As another example, suppose we want to weave a pattern of 2-over-3-under with an offset of 1—this makes a pretty spiraling pattern. Let’s just plug in $2 + 3 = 5$ for $p$ and 1 for $o$ and solve:

$$n \equiv 1 \pmod{5}, \qquad n \ge 15 \quad\implies\quad n \in \{16, 21, 26, 31, \ldots\}$$

Our pattern should work if our number of stems is equal to any of these solutions (the wand photographed here uses this pattern with 21 stems).

### A Lavender Wand Stem Count Calculator

Now that we’ve got this system of constraints, we might as well whip up some code so we won’t have to solve it by hand any more. If you’ve got a blooming lavender bush nearby, give it a shot! Weaving a lavender wand is awfully relaxing, and it’s easy to experiment with nifty patterns.
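The constraint system above is simple enough to sketch in a few lines of Python. This is an illustration, not the post’s original interactive calculator; the function name `stem_counts` and the default cap of 50 stems are assumptions made for the example:

```python
def stem_counts(overs, unders, offset, max_stems=50):
    """Stem counts that work for an overs-over-unders-under weave.

    A count n works when n % p == offset (where p = overs + unders is the
    pattern width, and each row shifts by `offset` stems) and n >= 3 * p
    (at least three pattern repeats per go-around).
    """
    p = overs + unders
    return [n for n in range(3 * p, max_stems + 1) if n % p == offset]

# 2-over-3-under with an offset of 1: 21 stems shows up, matching the
# wand photographed in the post.
print(stem_counts(2, 3, 1))
```

For the 2-over-2-under checkerboard with offset 2, the same function yields 14, 18, 22, and so on, agreeing with the hand-derived solution set.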
# Decay rate of a scalar particle under scalar/pseudoscalar lagrangian

1. Mar 18, 2009

### torus

Hi, I'm trying to solve problem 48.4 of Srednicki's QFT book. It goes something like this:

1. The problem statement, all variables and given/known data

We have a scalar field with mass M and a Dirac field with mass m (M > 2m). The interaction part of the lagrangian is
$$L_a = g \varphi \bar{\Psi}\Psi$$
$$L_b = g \varphi \bar{\Psi}i\gamma_5 \Psi$$
Now the decay rates $$\Gamma_{a/b}$$ of the process $$\varphi \rightarrow e^+ e^-$$ are to be calculated and compared. It turns out that $$\Gamma_b > \Gamma_a$$, which should now be explained in light of parity/angular momentum conservation.

2. Relevant equations

3. The attempt at a solution

I did all the calculations, but I am having a hard time with the explanation. I know that $$L_a/L_b$$ is scalar/pseudoscalar under parity, but I don't see why this should affect the decay rate. Any help is welcome.

Regards,
torus
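For reference, the tree-level rates these couplings give (a standard computation, sketched here from the usual textbook result rather than taken from the thread; $\beta = \sqrt{1 - 4m^2/M^2}$ is the electron speed in the $\varphi$ rest frame) are

$$\Gamma_a = \frac{g^2 M}{8\pi}\,\beta^3, \qquad \Gamma_b = \frac{g^2 M}{8\pi}\,\beta,$$

so $\Gamma_b > \Gamma_a$ follows from $\beta < 1$. The extra factor of $\beta^2$ suppressing $\Gamma_a$ is a P-wave threshold suppression: since the $e^+e^-$ pair has intrinsic parity $-1$, its total parity is $(-1)^{\ell+1}$, so the scalar coupling forces the pair into an $\ell = 1$ state, while the pseudoscalar coupling allows $\ell = 0$.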
# American Institute of Mathematical Sciences

May 2010, 27(2): 787-798. doi: 10.3934/dcds.2010.27.787

## Omega-limit sets for spiral maps

1 Department of Mathematical Sciences, Indiana University - Purdue University Indianapolis, 402 N. Blackford Street, Indianapolis, IN 46202, United States

Received October 2009; Revised February 2010; Published February 2010

We investigate a class of homeomorphisms of a cylinder, with all trajectories convergent to the cylinder base and one fixed point in the base. Let A be a nonempty finite or countable family of sets, each of which can a priori be an $\omega$-limit set. Then there is a homeomorphism from our class for which A is the family of all $\omega$-limit sets.

Citation: Bruce Kitchens, Michał Misiurewicz. Omega-limit sets for spiral maps. Discrete & Continuous Dynamical Systems - A, 2010, 27 (2): 787-798. doi: 10.3934/dcds.2010.27.787
2) - 2/7: 1451A - Subtract or Divide - Accepted; 1451B - Non-Substring Subsequence - Accepted; 1451C - String Equality - Accepted. 2), problem: (C) The C... Codeforces Round #250 (Div. 2) problems. Codeforces Round #655 (Div. [Beta] Harwest — Git wrap your submissions this Christmas! This is a video editorial on the Codeforces #439 Div 2, C problem. 2), problem: (A) Coder Problem Solution. The problem statement has recently been changed. Maximum Xor Secondary; Problem C. Game on Tree; Problem D. k-Maximum Subsequence Sum; Problem E. Sequence Transformation. Always challenge yourself. But for strictly Div2 problems (A, B), they usually emphasize coming up with some simple but clever idea, or being able to quickly implement an annoying algorithm. I was meaning to ask which one we should go for first. This round will be rated for the participants with rating lower than 2100. They usually don't require a lot of coding and often don't have some well-known algorithm in them, hence the "implementation, sorting, greedy" tags. Never use someone else's code, read the tutorials or communicate with another person during a virtual contest. With this extension you can track your practice progress on Codeforces through time phases: it simply adds a new tab to your profile (or any other profile), and in this tab you can find data about each time phase. → Virtual participation Virtual contest is a way to take part in a past contest, … 2) Finished → Practice? It is guaranteed that the sum of $n$ over all test cases does not exceed $2 \cdot 10^5$. Meet IT family members worked hard over the last few months to provide you with our favourite challenges we came up with. The first line of each test case contains an integer $n$ ($1 \leq n \leq 2 \cdot 10^5$) — the length of the given permutation. When I first joined Codeforces I would do tons of virtual competitions, and that quickly improves your skill at solving those A, B problems. He even invented a new chess piece named Coder.
Finally, for beginners I'm a proponent of the approach SuperJ6 mentioned — solve problems and learn the concepts that are needed to solve them. In Division 1 there are three problems too, which are called Div1 Easy, Div1 Medium, and Div1 Hard. It is not currently accepting answers. And I'm saying that that is a wrong way to think about it. 2, based on Technocup 2018 Elimination Round 2) A. It has to do with palindromes and really big numbers. On Nov/19/2020 17:35 (Moscow time) Educational Codeforces Round 98 (Rated for Div. Iahub likes chess very much. Codeforces. This is the solution approach for the Codeforces 1355B problem. If you've seen these problems, a virtual contest is not for you - solve these problems in the archive. Ignoring that my comment is from 5 years ago: of course, if you can comfortably do A, B you move on. The first line contains the number of test cases $t$ ($1 \le t \le 100$). Virtual contest is a way to take part in a past contest, as close as possible to participation on time. Should I go for DP first, or should I go for trees and then eventually graphs from there? Before contest Codeforces Round #683 (Div. It also helps you to manage and track your programming competitions training for you and your friends. Codeforces Round #250 (Div. Want to improve this question? Thank you Enchom for such a comprehensive answer. The only programming contests Web 2.0 platform. 2020-2021 ICPC, NERC, Southern and Volga Russian Regional Contest (Online Mirror, ICPC Rules), Codeforces WatchR: 10K+ downloads on Google Play, Technocup 2021 Elimination Round 3 and Round #692 (Div. Codeforces Round #225 (Div. Codeforces 689 Division 2, problem B explanation [closed] Ask Question Asked 6 days ago. 2) will start. Series of Educational Rounds continue being held as a Harbour.Space University initiative! It can be shown that you need at least $2$ exchanges to sort the second permutation.
In my opinion, in C, D, E you can expect a lot of stuff, since it already overlaps with Div1. Define a special exchange as the following: choose any subarray of the scores and permute its elements such that no element of the subarray ends up in the same position as it was before the exchange. 3) post-contest discussion. Codeforces Round #648 (Div. Codeforces Round #680 [Div. 1 and Div. Want to solve the contest problems after the official contest ends? 256 megabytes. Programming competitions and contests, programming community. I think you can look at the problemset and find out which types of problems usually exist in Div2 contests. AtCoder Beginner Contest 119, C: Synthetic Kadomatsu; TopCoder SRM 744, Division 1, Level 1 (Division 2, Level 3): ModularQuadrant; CODE THANKS FESTIVAL 2017, H: Union Sets. Add details and clarify the problem … Codeforces. Each test contains multiple test cases. ... Codeforces Beta Round #77 (Div. Programming competitions and contests, programming community. Closed. When I was starting on Codeforces, I found out that the best way to get better at Div2 problems is to solve Div2 problems. An array $a$ is a subarray of an array $b$ if $a$ can be obtained from $b$ by deletion of several (possibly zero or all) elements from the beginning and several (possibly zero or all) elements from the end. Codeforces Beta Round #77 (Div. To all my Indian juniors and experienced professionals: never join Scaler Academy (Interviewbit). It also helps you to manage and track your programming competitions training for you and your friends. If you just want to solve some problem from a contest, a virtual contest is not for you - solve this problem in the archive. Programming competitions and contests, programming community. Solutions to Codeforces Problems. Codeforces Round #686 (Div. The problem … You can read the details about the cooperation between Harbour.Space University and Codeforces in the blog post.
Solved problem solutions from Codeforces. Search for Pretty Integers (872A) B. C/C++ Logic & Problem Solving: I have solved so many problems in my past days; programmers can get inspired by my solutions and find a new solution for the same problem. ... Thank you for replying to such an old post, even. Lately, in Round 449, Division 2, there was a problem which caught my interest. The only programming contests Web 2.0 platform. I am so sorry, as I didn't know I could not undo it. If we are kind of OK with solving Div 2 (A, B) questions, what algorithms would you recommend to level up now to move on to C problems and above? Patrick is sure that his scores across $n$ sessions follow the identity permutation (i.e. in the first game he scores $1$ point, in the second game he scores $2$ points, and so on). Given a permutation of $n$ integers, please help Patrick find the minimum number of special exchanges needed to make the permutation sorted! Today I'm going to present problem C from today's round, which, even though it seems quite annoying, can be reduced to something relatively small implementation-wise. On Dec/17/2020 17:35 (Moscow time) Educational Codeforces Round 100 (Rated for Div. 2] (on the problems of the Moscow Team Olympiad) By ch_egor, 3 weeks ago, translation. Hi everybody, Today's B: … Solving problems on Codeforces is a kind of hobby. For example, performing a special exchange on $[1,2,3]$ can yield $[3,1,2]$, but it cannot yield $[3,2,1]$, since the $2$ is in the same position. 2 contests emphasize on. Nearest Fraction; Problem A. Rectangle Puzzle; Problem B. do Div. 2 Only), problem: (A) Football Problem Solution. You can read the details about the cooperation between Harbour.Space University and Codeforces in the blog post. There are Div. 1 and Div. 2, and there are contests for each division. Codeforces Round 461 Div 2 Problem C - Duration: 7:21. Codeforces Round 692 (Div.
If you just want to solve some problem from a contest, a virtual contest is not for you - solve this problem in the archive. 2 contests. 3) - 2/6: 1454A - Special Permutation - Accepted; 1454B - Unique Bid Auction - Accepted; 1454C - Sequence Transformation - Accepted; 1454D - Number into Sequence - Accepted. Codeforces Round #685 (Div. I need to know so I can improve in these areas so I can do better in future Div. But as you said, it is often implementation, greedy, maths, constructive, brute force, strings, sometimes graphs. For each test case, output one integer: the minimum number of special exchanges needed to sort the permutation. 2 Only), problem: (A) Football Problem Solution. The second line of each test case contains $n$ integers $a_{1}, a_{2}, \ldots, a_{n}$ ($1 \leq a_{i} \leq n$) — the initial permutation. In the first permutation, it is already sorted, so no exchanges are needed. Codeforces. input. Help needed from participants with rating up to 1500. Help me to find out the right approach for this code. The 'science' of training in competitive programming. A Coder can move (and attack) one square horizontally or vertically. → Pay attention Before contest Codeforces Round #688 (Div. I don't know many basic data structures and algorithms, like queues, trees, and graphs. So should I learn them initially in this order, or randomly pick any topic and learn it? 2 … Do not go by topic; just look at C problems, and if you can't solve one, look at the editorial, and if there is a topic you don't know, learn that. Never use someone else's code, read the tutorials or communicate with another person during a virtual contest. XD. Hello Codeforces! Perform a special exchange on the range ($1, 5$), then perform a special exchange on the range ($1, 4$). Codeforces Round #691 (Div.
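The special-exchange problem quoted above has a short answer: it is always 0, 1, or 2 exchanges. A minimal Python sketch of that reasoning (my own illustration consistent with the samples in the statement, not the official editorial code):

```python
def min_special_exchanges(a):
    """Minimum special exchanges to sort a permutation of 1..n.

    A special exchange permutes a subarray so that no element of the
    subarray stays in place. The answer is 0 if the array is already
    sorted; 1 if the mismatched positions form one contiguous block
    containing no fixed points (that whole block can be fixed at once);
    and 2 otherwise.
    """
    bad = [i for i in range(len(a)) if a[i] != i + 1]
    if not bad:
        return 0  # already the identity permutation
    first, last = bad[0], bad[-1]
    # One exchange works iff every position in [first, last] is mismatched,
    # since a special exchange may not leave any element of the subarray fixed.
    if all(a[i] != i + 1 for i in range(first, last + 1)):
        return 1
    return 2

print(min_special_exchanges([1, 2, 3, 5, 4, 7, 6]))  # → 1
```

This matches the example in the statement: $[3,2,1]$ cannot be sorted in one exchange because the $2$ would have to stay in place, so the answer there is 2.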
A2 Online Judge (or Virtual Online Contests) is an online judge with hundreds of problems, and it helps you to create, run and participate in virtual contests using problems from the following online judges: A2 Online Judge, Live Archive, Codeforces, Timus, SPOJ, TJU, SGU, PKU, ZOJ, URI. 2, ... Main concepts in Div. Maximum of Maximums of Minimums (872B). Round #686 (Div. This round will be rated for the participants with rating lower than 2100. In Division 2 there are three problems, which are called Div2 Easy, Div2 Medium, and Div2 Hard. I didn't want to give a downvote. 2) and Technocup 2021 — Elimination Round 3. A new CF update that you may not have noticed. Invitation to CodeChef December Cook-Off 2020. Codeforces #172 Tutorial by xiaodao. Contents: Problem 2A. do Div. Hello Codeforces! Programming competitions and contests, programming community. Active 6 days ago. 2 contests emphasize on. There is no real benefit in prioritising one over another, since you'll need them all if you want to do well. Regarding topics: DP, trees and graphs in general are very basic, so you will have to learn all of them eventually. Only ICPC mode is supported for virtual contests. Codeforces Round #440 (Div. We hope that you will enjoy them as much as we did :) It can be proved that under the given constraints this number doesn't exceed $10^{18}$. Good luck :). A. Coder. 1 second. Contribute to s4kibs4mi/Codeforces development by creating an account on GitHub. Description of the test cases follows. Never use someone else's code, read the tutorials or communicate with another person during a virtual contest. 2) ... solve these problems in the archive. standard input.
I just wanted to see what happens if there is no vote (e.g. 0 votes), and what happens if anyone downvotes it. Just register for practice and you will be able to submit solutions. 7:21. Viewed 15 times -2. standard output. However, when he checks back on his record, he sees that all the numbers are mixed up! Virtual contest is a way to take part in a past contest, as close as possible to participation on time. memory limit per test. This question needs details or clarity. Word Capitalization; Problem 2B. By Wayoutfinisher, 6 years ago: Hey everyone, I want to know what concepts (e.g. implementation, sorting, greedy, etc.) time limit per test. Codeforces. output. I need to know so I can improve in these areas so I can do better in future Div. Codeforces is one of the most important websites for any competitive programmer. Contribute to AhmedRaafat14/CodeForces-Div.2A development by creating an account on GitHub. Problem-solving of recent Div1 A-B problems from Codeforces. 2) 4 days. My review of Scaler Academy. 2) will start. Series of Educational Rounds continue being held as a Harbour.Space University initiative! You can virtually participate and try to get A and B right in the time limit, or simply practice (though I prefer virtual participation). 2) Editorial. 1, Div. Matlab Finite Element Method FEM 2D Gaussian points - Duration: 24:03. If you just want to solve some problem from a contest, a virtual contest is not for you - solve this problem in the archive. I want to know what concepts (e.g. implementation, sorting, greedy, etc.) I see you've done only one official competition, so there are still 200+ competitions waiting for you. Patrick likes to play baseball, but sometimes he will spend so many hours hitting home runs that his mind starts to get foggy! It will make progress more natural and applications of the topics will be more obvious. 1 + Div.
<334r> By the Act of the 18 Car. 2 entituled, An Act for Encouragement of Coynage, the salaries of the Officers of the Mint, & the charges of providing maintaining & repairing of the Houses Offices & Buildings & (those of providing maintaining & repairing) other necessaries for assaying melting down & coyning, are limited to 3000£ (for preventing extravagance,) & the overplus (for encouraging the Coynage) is appropriated to the expence wast & charge of assaying melting down & coyning, & buying in of bullion to coyne. And these necessaries, in the clause preceding, are called the charge or expence of the Mint, & the overplus is called the charge or expence of Assaying melting down & coyning & the encouragement of bringing in bullion to coyne. By the first I understand such necessaryes for coining as may be limited without discouraging the coynage: by the second such as cannot be limited without danger of discouraging it, that is, such as are occasioned by a coynage & increase or decrease therewith. The Act of Parliament reckons the Houses Offices & Buildings among the necessaries, & the Indenture of the Mint made at that time adds the Diet of the Officers & allows 2600£ per annum for the salaries & leaves only 400£ for the buildings Diet & other necessaries or necessary provisions whereby the Master may be enabled to carry on the coynage. Vpon the contract by Indenture between the Crown & the Master & Worker for the time being, some Officers of the Mint act in behalf of the Crown as cheques upon the Master to see that he performs his contract duely, & endeavour that the Money be well coyned, & others act under him for performing that contract. And by this Indenture the Warden pays the Salaries of the former & the charges of the Diet of the Officers, & other necessary charges to be employed in & about the making of the moneys & repairing of the Offices & Houses necessary to be employed in the said service. These are the necessaries within the 3000£.
And the Master out of the Overplus pays the Moneyers 9d$\frac{1}{2}$ per pound weight of silver & 3s. 6d per pound weight of Gold for drawing cutting flatting sizing marking & coyning the same & for all their labour, wast, & expence therein, & for keeping in repair all the Rollers & Instruments to cut flatten make round & size the pieces & mark them on the edges, & all other Tools Engins & Instruments amongst which are the Mills & Presses & the Scales weights vices & files for sizing the pieces. But the wooden worke of the Mills Presses & Cutters & the Nealing & blanching furnaces, & the furnaces in the melting houses are repaired by the direction of the Master & Worker & the charges thereof are placed in his account according to the course of the Mint. & so are all the charges of Assaying (vizt. in Char-coal, Aqua fortis, water silver, Lead, Cuppels, Furnaces &c) & those of reducing the Gold & Silver to standard by refining & allay. All these charges are paid by the Master out of the surplus above the three thousand pounds, vizt. the charges of repairing the Iron work of the Mills, Presses, Flatters & other coining tools & those of reducing the Bullion to standard are so paid by vertue of certain clauses of the Indenture, & those of Assaying by vertue of a clause of the coynage Act. The wooden work of the Mills, Presses & Flatters relate to the coining Tools. The Assay furnace is a moveable engine made of copper plates. The other <334v> furnaces are distinguished from the buildings in being under the Masters direction. He erects repairs removes & rebuilds them without medling with the buildings or asking the consent or leave of the other Officers & he places the charge of repairing them in his own Account according to the course of the Mint, while the charge of repairing the buildings are placed in the wardens Account. 
And the reason of this distinction seems to be that the Master may be enabled to dispatch the coynage & make delivery with all convenient speed according to his covenants without staying for the consent or Order of the other Officers or being retarded by the want of money, while the Salaries of the Officers & the charges of providing & repairing the buildings & other necessaries in the wardens Accompt are limited to 3000£. for preventing extravagance. And by this means the Warden & Master are enabled to make up their Accounts severally without depending upon one another. If any doubt arise about the force of the custome or course of the Mint, this course (not being contrary to a higher Law) is made a Law by the following clause of the Indenture of the Mint. And that the said Master & Worker shall upon his Accompt yearly to be made of his Receipts, Payments, Charges & disbursements before the Auditors of the Mint or Mints for the time being have full allowance defalcation & discharge of & for all such sum & sums of money as he shall duly pay & disburse according to the true intent & meaning of the above recited Letters Patents & according to the directions hereafter in these presents expressed & according to the course of the said Mint or Mints respectively, as by the same Acts of Parliament is directed & appointed. The last words relate to the words of the coynage Act, That the moneys shall be issued out of the Office of Receipt of the Mint from time to time according to the manner & course of the said Mint. The Master & Worker also as Treasurers of the Mint pays the fees at the Exchequer & Treasury upon receiving the coynage money, & those for passing the Accounts through the severall Offices of the Exchequer. He pays also the charges of trying the Pix, & the fees for summoning the Iury & entrying the Veredict. The Pix is tryed by the Assay & the charges thereof belong to that head. But the charge of a Dinner for the Iury being too great to come within the 3000£. 
hath been hitherto paid out of the civil List. <335r> The particulars of the Master & Workers Accompt for the year 1712 are as follow{s}

    The salaries upon the Indenture                         £ 1080.  –.  –  }
      upon {pheticular} warrants                            £  515.  –.  –  }  £ 1595. –. –
    The Coynage per pound weight                            £ 1058.  –.  –
    Put into the Pix                                        £  203. 13.  6
    Lost by Assays                                          £    2.  2.  4
    Charge of new gold Furnaces                             £   64. 13.  –
    Charge of Assaying                                      £   92. 15.  9
    Charge of reducing the Bullion to standard              £   37.  2. 11
    Paid to the Moneyers by Act of Parliament for their
      service in Scotland                                   £ 2692. 15.  2$\frac{1}{2}$
    Fees at the Exchequer & Treasury in receiving the
      coynage money                                         £   15.  3.  –
    Fees of passing through the several Offices of the
      Exchequer the Accompts of the Warden & Master
      for the year 1711                                     £   22.  1.  6
    The Auditors Fee                                        £   84.  –.  –
    Imprest to the Warden                                   £ 2004.  9.  –

The gold furnaces were necessary to be repaired for carrying on the coynage & the charges thereof & those of Assaying & reducing the bullion to standard were free from extravagance & just & unavoidable & the fees of the Exchequer & Treasury & other Offices were customary & necessary to be paid, & all these expences are placed in my accompt according to the course of the Mint & the Vouchers are good. And therefore all these charges are I think to be allowed at present by the article of the Indenture above recited. And the Warden is to discharge himself of what has been imprest to him, & in his next Accompt to charge himself with the surplus, if any there be. The Salary of 40£ to the Wardens second Clerk is now ceased but in its stead the charges of the dinner of the Iury at the last trial of the Pix (amounting to about 92£) will come into the next years accompt.
# Ex.11.1 Q8 Perimeter and Area - NCERT Maths Class 7

## Question

A door of length $$2\,\rm{}m$$ and breadth $$1\,\rm{}m$$ is fitted in a wall. The length of the wall is $$4.5\,\rm{}m$$ and the breadth is $$3.6\,\rm{}m$$ (see Fig. 11.6). Find the cost of white washing the wall, if the rate of white washing the wall is $$\rm{}Rs\, 20 \,per\, m^2$$.

## Text Solution

What is known?

A door of length $$2\,\rm{}m$$ and breadth $$1\,\rm{}m$$ is fitted in a wall of length $$4.5\,\rm{}m$$ and breadth $$3.6\,\rm{}m$$.

What is unknown?

The cost of white washing the wall, if the rate of white washing the wall is $$\rm{}Rs\, 20 \,per\, m^2.$$

Reasoning:

Since the door will not be whitewashed, we have to subtract the area of the door from the area of the wall. After finding the area to be whitewashed, multiply that area by the rate of white washing $$\rm{}per\, m^2$$ to get the cost.

Steps:

Given, length of wall $$= 4.5\,\rm{}m$$ and breadth of wall $$= 3.6\,\rm{}m$$

\begin{align}\text{Area of wall}&= {\rm{Length}} \times {\rm{Breadth}}\\ &= 4.5 \times 3.6\\&= 16.2{\rm \,m^2}\end{align}

Given, length of door $$= 2\,\rm{}m$$ and breadth of door $$= 1\,\rm{}m$$

\begin{align}\text{Area of door}&= {\rm{Length}} \times {\rm{Breadth}}\\&= 2 \times 1\\&= 2{\rm \,m^2}\end{align}

\begin{align}\text{Area of wall for white wash}&= {\text{Area of wall}} - {\text{Area of door}}\\ &= 16.2{\rm \,m^2} - 2{\rm \,m^2}\\&= 14.2{\rm \,m^2}\end{align}

\begin{align}\text{The rate of white washing the wall}= \rm{Rs }\,20\,{\rm{ per\, }}{\rm m^2}\end{align}

\begin{align}\therefore{\text{The cost of white washing}}\;14.2{\rm{ }}{\,\rm m^2} &= 14.2 \times 20 = {\rm{Rs\, }}284\end{align}
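The arithmetic in the solution can be checked with a few lines of Python (a sketch; the variable names are mine):

```python
# Whitewash cost: wall area minus door area, times the rate (Ex. 11.1 Q8).
wall_area = 4.5 * 3.6        # m^2
door_area = 2 * 1            # m^2
area_to_wash = wall_area - door_area
cost = area_to_wash * 20     # Rs 20 per m^2
print(round(area_to_wash, 1), round(cost))   # 14.2 284
```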
The Question: I am a bit confused. What is the difference between a linear and an affine function? Any suggestions will be appreciated.

The Question Comments: A quick definition for linearity would be “$f(x)$ is linear if $f(\alpha x_1+\beta x_2)=\alpha f(x_1)+\beta f(x_2)$”. This is coherent with the
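The linearity identity quoted in the comment can be checked numerically: a linear map satisfies it for every choice of coefficients, while an affine map $f(x)=ax+b$ with $b\neq 0$ fails it. A small Python sketch (all names here are mine):

```python
def looks_linear(f, trials):
    """Test f(a*x1 + b*x2) == a*f(x1) + b*f(x2) on sample coefficients."""
    return all(
        abs(f(a * x1 + b * x2) - (a * f(x1) + b * f(x2))) < 1e-9
        for a, b, x1, x2 in trials
    )

trials = [(2.0, -1.0, 3.0, 5.0), (0.5, 4.0, -2.0, 7.0)]
linear = lambda x: 3 * x        # linear: f(0) = 0
affine = lambda x: 3 * x + 1    # affine but not linear: f(0) = 1
```

So every linear function is affine, but an affine function is linear only when its constant term is zero.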
Astérisque, 2004; 232 pp; softcover
Number: 297
ISBN-10: 2-85629-168-6
ISBN-13: 978-2-85629-168-9
List Price: US$59
Individual Members: US$53.10
Order Code: AST/297

This item is also sold as part of the following set: AST/296/97

This first of two bound volumes presents the proceedings of the conference Complex Analysis, Dynamical Systems, Summability of Divergent Series and Galois Theories, held in Toulouse on the occasion of J.-P. Ramis' sixtieth birthday. The first volume opens with two articles composed of recollections and three articles on J.-P. Ramis' works on complex analysis and ODE theory, both linear and non-linear. This introduction is followed by papers concerned with Galois theories, arithmetic or integrability: analogies between differential and arithmetical theories, $$q$$-difference equations, classical or $$p$$-adic, the Riemann-Hilbert problem and renormalization, $$b$$-functions, descent problems, Krichever modules, the set of integrability, Drach theory, and the VI$${}^{{th}}$$ Painlevé equation.

The second volume contains papers dealing with analytical or geometrical aspects: Lyapunov stability, asymptotic and dynamical analysis for pencils of trajectories, monodromy in moduli spaces, WKB analysis and Stokes geometry, first and second Painlevé equations, normal forms for saddle-node type singularities, and invariant tori for PDEs.

The volumes are suitable for graduate students and researchers interested in differential equations, number theory, geometry, and topology.

A publication of the Société Mathématique de France, Marseilles (SMF), distributed by the AMS in the U.S., Canada, and Mexico.
Orders from other countries should be sent to the SMF. Members of the SMF receive a 30% discount from list.

Readership: Graduate students and researchers interested in differential equations, number theory, geometry, and topology.

Table of Contents

F. Cano, R. Moussu, and F. Sanz -- Pinceaux de courbes intégrales d'un champ de vecteurs analytique
B. Dubrovin -- On analytic families of invariant tori for PDEs
N. Joshi, K. Kahwara, and M. Mazzocco -- Generating function associated with the determinant formula for the solutions of the Painlevé II equation
V. Kaloshin, J. N. Mather, and E. Valdinoci -- Instability of resonant totally elliptic points of symplectic maps in dimension $$4$$
T. Kawai, T. Koike, Y. Nishikawa, and Y. Takei -- On the Stokes geometry of higher order Painlevé equations
F. Loray -- Versal deformation of the analytic saddle-node
C. Simpson -- Asymptotics for general connections at infinity
19 Jul 2017

# Week 6 – Applying Hydrostatic Forces to Boat

I finally apply forces to the boat this week. Not quite what I expected.

Last week, I obtained the fully submerged triangles on the boat's mesh so that I can apply hydrostatic forces to those triangles. This week, I will apply those forces to the triangles.

The buoyancy force acting on the body is the sum of all hydrostatic forces acting on each fully submerged triangle. As far as the linear force is concerned, we can sum only the vertical component of the hydrostatic force, since we have seen that the other components cancel each other out. The force on a submerged triangle is:

$$\overrightarrow F = -\rho g h_{center} \overrightarrow n$$

Where:

- F is the hydrostatic force acting on the triangle,
- ρ is the density of the fluid,
- g is the gravitational acceleration,
- h_center is the depth of submersion of the centre of the triangle that we are applying the force to,
- n is the unit normal of the triangle face.

We can get the height of the centre of the triangle by averaging the depths below water of its three vertices. We can find the normal of the triangle from the cross product of two of its sides. The g component can easily be obtained using Unreal's physics libraries. Since we have each of the required parameters, we calculate the amount of hydrostatic force to apply to the triangle as follows:

```cpp
float hCentre = (heights[0] + heights[1] + heights[2]) / 3;

FVector triNormal = (FVector::CrossProduct(
    mVertices[submergedTri.Index1] - mVertices[submergedTri.Index2],
    mVertices[submergedTri.Index2] - mVertices[submergedTri.Index3])).GetSafeNormal();

FVector hydrostaticForce = FVector::ZeroVector;
if (triNormal.Z < 0) // keep downward-facing normals; upward-facing submerged normals come from concave geometry
{
    hydrostaticForce = ((GetWorld()->GetGravityZ() * mMeshComponent->GetBodyInstance()->GetBodyMass()) // The gravitational force acting on the body.
        * hCentre * triNormal);
}
```

In the case where a concave part of the boat registers as submerged in the water, it needs to be filtered out; hence I'm checking whether the Z component of the normal is facing down or up.
Next, we need to apply this force to the triangle. This can easily be done using Unreal's handy AddImpulseAtLocation method. We need to apply the impulse at the centre of gravity of the triangle, i.e. its centroid. We can get the centroid by averaging each component of the triangle's vertices:

```cpp
FVector centroid(
    (mVertices[submergedTri.Index1].X + mVertices[submergedTri.Index2].X + mVertices[submergedTri.Index3].X) / 3,
    (mVertices[submergedTri.Index1].Y + mVertices[submergedTri.Index2].Y + mVertices[submergedTri.Index3].Y) / 3,
    (mVertices[submergedTri.Index1].Z + mVertices[submergedTri.Index2].Z + mVertices[submergedTri.Index3].Z) / 3);

mMeshComponent->AddImpulseAtLocation(hydrostaticForce, centroid);
```

Since this does not apply the impulse immediately but merely adds to the accumulated value, it's a relatively cheap operation, and hence we do this for each triangle.

Application of the hydrostatic forces results in the following: (aaaaand drumroll)

As can be seen, the hydrostatic forces are too wild. Nobody is fishing in that boat anytime soon; it'll be too messy. Hydrodynamic forces need to be applied to the boat in order to get the necessary damping to make it behave in a stable manner. Also, we see an extra torque being applied to the boat; this may be a result of the force not acting at the true centre of pressure. That is, applying the force at the triangle's centroid may be slightly off.
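Stripped of the engine types, the per-triangle computation above is easy to state on its own. Here is a language-neutral sketch in Python using the $\overrightarrow F = -\rho g h_{center} \overrightarrow n$ form directly (the density and gravity constants and all names are my own, not from the post):

```python
import math

RHO = 1000.0   # assumed fluid density, kg/m^3
G = 9.81       # assumed gravitational acceleration, m/s^2

def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def hydrostatic_force(v1, v2, v3, depths):
    """Force on one fully submerged triangle, and the centroid to apply it at.

    depths holds each vertex's depth below the surface (positive down).
    """
    h_center = sum(depths) / 3.0                       # average vertex depth
    n = cross(tuple(b - a for a, b in zip(v1, v2)),
              tuple(c - a for a, c in zip(v1, v3)))    # face normal from two edges
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    n = tuple(c / length for c in n)                   # normalise, like GetSafeNormal
    force = tuple(-RHO * G * h_center * c for c in n)  # F = -rho * g * h_center * n
    centroid = tuple((a + b + c) / 3.0 for a, b, c in zip(v1, v2, v3))
    return force, centroid
```

Summing the returned forces over all submerged triangles, each applied at its centroid, gives the total buoyancy.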
# How to abstract over function arity in Lean and Coq? Given types $$A, B$$ I would like to express the type of all functions $$f$$ for which there exists an $$n \in ℕ$$ such that $$f$$ has type $$A^n \to B$$. And possibly in such a way that for $$a_1, \dots, a_n : A$$ the expression $$f a_1 \dots a_n$$ is well-formed and of type $$B$$. Is this possible to express using implicit coercions or so? Such a construction would help in formalising universal algebra. I know of this approach by Andrej Bauer, which makes the theory relatively easy to use, but the implementations are clunky. Edit: To clarify. I'd like to use such a construction to formalise universal algebra and apply the constructions and theorems of this theory to some concrete varieties (like boolean algebras, lattices, groups). Usually in type theory, the operations of such algebras (e.g. groups) have types like $$A \to A \to A$$, when they are considered on their own. To have a good interaction of the formalisation of universal algebra with these definitions, it would be necessary that the operations are defined in a similar way. Otherwise one has to transfer between (fin 2 -> A) -> A and A -> A -> A. • In Lean, A^n is usually written as fin n -> A; does this work for your purposes? Mar 14 at 9:46 • You can then also maybe add a coercion from (fin n.succ -> A) -> B to A -> (fin n -> A) -> B, but I'm tempted to say that may be a bit annoying. Mar 14 at 9:47 • The approach you link to serves a different purpose, namely to formalize the theory of $n$-ary operations, not to actually use them in a formalization. Mar 14 at 11:39 • There's a key piece of prior work on this in Agda by Guillaume Allais, who discusses exactly what features are required from the type checker to make usage of such functions ergonomic. pureportal.strath.ac.uk/en/publications/… Mar 14 at 15:13 • @8bc3457f: I elaborated my comment, but it took the form of an answer.
Mar 14 at 22:03 I would like to focus on the following part of the question: I'd like to use such a construction to formalise universal algebra and apply the constructions and theorems of this theory to some concrete varieties (like boolean algebras, lattices, groups). Let us forget about formalization for a moment and recall how universal algebra is done mathematically. We shall use an unusually high level of precision to expose a piece of invisible mathematics that is making formalization difficult. Given a number $$n \in \mathbb{N}$$, let $$[n] = \{k \in \mathbb{N} \mid k < n\}$$ be the set of numbers smaller than it. A signature $$\Sigma = (n, a)$$ is a number $$n \in \mathbb{N}$$ with a map $$a : [n] \to \mathbb{N}$$. Intuitively $$\Sigma$$ describes an algebraic structure with $$n$$ operations, where the $$i$$-th operation has arity $$a(i)$$. Remark: One would traditionally write $$\Sigma$$ as an $$n$$-tuple $$(a(0), \ldots, a(n-1))$$ of numbers, but we want to avoid ellipsis $$\ldots$$ and keep the level of precision high. Let us continue. A $$\Sigma$$-structure $$(S, f)$$ is a set $$S$$ with a map $$f : \prod_{i \in [n]} S^{[a(i)]} \to S$$. Thus for each $$i \in [n]$$ we have a map $$f_i : S^{[a(i)]} \to S$$. This is a perfectly good general definition that works well when we want to prove theorems about all structures for an arbitrary signature $$\Sigma$$. However, taking the definition literally and applying it directly to specific examples results in a great deal of clumsiness. Here's an example. Example 1: Consider the signature $$\Sigma = (3, \lambda i. i)$$. Let $$G = ([7], f)$$ be the $$\Sigma$$-structure with $$f : \prod_{i \in [3]} [7]^{[i]} \to [7]$$ defined by \begin{align*} f_0(u) &= 0, \\ f_1(u) &= (7 - u(0)) \mathbin{\mathrm{mod}} 7, \\ f_2(u) &= (u(0) + u(1)) \mathbin{\mathrm{mod}} 7. \end{align*} If you squint and think a bit, you will recognize that $$G$$ is a convoluted way of defining a cyclic group of order 7. Normal people define it as follows.
Example 2: Define $$\mathbb{Z}_7 = ([7], e, i, m)$$ where $$e = 0$$, $$i : [7] \to [7]$$ is defined by $$i(k) = (7 - k) \mathbin{\mathrm{mod}} 7$$ and $$m : [7] \times [7] \to [7]$$ is defined by $$m(k,l) = (k + l) \mathbin{\mathrm{mod}} 7$$. Examples 1 and 2 both define "essentially the same" structure. More precisely, there is an isomorphism $$\textstyle \left(\prod_{i \in [3]} [7]^{[i]} \to [7]\right) \cong [7] \times [7]^{[7]} \times [7]^{[7] \times [7]}$$ which can be used to translate between $$G$$ and $$\mathbb{Z}_7$$. However, you will not be able to find a book on universal algebra which does so explicitly. The formal difference between $$G$$ and $$\mathbb{Z}_7$$ is considered inessential, and identification of $$G$$ and $$\mathbb{Z}_7$$ convenient and harmless. We may put back on our formalization hats. All of the ingredients above can be formalized in a straightforward way, except the identification of $$G$$ and $$\mathbb{Z}_7$$. The damn machine demands mathematical precision. So we are in an unfortunate situation: we know how to formalize both a general theory of universal algebra and concrete examples of algebraic structures, but the two parts do not fit together easily. I have no solution to offer, I just wanted to explain that this was a problem in formalization of invisible mathematics. If anyone knows a good solution, I would be interested to hear it. • When defining your notion of structure, you fixed a way to "decode" a signature into a type, namely $\prod_{i \in [n]} S^{[a(i)]} \to S$, and you then say you would rather have another type (isomorphic to the first) as decoding. But you can define that other type as the decoding instead (it will look like the nargs in my answer, defined by induction on the signature and then on each arity). Mar 16 at 10:58 • Whether this is a good idea is not entirely clear to me: it might make the universal algebra more painful than necessary.
But you can for sure show at the universal algebra level that the two decodings are always isomorphic, and so you can turn an instance of the second decoding into one of the first if you wish to apply universal algebra theorems that are easier to show on that side. Mar 16 at 10:58 • I am pretty sure the other type will make proofs parameterized by general signatures more painful. Mar 16 at 12:37 • The claim that "you can turn an instance of the second decoding into one of the first if you wish" is repeated often and convincingly – but nobody ever does this, and everybody just imagines that it can be done. This is the crux of my observation, namely that the claim is just a myth. Mar 16 at 12:38 • I think that part of the problem is actually the next step, i.e. going from an iterated product of functions to a primitive notion like a record with multiple fields (one for each operation). Usually, language support for bundling things is at that level, and you do not benefit from it if you merely stay at the level of (iterated) products. Mar 16 at 18:52

You do not need coercions; instead you can define exactly the type you want: this is the magic of dependency. Here in Coq:

```coq
Require Import NArith.

Fixpoint nargs (A B : Type) (n : nat) :=
  match n with
  | 0 => B
  | S n' => A -> nargs A B n'
  end.

(* The type is exactly the one you expect *)
Eval cbn in nargs nat bool 3.

(* You can directly apply f, without any coercions *)
Check (fun f : nargs nat bool 3 => f 0 1 2).

(* You do not need the integer to be fully concrete, only to have enough successors *)
Check (fun (n : nat) (f : nargs nat bool (3+n)) => f 0 1 2).
```

Note that here you need to give the integer n explicitly. However, if you wish to, you can package things up:

```coq
Definition nargs' (A B : Type) := { n : nat & nargs A B n}.
```

But to use such an nargs' you will still need to unbundle it and look at the integer.
This is only natural, though: $f a_1 \dots a_n$ only makes sense if $f$ accepts at least $n$ arguments, and the type system needs to enforce this. You can even turn this idea into a function:

```coq
Definition enough {A B : Type} {m n : nat} (f : nargs A B n) (l : m <= n) :
  nargs A B (m + (n - m)) :=
  match (Minus.le_plus_minus m n l) with
  | eq_refl => f
  end.

(* enough lets you use an equality to expose enough constructors to enable application *)
Check (fun (n : nat) (f : nargs nat bool n) (l : 3 <= n) => (enough f l) 0 1 2).
```

• Thanks. I'll try how this works out. Mar 14 at 14:13
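For comparison, in a dynamically typed language the same idea exists only at the value level. A rough Python analogue of `nargs` via currying (illustrative only; unlike the Coq version, nothing here checks the arity statically):

```python
def nargs(n, f):
    """Curry f, which expects an n-tuple, into a chain of n one-argument calls."""
    def curried(collected):
        if len(collected) == n:
            return f(tuple(collected))
        return lambda a: curried(collected + [a])
    return curried([])

add3 = nargs(3, sum)   # behaves like a value of the type `nargs int int 3`
```

Here `add3(1)(2)(3)` evaluates to 6, and `nargs(0, f)` simply returns `f(())`; a wrong number of applications fails only at run time, which is exactly the gap the dependently typed `nargs` closes.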
Find the linear approximation of g(x) = ⁸√(1+x⁸) at a=0. Use the linear approximation to approximate the value of ⁸√95 and ⁸√1.1

y=?

I read the function as ⁸√(1+x⁸)=(1+x⁸)^⅛. Writing u=x⁸ and h(u)=(1+u)^⅛, the derivative is dh/du=((1+u)^-⅞)/8. When u=0, this is ⅛, and is the gradient for the linear equation; also h(0)=1. We can write this linear equation as L(u)=u/8+c, where c is a constant found by plugging in (0,1), where L(0)=h(0)=1. So c=1. y=L(u)=u/8+1 is an approximation to h(u). Replacing u by x⁸ gives y=L(x⁸)=x⁸/8+1 as an approximation to g(x). (We can also write ∆h=∆u((1+u)^-⅞)/8, where ∆u is a small change in u which creates a small change ∆h in h.)

To find ⁸√1.1=⁸√(1+0.1) we can use the linear approximation with u=0.1: y=0.1/8+1=1.0125, or 1.01 to 2 decimal places.

To find ⁸√95 we need to write 95 as p⁸(1+q/p⁸) (p and q are constants), so that ⁸√95=p·⁸√(1+q/p⁸) and y=q/(8p⁸)+1 is the approximation to the second factor; then we multiply this by p to get py as the final approximation. If p=2 and q=-161, we have 256(1-161/256)=95, so y=-161/2048+1=0.9214 approx. And the approximation is 2×0.9214=1.84 to 2 decimal places.
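The two estimates can be checked against the exact eighth roots in Python (a sketch; the helper name is mine). Note that the second estimate is much rougher, because q/p⁸ = -161/256 is not small:

```python
def approx_eighth_root(v, p):
    """Approximate v ** (1/8) by writing v = p**8 * (1 + q/p**8)
    and applying the linearisation (1+u)**(1/8) ~= 1 + u/8."""
    q = v - p ** 8
    return p * (1 + q / (8 * p ** 8))

approx_11 = approx_eighth_root(1.1, 1)   # 1 + 0.1/8 = 1.0125
approx_95 = approx_eighth_root(95, 2)    # 2 * (1 - 161/2048)
exact_11 = 1.1 ** 0.125                  # about 1.0120
exact_95 = 95 ** 0.125                   # about 1.7669
```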
# 14.3: Partial Differentiation

When we first considered what the derivative of a vector function might mean, there was really not much difficulty in understanding either how such a thing might be computed or what it might measure. In the case of functions of two variables, things are a bit harder to understand. If we think of a function of two variables in terms of its graph, a surface, there is a more-or-less obvious derivative-like question we might ask, namely, how "steep" is the surface. But it's not clear that this has a simple answer, nor how we might proceed. We will start with what seem to be very small steps toward the goal; surprisingly, it turns out that these simple ideas hold the keys to a more general understanding.

### Contributors

• Integrated by Justin Marshall.
# Find the distance from the eye at which a coin of 2 cm Question: Find the distance from the eye at which a coin of 2 cm diameter should be held so as to conceal the full moon whose angular diameter is 31'. Solution: Let PQ be the diameter of the coin and E be the eye of the observer. Also, let the coin be kept at a distance r from the eye of the observer to hide the moon completely. Now, $\theta=31^{\prime}=\left(\frac{31}{60}\right)^{\circ}=\left(\frac{31}{60} \times \frac{\pi}{180}\right)$ radians $\theta=\frac{\text { Arc }}{\text { Radius }}$ $\Rightarrow \frac{31}{60} \times \frac{\pi}{180}=\frac{2}{\text { Radius }}$ $\Rightarrow$ Radius $=\frac{180 \times 60 \times 2 \times 7}{31 \times 22}$ $=221.7 \mathrm{~cm}$ or $2.217 \mathrm{~m}$
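The computation is easy to verify numerically. Using math.pi instead of the 22/7 approximation used in the solution shifts the answer from 221.7 cm to roughly 221.8 cm:

```python
import math

# Angular diameter of the moon: 31 minutes of arc, converted to radians.
theta = (31 / 60) * (math.pi / 180)

# theta = arc / radius; the coin's 2 cm diameter plays the role of the arc,
# so the eye-to-coin distance is radius = 2 / theta.
distance_cm = 2 / theta
print(round(distance_cm, 1))
```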
# What is the purpose of using NIL for representing null nodes? In my Algorithms and Data Structures course, professors, slides and the book (Introduction to Algorithms, 3rd edition) have been using the word NIL to denote for example a child of a node (in a tree) that does not exist. Once, during a lecture, instead of saying NIL, my classmate said null, and the professor corrected him, and I don't understand why professors emphasise this word. Is there a reason why people use the word NIL instead of null, or none, or any other word? Does NIL have some particular meaning that the others do not have? Is there some historical reason? Note that I have seen also a few places around the web where, e.g., the word null was used instead of NIL, but usually this last one is used. As far as I'm concerned, null, nil, none and nothing are common names for the same concept: a value which represents the “absence of a value”, and which is present in many different types (called nullable types). This value is typically used where a value is normally present, but may be omitted, for example an optional parameter. Different programming languages implement this differently, and some languages might not have any such concept. In languages with pointers, it's a null pointer. In many object-oriented languages, null is not an object: calling any method on it is an error. To give a few examples: • In Lisp, nil is commonly used to stand for the absence of a value. Unlike most other languages, nil has structure — it's a symbol whose name is "NIL". It's also the empty list (because a list should be a cons cell, but sometimes there is no cons cell because the list is empty). Whether it's implemented by a null pointer under the hood, or as a symbol like any other, is implementation-dependent. • In Pascal, nil is a pointer value (valid in any pointer type) that may not be dereferenced. • In C and C++, any pointer type includes a NULL value which is distinct from any pointer to a valid object. 
• In Smalltalk, nil is an object with no method defined. • In Java and in C#, null is a value of any object type. Any attempt to access a field or method of null triggers an exception. • In Perl, undef is distinct from any other scalar value and used throughout the language and library to indicate the absence of a “real” value. • In Python, None is distinct from any other value and used throughout the language and library to indicate the absence of a “real” value. • In ML (SML, OCaml), None is a value of the type 'a option for every type 'a; that type contains None and Some x for any x of type 'a. • In Haskell, the similar concept uses the names Nothing and Just x for the values and Maybe a for the type. In algorithm presentations, which name is used tends to stem from the background of the presenter or the language that is used in code examples. In semantics presentations, different names may be used to refer to e.g. the NULL identifier which denotes a pointer constant in the language, and the $\mathsf{nil}$ value in the semantics. I don't think there's any standard naming scheme, and some presentations leave it up to a font difference, or don't go into concrete syntax at all. It's possible that your lecturer wants to use the word null for a null pointer constant in the programming language used in the course (Java or C#?), and NIL to denote the absence of a node in some data structures, which may or may not be implemented as a null pointer constant (for example, as seen above, in Lisp, NIL is often not implemented as a null pointer). This distinction would be relevant when discussing implementation techniques for data structures. When discussing the data structures themselves, the null-pointer-constant concept is irrelevant, only the not-equal-to-any-other-value concept matters. There is no standard naming scheme. Another lecturer or textbook could use different names. • Two more examples.
Objective-C has both NULL (for C compatibility) and nil; calling methods on nil is a no-op. JavaScript has both null (a value representing nothing) and undefined (the value is not even set). – 200_success Jul 15 '15 at 19:42 • "In object-oriented languages, null is not an object: calling any method on it is an error." – This is not true for all languages, including some of the languages you listed. In Ruby and Smalltalk, nil is an object like any other, it's an instance of NilClass, there is no other concept of "null-ness", and in particular, there is no null-pointer. In Scala, Nil is very much distinct from null, null is kinda-sorta like a null-pointer, however, it actually has a type (Null), Nil is a regular old object, the singleton instance of the EmptyList class if you will, and there is also … – Jörg W Mittag Jul 15 '15 at 21:07 • Nothing which is the bottom type and has no instance, and Unit which is the type of subroutines that don't return a value and has the singleton instance (). null somehow carries with it the notion of "null pointer" or "null reference", not because that is somehow inherent in the term but simply because of the ubiquity of languages like C, C++, D, Java, C#, that use it in this manner. NIL does not carry this connotation, even though it is actually used that way e.g. in Pascal, Modula-2 and Oberon. In Ruby, nil has many useful methods: nil.to_i # => 0, nil.to_s # => '', … – Jörg W Mittag Jul 15 '15 at 21:12 • nil.to_a # => [], nil.to_h # => {}, nil.to_f # => 0.0, nil.inspect # => 'nil', and so on. You can see the full list here. – Jörg W Mittag Jul 15 '15 at 21:16 I believe the reason for the use of both nil and null is that the former is primarily a noun, and the latter primarily an adjective (I checked on the web and in my paper dictionary: American Heritage 1992). Regarding meaning and history, NIL is a contraction from Latin "nihil" which means "nothing".
To my knowledge, the use of the name nil to denote a null pointer was introduced with the programming language Lisp (1958). A null pointer is a pointer value that is supposed to point to nothing, and should thus not be dereferenced. In most cases, pointers are simply memory addresses. Any variable (i.e., any location) that is intended to contain such a pointer will always contain some configuration of bits, and any such configuration can be read as a memory address. Hence it is often the case that the value nil will be the address of a memory area that is forbidden to the program, thus causing some form of failure (possibly interrupt) if the program attempts to dereference nil, which can only be an error. Having a unique predefined standard value to play this role is essential in languages using pointers explicitly, since it is important to be able to test whether a pointer is actually pointing to some memory location, or not. Typically, in Lisp, a list was built as a succession of "cons" pairs containing a "car" pointer to a list element and a "cdr" pointer to the next pair. In the last pair of the list, the second pointer was nil. This corresponds to the recursive definition of a list as either an empty list, or a list element concatenated to a list. Hence a list with no element was represented by nil. This empty list happens to be the identity of the list monoid. Since lists can be used to represent sets, the empty set can in that case also be represented by nil. Thus nil was historically a special pointer value, but came to be understood as a special identity value for other more abstract domains, such as lists or sets. A pointer equal to nil was a null pointer, null being an adjective rather than a substantive (i.e. a noun). The coordinated use of both words, as adjective and noun, is quite consistent with other practice. The qualifier null is often used for the zero of an algebraic structure, such as the identity of a monoid: the null element.
Lists form a monoid, where the value nil is the identity. The same is true of sets (though they form an algebra with many more properties). One says similarly that an integer is null when it is zero. There are lots of variations on the use of these words and others, such as none, depending on authors and idiosyncrasies of programming languages. The two major connotations are, as explained above: • as a standard "undefined value", actually representing the absence of any usable value. • as identity value of some domain This shows that it is not quite accurate to assert that NIL is "a value which represents the absence of a value", as done by Gilles in the accepted answer. It depends on the language and its uses. The programming language LISP probably introduced NIL into programming terminology 55 years ago. In LISP, NIL is the empty list, and can equivalently be noted () which is the natural representation of the empty list. It does not represent the absence of a value. It is sometimes used as a place-holder for missing values, though that is often to be avoided precisely because the empty list is a value. What stands for a missing value in a structure is any arbitrary object, chosen by the programmer, that cannot be confused with acceptable values. The two concepts are rather different, even though we have shown above that they can be related. It might be interesting to have a more detailed taxonomy of the uses of the terminology enumerated in Gilles' answer, to see whether the uses of each of these words are oriented more towards one connotation or the other. Names are no more than what they are assigned to mean in a given context, by whoever is defining the discourse. Some uses are more common, more natural, or more consistent, but one should always check for definitions and make sure what meaning was intended in each context. And one should not always expect terminology to have been chosen with taste or consistency.
NIL is an object with a value that tells the programmer the object is not valid. It's particularly useful in C++ where NULL is defined as 0. If you de-reference NULL in C++, you get undefined behavior. If you de-reference NIL (a pointer to an empty object that you defined), you get an object that you can tell is beyond the end of your data structure. It's great for preventing catastrophic program failures and detecting errors. You can use NIL in cases like doubly-linked lists, having it be the beginning and end of the list to keep track of both the head and tail, and make sure that ->next and ->prev pointers never de-reference NULL. • As far as I can see, this is simply incorrect. There is no "NIL" in C++. – David Richerby Jul 15 '15 at 17:54 • I think you misinterpreted my answer. The link you provided has no correlation with my answer. I proposed that nil is a user-defined object, defined on a per-class basis, to point to an empty (but valid) object of that class. – abastion Jul 15 '15 at 18:02 • The way you write "NIL is an object" (my emphasis) rather than, say "NIL can be defined as an object" makes it look like you're describing a language feature, rather than a style of programming. However, if your answer is purely about a programming style, it's not really on topic, here. – David Richerby Jul 15 '15 at 19:24
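The pattern this answer describes — a user-defined, dereferenceable NIL object rather than a null pointer — can be sketched in any language. Here is a minimal Python transliteration (the names `Node`, `NIL`, and `insert_after` are ours, not from the answer): NIL is a sentinel node anchoring a circular doubly-linked list, so `next`/`prev` never hold a null reference.

```python
# Sketch of the "NIL as sentinel object" idea from the answer above.
class Node:
    def __init__(self, value=None):
        self.value = value
        # A fresh node points to itself, so links are never None.
        self.prev = self.next = self

NIL = Node()  # sentinel: serves as both head and tail anchor of the list

def insert_after(node, value):
    """Splice a new node in after `node`; no None checks are ever needed."""
    new = Node(value)
    new.prev, new.next = node, node.next
    node.next.prev = new
    node.next = new
    return new

insert_after(NIL, 1)
insert_after(NIL.next, 2)
# NIL.next is the head, NIL.prev the tail; walking off either end
# lands on the valid NIL object instead of dereferencing null.
assert NIL.next.value == 1 and NIL.prev.value == 2
```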
# Henstock–Kurzweil integral

In mathematics, the Henstock–Kurzweil integral, also known as the Denjoy integral (pronounced [dɑ̃ʒwa]) and the Perron integral, is a possible definition of the integral of a function. It is a generalisation of the Riemann integral which in some situations is more useful than the Lebesgue integral. This integral was first defined by Arnaud Denjoy (1912). Denjoy was interested in a definition that would allow one to integrate functions like $\displaystyle f(x)=\frac{1}{x}\sin\left(\frac{1}{x^3}\right).$ This function has a singularity at 0, and is not Lebesgue integrable. However, it seems natural to calculate its integral except over $\displaystyle [-\epsilon,\epsilon]$ and then let ε → 0 (this is called principal value integration or conditional integrability). In fact, the definitions of Denjoy and Lebesgue agree completely on positive functions. Trying to create a general theory, Denjoy used transfinite induction over the possible types of singularities, which made the definition quite complicated. Other definitions were given by Nikolai Luzin (using variations on the notions of absolute continuity), and by Oskar Perron, who was interested in continuous major and minor functions. It took a while to understand that the Perron and Denjoy integrals are actually identical. Later, in 1957, the Czech mathematician Jaroslav Kurzweil discovered a new definition of this integral, elegantly similar in nature to Riemann's original definition, which he named the gauge integral; the theory was developed by Ralph Henstock. The simplicity of Kurzweil's definition made some educators advocate that this integral should replace the Riemann integral in introductory calculus courses, but this idea never became popular. Another important property of the Henstock integral is that every function which is the derivative of some other function is gauge integrable, so a very strong form of the fundamental theorem of calculus holds. 
In particular a non-trivial corollary applies to the Lebesgue integral: if a function f is differentiable everywhere and its derivative is Lebesgue integrable, then f is the integral of its derivative. ## Definition Henstock's definition is as follows: Given a tagged partition P of [a, b], say $\displaystyle a = u_0 < u_1 < \ldots < u_n = b, \ \ v_i \in [u_{i-1}, u_i]$ and a positive function $\displaystyle \delta : [a, b] \to (0, \infty)$ , which we call a gauge, we say P is $\displaystyle \delta$ -fine if $\displaystyle \forall i \ \ u_i - u_{i-1} < \delta (v_i)$ . For a tagged partition P and a function $\displaystyle f : [a, b] \to \mathbb{R}$ we define the Riemann sum to be $\displaystyle \sum_P f = \sum_{i = 1}^n (u_i - u_{i-1}) f(v_i)$ Given a function $\displaystyle f : [a, b] \to \mathbb{R}$ we now define a number I to be the gauge integral of f if for every $\displaystyle \epsilon > 0$ there exists a gauge $\displaystyle \delta$ such that whenever P is $\displaystyle \delta$ -fine, we have $\displaystyle \left| \sum_P f - I \right| < \epsilon.$ The Riemann integral can be regarded as the special case where we only allow constant gauges. Note that due to Cousin's lemma, which says that for every gauge $\displaystyle \delta$ there is a $\displaystyle \delta$ -fine partition, this condition cannot be satisfied vacuously.
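Since the definition above reduces to Riemann's when the gauge is constant, the two ingredients of the definition (checking that a tagged partition is $\delta$-fine, and evaluating $\sum_P f$) are easy to sketch numerically. The following Python sketch is illustrative only; the function names (`is_delta_fine`, `riemann_sum`, `uniform_tagged_partition`) are ours, not from the article:

```python
# A tagged partition is represented as a list of triples (u0, u1, v)
# with u0 = u_{i-1}, u1 = u_i and tag v = v_i in [u0, u1].

def is_delta_fine(partition, delta):
    """P is delta-fine iff u_i - u_{i-1} < delta(v_i) for every i."""
    return all(u1 - u0 < delta(v) for (u0, u1, v) in partition)

def riemann_sum(f, partition):
    """Sum_P f = sum of (u_i - u_{i-1}) * f(v_i) over the tagged intervals."""
    return sum((u1 - u0) * f(v) for (u0, u1, v) in partition)

def uniform_tagged_partition(a, b, n):
    """n equal subintervals of [a, b], each tagged at its midpoint."""
    h = (b - a) / n
    return [(a + i * h, a + (i + 1) * h, a + (i + 0.5) * h) for i in range(n)]

# Example: f(x) = x^2 on [0, 1] with the constant gauge delta = 0.01
# (the Riemann special case); the exact integral is 1/3.
P = uniform_tagged_partition(0.0, 1.0, 1000)
assert is_delta_fine(P, lambda v: 0.01)
approx = riemann_sum(lambda x: x * x, P)
```

A genuinely non-constant gauge (e.g. one that shrinks near a singularity) is what gives the Henstock–Kurzweil integral its extra power, but the mechanics of forming $\sum_P f$ are the same.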
# Geometry, Spectral Theory, Groups, and Dynamics Publisher: American Mathematical Society Number of Pages: 275 Price: 79.00 ISBN: 0-8218-3710-9 Wednesday, October 12, 2005 Michael Entov, Yehuda Pinchover, and Michah Sageev, editors Series: Contemporary Mathematics 387 Publication Date: 2005 Format: Paperback Category: Festschrift • P. Buser -- On the mathematical work of Robert Brooks • D. Blanc -- Moduli spaces of homotopy theory • R. Brooks and M. Monastyrsky -- K-regular graphs and Hecke surfaces • P. Buser and K.-D. Semmler -- Isospectrality and spectral rigidity of surfaces with small topology • I. Chavel -- Topics in isoperimetric inequalities • B. Farb and S. Weinberger -- Hidden symmetries and arithmetic manifolds • H. M. Farkas -- Variants of the $3N+1$ conjecture and multiplicative semigroups • U. Frauenfelder, V. Ginzburg, and F. Schlenk -- Energy capacity inequalities via an action selector • K. Fujiwara -- On non-bounded generation of discrete subgroups in rank-1 Lie group • C. Gordon, P. Perry, and D. Schueth -- Isospectral and isoscattering manifolds: A survey of techniques and examples • M. G. Katz and C. Lescop -- Filling area conjecture, optimal systolic inequalities, and the fiber class in abelian covers • E. Leichtnam -- An invitation to Deninger's work on arithmetic zeta functions • A. Lubotzky -- Some more non-arithmetic rigid groups • R. G. Pinsky -- On domain monotonicity for the principal eigenvalue of the Laplacian with a mixed Dirichlet-Neumann boundary condition • B. Simon -- The sharp form of the strong Szegö theorem
# The sum of independent lognormal random variables appears lognormal?

I'm trying to understand why the sum of two (or more) lognormal random variables approaches a lognormal distribution as you increase the number of observations. I've looked online and not found any results concerning this. Clearly if $X$ and $Y$ are independent lognormal variables, then by properties of exponents and gaussian random variables, $X \times Y$ is also lognormal. However, there is no reason to suggest that $X+Y$ is also lognormal.

HOWEVER

If you generate two independent lognormal random variables $X$ and $Y$, and let $Z=X+Y$, and repeat this process many many times, the distribution of $Z$ appears lognormal. It even appears to get closer to a lognormal distribution as you increase the number of observations. For example: After generating 1 million pairs, the distribution of the natural log of Z is given in the histogram below. This very clearly resembles a normal distribution, suggesting $Z$ is indeed lognormal. Does anyone have any insight or references to texts that may be of use in understanding this?

• Are you assuming equal variances for $X$ and $Y$? If you simulate xx <- rlnorm(1e6,0,3); yy <- rlnorm(1e6,0,1), then the log of the sum does not look very normal any more. – Stephan Kolassa Oct 5 '16 at 7:53
• I did assume equal variances - I'll try another with unequal variance and see what I end up with. – Patty Oct 5 '16 at 8:01
• With variances of 2 and 3, I got something that still looked a bit normal, albeit with what looks like a tiny tiny skew. – Patty Oct 5 '16 at 8:05
• Looking through previous questions may be helpful. Here and here are potentially useful papers. Good luck! – Stephan Kolassa Oct 5 '16 at 9:51

This approximate lognormality of sums of lognormals is a well-known rule of thumb; it's mentioned in numerous papers -- and in a number of posts on site. 
A lognormal approximation for a sum of lognormals by matching the first two moments is sometimes called a Fenton-Wilkinson approximation. You may find this document by Dufresne useful (available here, or here). I have also in the past sometimes pointed people to Mitchell's paper: Mitchell, R.L. (1968), "Permanence of the log-normal distribution." J. Optical Society of America. 58: 1267-1272. But that's now covered in the references of Dufresne. But while it holds in a fairly wide set of not-too-skew cases, it doesn't hold in general, not even for i.i.d. lognormals, not even as $$n$$ gets quite large. Here's a histogram of 1000 simulated values, each the log of the sum of fifty thousand i.i.d. lognormals: As you see ... the log is quite skew, so the sum is not very close to lognormal. Indeed, this example would also count as a useful example for people thinking (because of the central limit theorem) that some $$n$$ in the hundreds or thousands will give very close to normal averages; this one is so skew that its log is considerably right skew, but the central limit theorem nevertheless applies here; an $$n$$ of many millions* would be necessary before it begins to look anywhere near symmetric.

* I have not tried to figure out how many, but because of the way that skewness of sums (equivalently, averages) behaves, a few million will clearly be insufficient.

Since more details were requested in comments, you can get a similar-looking result to the example with the following code, which produces 1000 replicates of the sum of 50,000 lognormal random variables with scale parameter $$\mu=0$$ and shape parameter $$\sigma=4$$: res <- replicate(1000,sum(rlnorm(50000,0,4))) hist(log(res),n=100) (I have since tried $$n=10^6$$. Its log is still heavily right skew.)

• Can you please add the parameters (or code snippet) used to make the histogram in the figure? – altroware Sep 21 '18 at 11:34
• That was two years ago; I don't recall what the lognormal parameters were. 
But let us apply simple logic. You wouldn't need to worry about the $\mu$ parameter, since it only affects the values on the x-axis scale, not the shape (something convenient like $\mu=0$ would be used). So that leaves the $\sigma$ parameter as the only one with any impact on the shape. Assuming $\mu=0$ and working back roughly from the scale in the histogram above, we get that $\sigma$ must be in the ballpark of $4$ or so (NB beware how skew this is). And just trying $4$ gives a pretty similar appearance to the above. – Glen_b Sep 22 '18 at 0:03
• So: res <- replicate(1000,sum(rlnorm(50000,0,4))); hist(log(res),n=100) ... if you try it a few times you'll see the scale jumps around a little but the general picture is about right. Note that the population moment-skewness of the component lognormals is $26.5$ billion -- the population mean will exceed almost every generated value in most of your samples. – Glen_b Sep 22 '18 at 0:04

It's probably too late, but I've found the following paper on the sums of lognormal distributions, which covers the topic. It's not lognormal, but something quite different and difficult to work with. The paper by Dufresne advised above (2009) and this one from 2004, together with this useful paper, cover the history of the approximations of the sum of log-normal distributions and give some mathematical results. The problem is that all the approximations cited there are found by supposing from the start that you are in a case in which the sum of log-normal distributions is still log-normal. Then you can compute the $\mu$ and the $\sigma$ of the global sum in some approximated way. But this doesn't give you the conditions that you have to fulfill if you want the sum to still be log-normal. Maybe [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6029348) gives you, in a particular case, a kind of central limit theorem for the sum of log-normals, but there is still a lack of generality. 
Anyway, the example given by Glen_b is not really appropriate, because it's a case where you can easily apply the classic central limit theorem, and of course in that case the sum of log-normals is Gaussian. But it is true, as said in the paper cited just above, that even in the limit $n\to \infty$ you can have a log-normal sum (for example if the variables are correlated or sufficiently non-i.i.d.).

• You say that in my example "you can easily apply the classic central limit theorem" but if you understand what the histogram is showing, clearly you can't use the CLT to argue that a normal approximation applies at n=50000 for this case; the sum is so right skew that its log is still heavily right skew. The point of the example was that it's even too skew to approximate by a lognormal (or that histogram would look very close to symmetric). A less skew approximation (such as the normal) would be *worse*. – Glen_b Aug 19 '18 at 23:32
• I agree, but probably in your example either numerical convergence of the sample is not reached (1000 trials are too few) or statistical convergence is not reached (50,000 addends are too few); but in the limit to infinity the distribution should be Gaussian, since we are in CLT conditions, isn't it? – Mimì Aug 21 '18 at 10:30
• The 1000 samples is more than sufficient to discern the shape of the distribution of the sum -- the number of samples we take doesn't alter the shape, just how "clearly" we see it. That clear skewness isn't going to go away if we take a larger sample, it's just going to get smoother looking. Yes, 50,000 is too few for the sum to look normal -- it's so right skew that the log still looks very skew. It may well require many millions before it looks reasonably normal. Yes, the CLT definitely applies; it's iid and the variance is finite, so standardized means must eventually approach normality. 
– Glen_b Aug 21 '18 at 11:13

The lognormal law is widely present in physical phenomena; sums of variables with this kind of distribution are needed, for instance, to study any scaling behavior of a system. I know this article (very long and very strong; the beginning can be understood even if you are not a specialist!), "Broad distribution effects in sums of lognormal random variables", published in 2003 (the European Physical Journal B - Condensed Matter and Complex Systems 32, 513) and available at https://arxiv.org/pdf/physics/0211065.pdf .
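Returning to the Fenton-Wilkinson approximation mentioned in Glen_b's answer: the moment-matching step is easy to write down explicitly. Here is a minimal sketch in Python (the thread above uses R; the function name `fenton_wilkinson` is ours): it matches the mean and variance of a sum of n i.i.d. lognormals to a single lognormal.

```python
import math

def fenton_wilkinson(n, mu, sigma):
    """Parameters (mu_Z, sigma_Z) of the lognormal whose first two moments
    match those of a sum of n i.i.d. lognormal(mu, sigma) variables."""
    # Mean and variance of the sum, from the lognormal moment formulas:
    m = n * math.exp(mu + sigma**2 / 2)
    v = n * (math.exp(sigma**2) - 1) * math.exp(2 * mu + sigma**2)
    # Invert the lognormal moment formulas for the matching distribution:
    sigma_z2 = math.log(1 + v / m**2)
    mu_z = math.log(m) - sigma_z2 / 2
    return mu_z, math.sqrt(sigma_z2)

# The two-variable case from the question (n = 2, mu = 0, sigma = 1):
mu_z, sigma_z = fenton_wilkinson(2, 0.0, 1.0)  # roughly (0.88, 0.79)
```

By construction the matched lognormal reproduces the mean and variance of the sum exactly; as discussed in the thread, it need not reproduce the shape well in very skew cases.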
Periodes et groupes de Mumford-Tate des 1-motifs [Periods and Mumford-Tate groups of 1-motives] Abstract : In this thesis we study the structure and the degeneracies of the Mumford-Tate group $MT(M)$ of a 1-motive $M$ defined over $\CC$. This group is an algebraic $\QQ\,$-group acting on the Hodge realization of $M$ and endowed with an increasing filtration $W_\bullet$. We prove that the unipotent radical of $MT(M)$, which is $W_{-1}(MT(M))$, injects into a "generalized" Heisenberg group. We then explain how to reduce to the study of the Mumford-Tate group of a direct sum of 1-motives whose torus's character group and whose lattice are both of rank 1. Next we classify and study the degeneracies of $MT(M)$, i.e. those phenomena which imply the decrement of the dimension of $MT(M)$. The generalized Grothendieck conjecture of periods ${\rm (CPG)}_K$ predicts that if $M$ is a 1-motive defined over an algebraically closed subfield $K$ of $\CC$, then ${\rm deg.transc}_{\QQ}\, K({\rm periods}(M))\geq \dim_{\QQ}MT( M_{\CC}).$ In the second part of this thesis we propose a conjecture of transcendence that we call the elliptico-toric conjecture (CET). Our main result is that (CET) is equivalent to ${\rm (CPG)}_K$ applied to 1-motives defined over $K$ of the kind $M=[ {\Bbb Z}^{r} \, {\buildrel u \over \longrightarrow} \,\prod^n_{j=1} {\cal E}_j \times {\GG}_m^s]$. (CET) implies some classical conjectures, such as Schanuel's conjecture or its elliptic analogue, but it implies new conjectures as well. All the conjectures following from (CET) are equivalent to ${\rm (CPG)}_K$ applied to well-chosen 1-motives: for example, Schanuel's conjecture is equivalent to ${\rm (CPG)}_K$ applied to 1-motives of the kind $M=[ {\Bbb Z}^{r} \, {\buildrel u \over \longrightarrow} \, {\GG}_m^s]$. Document type : Theses Mathématiques [math]. Université Pierre et Marie Curie - Paris VI, 2000. 
Language: French https://tel.archives-ouvertes.fr/tel-00001222 Contributor : Cristiana Bertolin Submitted on : Friday, March 15, 2002 - 5:04:36 PM Last modification on : Friday, March 15, 2002 - 5:04:36 PM Document(s) archived on : Tuesday, September 11, 2012 - 5:10:08 PM Identifiers • HAL Id : tel-00001222, version 1 Citation Cristiana Bertolin. Periodes et groupes de Mumford-Tate des 1-motifs. Mathématiques [math]. Université Pierre et Marie Curie - Paris VI, 2000. Français. <tel-00001222>
Shape effect on flapping dynamics of an inverted flag in a uniform flow = 균일 유동 내 역방향 깃발의 플래핑 운동에 깃발 형상이 미치는 영향 The flapping motion of inverted flags with various shapes in a uniform flow was simulated by using the immersed boundary method. The shapes of the flags were characterized in terms of the shape ratio (S = $W_T/W_L$), i.e., the ratio of the trailing edge width ($W_T$) to the leading edge width ($W_L$). To explore the effects of varying S on the flapping dynamics of inverted flags, the peak-to-peak amplitude (A/L) and the Strouhal number ($S_t$) were determined as functions of the bending rigidity (0.1 ≤ $\gamma$ ≤ 0.3) and the shape ratio (0.5 ≤ S ≤ 2). The vortical structures behind the inverted flag were visualized by using the Q-criterion to elucidate the vortex dynamics. The hydrodynamic forces exerted on each inverted flag were analyzed to find the correlation between its kinematics and vortex formation during the flapping period of the inverted flag. The strain energy ($E_s$) stored in the inverted flag and the ratio (R) of the conversion of flow kinetic energy to strain energy were also determined. Finally, we explored the effects of varying the shape ratio S' = $W_T/W_L$ while keeping the trailing edge width constant ($W_T$ = 1) instead of the area of the inverted flag. The Strouhal number is maximized at S' = 1. The conversion ratio at S' = 2 is 2.5% higher than that at S' = 1. Advisor: Sung, Hyung Jin (성형진) Publisher: KAIST (한국과학기술원) Issue Date: 2019 Identifier: 325007 Language: eng Description: Master's thesis, KAIST, Department of Mechanical Engineering, 2019.2 [iv, 26 p.] Keywords: Inverted flag; shape ratio; energy harvesting; hydrodynamic force URI: http://hdl.handle.net/10203/265880
## Ultralocality and the robustness of slow contraction to cosmic initial conditions

Anna Ijjas (Albert Einstein Institute, Max Planck Institute for Gravitational Physics) I will discuss the detailed process by which slow contraction smooths and flattens the universe, using an improved numerical relativity code that accepts initial conditions with non-perturbative deviations from homogeneity and isotropy along two independent spatial directions. Contrary to common descriptions of the early universe, I will show that the geometry first rapidly converges to an inhomogeneous, spatially-curved and anisotropic ultralocal state in which all spatial gradient contributions to the equations of motion decrease as an exponential in time to negligible values. This is followed by a second stage in which the geometry converges to a homogeneous, spatially flat and isotropic spacetime. In particular, the decay appears to follow the same history whether the entire spacetime or only parts of it are smoothed by the end of slow contraction. Meeting ID: 854 3210 3337      Password: 959078

## Observing the Inner Shadow of a Black Hole: A Direct View of the Event Horizon

Andrew Chael (Princeton University) Simulated images of a black hole surrounded by optically thin emission typically display two main features: a central brightness depression and a narrow, bright “photon ring” consisting of strongly lensed images superposed on top of the direct emission. The photon ring closely tracks a theoretical curve on the image plane corresponding to light rays that asymptote to unstably bound photon orbits around the black hole. This critical curve has a size and shape that are purely governed by the Kerr geometry; in contrast, the size, shape, and depth of the observed brightness depression all depend on the details of the emission region. For instance, images of spherical accretion models display a distinctive dark region—the “black hole shadow”—that completely fills the photon ring. 
By contrast, in models of equatorial disks extending to the black hole’s event horizon, the darkest region in the image is restricted to a much smaller area—an inner shadow—whose edge lies near the direct lensed image of the equatorial horizon. Using both semi-analytic models and general relativistic magnetohydrodynamic (GRMHD) simulations, we demonstrate that the photon ring and inner shadow may be simultaneously visible in submillimeter images of M87, where magnetically arrested disk (MAD) simulations predict that the emission arises in a thin region near the equatorial plane. We show that the relative size, shape, and centroid of the photon ring and inner shadow can be used to estimate the black hole mass and spin, breaking degeneracies in measurements of these quantities that rely on the photon ring alone. Both features may be accessible to direct observation via high-dynamic-range images with a next-generation Event Horizon Telescope.

## Neutrino transport in neutron star merger simulations

Francois Foucart (University of New Hampshire) The first observation of a neutron star merger through gravitational waves and electromagnetic (EM) signals has shown us the power of multi-messenger observations. Multiple studies based on these observations have placed useful constraints on the equation of state of dense nuclear matter, while the event itself confirmed that mergers are likely (one of) the sources of r-process elements (e.g. gold, uranium) in the Universe. Many interpretations of these observations require reliable models for kilonovae, the radioactively powered EM transients produced by mergers. The numerical simulations that typically inform kilonova models however have two important “known unknowns’’, namely the uncertainties due to approximate modeling of magnetic fields and neutrinos. 
In this talk, I will review the role of neutrinos in neutron star-neutron star mergers, as well as existing approximate transport methods used in simulations. I will also present a Monte-Carlo algorithm recently implemented in the SpEC code, used to perform the first simulations of merging neutron stars that directly attempt to solve Boltzmann’s equation of radiation transport. This scheme is purposely built to be as inexpensive as possible: the cost of a simulation remains comparable to simulations using our best existing approximate transport scheme. I will discuss the trade-offs made to reach that target, and how the scheme may be improved in the future. Related papers: arXiv:2103.16588, arXiv:2008.08089

## Long term magneto-thermal evolution of neutron stars: the roles of the Hall drift and ambipolar diffusion

José A. Pons It is generally accepted that the nonlinear, dynamical evolution of magnetic fields in the interior of neutron stars plays a key role in the explanation of the observed phenomenology (temperatures, luminosities, spin periods and derivatives). Understanding the transfer of energy between toroidal and poloidal components, or between different scales, is of particular relevance. In this talk I discuss the general aspects of the long term magnetic and thermal evolution, with particular emphasis on the role of the Hall drift and ambipolar diffusion for typical magnetar conditions.

## Bubble dynamics from holography

David Julián Mateos Solé (Universitat de Barcelona) Cosmological phase transitions proceed via the nucleation of bubbles that subsequently expand and collide. The resulting gravitational wave spectrum depends crucially on the bubble wall velocity. Microscopic calculations of this velocity are challenging even in weakly coupled theories. 
I will show how to use holography to compute the wall velocity from first principles in strongly coupled, non-Abelian, four-dimensional gauge theories. No previous knowledge of holography or string theory required.

## Comparison of Kerr and dilaton black hole shadows: Impact of non-thermal emission

Jan Röder (Institut für Theoretische Physik, Goethe-Universität Frankfurt) With the Event Horizon Telescope, a very long baseline interferometry (VLBI) array, both temporal and spatial event horizon-scale resolutions needed to observe super-massive black holes were reached for the first time. Current open questions revolve around the type of compact object in the Galactic Center, plasma dynamics around it and emission processes at play. The main goal of this thesis is to assess whether it is possible to distinguish between two spacetimes by means of synthetic imaging, under the aspect of different emission models. Extending the studies conducted in the pioneering work of Mizuno et al. 2018, general relativistic radiative transfer (GRRT) calculations are carried out on general relativistic magneto-hydrodynamics (GRMHD) simulations of a Kerr and of a non-rotating dilaton black hole. The systems are matched at the innermost stable circular orbit, and both black holes are initially surrounded by a torus in hydrostatic equilibrium with a weak poloidal magnetic field. In order to investigate the plasma dynamics, GRMHD simulations were carried out using the “Black Hole Accretion Code” (BHAC). In the literature the ratio between the temperatures of simulated ions and radiating electrons is often taken to be a constant, while in reality it is expected to depend on plasma properties. In radiative post-processing with the code “Black Hole Observations in Stationary Spacetimes” (BHOSS) the temperature ratio was therefore parametrized. 
Additionally, in the jet wall, electrons are believed to be accelerated and should therefore be modeled with non-thermal electrons. To this end, both thermal and non-thermal electron energy distribution functions were employed. Lastly, images were reconstructed from synthetic VLBI data with the “eht-imaging” Python package to study how the effects of the emission models carry over to an observational environment. The most impactful result is the effect of the parameter $R_{\rm high}$ in the temperature ratio parametrization, splitting source structures into torus- and jet-dominated configurations. Non-thermal emission turns out to be negligible at the field of view used and for the region it is applied in. Hence, given the present observational capabilities, it is unlikely that it is possible to distinguish the spacetimes in observations. The striking visual differences are due to the difference in rotation between the black holes. In synthetic VLBI images, even the difference in shadow size is lost for most configurations. The situation may be improved in the future by a better VLBI array.

## GR-Athena++: puncture evolutions on vertex-centered oct-tree AMR

Boris Daszuta (Friedrich-Schiller-Universität Jena) GR-Athena++ is a general-relativistic, high-order, vertex-centered solver that extends the oct-tree, adaptive mesh refinement capabilities of the astrophysical (radiation) magnetohydrodynamics code Athena++. To simulate dynamical spacetimes GR-Athena++ uses the Z4c evolution scheme of numerical relativity coupled to the moving puncture gauge. Stable and accurate binary black hole merger evolutions are demonstrated in convergence testing, cross-code validation, and verification against state-of-the-art effective-one-body waveforms. GR-Athena++ leverages the task-based parallelism paradigm of Athena++ to achieve excellent scalability. 
Strong scaling efficiencies above 95% for up to $1.2\times 10^4$ CPUs and excellent weak scaling up to $10^5$ CPUs in a production binary black hole setup with adaptive mesh refinement are measured. GR-Athena++ thus allows for the robust simulation of compact binary coalescences and offers a viable path towards numerical relativity at exascale.

## Maximum mass of compact stars from gravitational wave events with finite-temperature equations of state

Armen Sedrakian (Frankfurt Institute for Advanced Studies) We conjecture and verify a set of universal relations between global parameters of hot and fast-rotating compact stars, including a relation connecting the masses of the mass-shedding (Kepler) and static configurations. We apply these relations to the GW170817 event by adopting the scenario in which a hypermassive compact star remnant formed in a merger evolves into a supramassive compact star that collapses into a black hole once the stability line for such stars is crossed. We deduce an upper limit on the maximum mass of static, cold neutron stars in the range $2.15^{+0.10}_{-0.07} \le M_{\rm TOV}/M_\odot \le 2.24^{+0.12}_{-0.10}$ for the typical range of entropy per baryon $2 \le S/A \le 3$ and electron fraction $Y_e = 0.1$ characterizing the hot hypermassive star. Our result implies that accounting for the finite temperature of the merger remnant relaxes previously derived constraints on the value of the maximum mass of a cold, static compact star.

## Black hole scalarization

Stoytcho Yazadjiev (Eberhard Karls University of Tübingen & Sofia University) In extended scalar-tensor theories, such as scalar-Gauss-Bonnet gravity, the black holes can undergo spontaneous scalarization – a strong gravity phase transition triggered by a tachyonic instability due to the non-minimal coupling between the scalar field(s) and the spacetime curvature. 
This very interesting phenomenon is the only known dynamical mechanism for endowing black holes (and other compact objects) with scalar hair without altering the predictions in the weak-field limit. In this talk, I will present the basic theoretical ideas behind spontaneous scalarization and will review some of the recent achievements in the field.

## Black holes with scalar hair and astrophysical implications

Daniela Doneva (Eberhard Karls Universität Tübingen) Even though the Kerr black hole fits very well in the interpretation of various astrophysical observations, there are a number of as yet untested modifications of general relativity that can endow it with hair. The rapid advance of observational astrophysics gives us the unique opportunity to test the existence of beyond-Kerr black holes and eventually to constrain the strong-field regime of gravity. A particular, widely studied case is a Kerr-like black hole endowed with scalar hair, which can form, for example, in the presence of a time-varying complex scalar field or in the more general context of tensor-multi-scalar theories. We will discuss these solutions and their astrophysical manifestations. We will put a special emphasis on the accretion discs around such objects, which can have fundamentally different properties compared to pure GR.
# Logging¶ For a description of logging from the user's point of view, see Logging. Logging in Brian is based on the logging module in Python’s standard library. Every Brian module that needs logging should start with the following line, using the get_logger() function to get an instance of BrianLogger: logger = get_logger(__name__) In the code, logging can then be done via: logger.diagnostic('A diagnostic message') logger.debug('A debug message') logger.info('An info message') logger.warn('A warning message') logger.error('An error message') If a module logs similar messages in different places or if it might be useful to be able to suppress a subset of messages in a module, add an additional specifier to the logging command, specifying the class or function name, or a method name including the class name (do not include the module name, it will be automatically added as a prefix): logger.debug('A debug message', 'CodeString') logger.debug('A debug message', 'NeuronGroup.update') logger.debug('A debug message', 'reinit') If you want to log a message only once, e.g. in a function that is called repeatedly, set the optional once keyword to True: logger.debug('Will only be shown once', once=True) The output of debugging looks like this in the log file: 2012-10-02 14:41:41,484 DEBUG brian2.equations.equations.CodeString: A debug message and like this on the console (if the log level is set to “debug”): DEBUG A debug message [brian2.equations.equations.CodeString] ## Log level recommendations¶ diagnostic Low-level messages that are not of any interest to the normal user but useful for debugging Brian itself. A typical example is the source code generated by the code generation module. debug Messages that are possibly helpful for debugging the user’s code. For example, this shows which objects were included in the network, which clocks the network uses and when simulations start and stop. 
**info**
Messages which are not strictly necessary, but are potentially helpful for the user. In particular, this shows messages about the chosen state updater and other information that might help the user achieve better performance and/or accuracy in the simulations (e.g. using `(event-driven)` in synaptic equations, or avoiding incompatible `dt` values between a `TimedArray` and the `NeuronGroup` using it).

**warn**
Messages that alert the user to a potential mistake in the code, e.g. two possible resolutions for an identifier in an equation. This level can also be used to make users aware that they are using an experimental feature, an unsupported compiler, or similar; in such cases the `once=True` option should normally be used so the warning is raised only once. As a rule of thumb, "common" scripts like the examples provided in the examples folder should normally not lead to any warnings.

**error**
This log level is currently not used in Brian; an exception should be raised instead. It might be useful in "meta-code" that runs scripts and catches any errors that occur.

The default log level shown to the user is info. As a general rule, all messages that the user sees in the default configuration (i.e., at the info and warn levels) should be avoidable by simple changes in the user code, e.g. renaming variables, explicitly specifying a state updater instead of relying on the automatic system, adding `(clock-driven)`/`(event-driven)` to synaptic equations, etc.

## Testing log messages

It is possible to test whether code emits an expected log message using the `catch_logs` context manager. This is normally not necessary for debug and info messages, but should be part of the unit tests for warning messages (`catch_logs` by default only catches warning and error messages):

```python
with catch_logs() as logs:
    # code that is expected to trigger a warning
    # ...
    assert len(logs) == 1
    # logs contains tuples of (log level, name, message)
    assert logs[0][0] == 'WARNING' and logs[0][1].endswith('warning_type')
```
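Outside of Brian, the effect of `once=True` can be approximated with the standard library's `logging` filters. The sketch below is illustrative only — `OnceFilter`, `ListHandler`, and the logger name are made up here, and Brian's actual implementation may differ:

```python
import logging

class OnceFilter(logging.Filter):
    """Drop any record whose (level, message) pair has been emitted before."""
    def __init__(self):
        super().__init__()
        self._seen = set()

    def filter(self, record):
        key = (record.levelno, record.getMessage())
        if key in self._seen:
            return False  # suppress the repeat
        self._seen.add(key)
        return True

records = []

class ListHandler(logging.Handler):
    """Collect emitted messages in a list so we can inspect them."""
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger("once_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(ListHandler())
logger.addFilter(OnceFilter())

for _ in range(3):
    logger.debug("Will only be shown once")

print(records)  # the message appears a single time
```

Attaching the filter to the logger (rather than a handler) means the suppression applies no matter how many handlers are registered.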
These notes concern separation of variables, one topic in a course covering Laplace transforms, numerical solution of ordinary differential equations, Fourier series, and the separation-of-variables method applied to the linear partial differential equations of mathematical physics (the heat, wave, and Laplace equations). The material is taken from actual written tests delivered at the Engineering School of the University of Genoa; the problem-type names are meant only as a guide to the form of a question — what it looks like at a glance. You may find it helpful to consult other texts or information on the internet for additional background.

When you first learn calculus, you learn that the derivative of a function $f$ can be written as
$$f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}.$$
The Fundamental Theorem of Calculus later links together the notions of an integral and a derivative, and it is this link that separation of variables exploits.

Separation of variables is a technique for solving first-order ordinary differential equations. Some differential equations can be solved by this method (also called "variables separable"): for a differential equation involving $x$ and $y$, you separate the $x$ terms to one side and the $y$ terms to the other, then integrate. Solving a differential equation means finding a single function, or a collection of functions, that satisfies the equation; always indicate the domain over which the solution is valid. Some differential equations that are not separable in $x$ and $y$ can be made separable by a change of variables — this is the case for homogeneous differential equations. For example, $(x + 2y)y' = 1$ with $y(0) = 2$ is not separable as written, but the substitution $u = 2y + x$ reduces it to a separable equation.

Typical exercises:

- Solve the initial value problem $dy/dx = 1/y^2$, $y(0) = 4$, by separation of variables.
- (a) Find the general solution of a differential equation; (b) find the particular solution that also satisfies the condition $y(0) = 2$.
- [Calculator] During a certain epidemic, the number of people infected at any time increases at a rate proportional to the number of people infected at that time.
- A bacterial culture grows so that 2 hours later it weighs 4 grams.

(Note that AP questions often test several diverse ideas or concepts in the same question.)
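The first exercise above, $dy/dx = 1/y^2$ with $y(0) = 4$, makes a compact worked example of the whole procedure:

```latex
\frac{dy}{dx} = \frac{1}{y^2}
\;\Longrightarrow\; y^2\,dy = dx
\;\Longrightarrow\; \frac{y^3}{3} = x + C.

\text{Applying } y(0) = 4:\quad \frac{64}{3} = C,
\quad\text{so}\quad y^3 = 3x + 64,
\qquad y = \sqrt[3]{3x + 64}.
```

The solution is valid where $3x + 64 > 0$, i.e. for $x > -64/3$; at $x = -64/3$ we would have $y = 0$, and both the original equation and the separation step break down there.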
Many differential equations are not solvable except by numerical methods; finding an explicit solution is much more difficult than verifying one, and often we must resort to numerical schemes such as Euler's method. When an analytic solution is available for a first-order equation, separation of variables is usually the first thing to try. Suppose the equation can be written in the form $P(y)\,dy/dx = Q(x)$. Then
$$P(y)\,\frac{dy}{dx} = Q(x) \quad\Longleftrightarrow\quad \int P(y)\,dy = \int Q(x)\,dx,$$
and the two integrations contribute a single arbitrary constant: there is exactly one value of $C$ for each solution curve in the family. A separable equation $dy/dx = f(x)g(y)$ has a unique local solution for any initial value problem provided $f$ and $g$ are continuous near the initial data (with $g$ continuously differentiable, for uniqueness).

Other analytic techniques for first-order equations include integrable combinations and, for quasilinear first-order partial differential equations, the method of characteristics. Applications of separable equations include RC circuits (containing a resistor and a capacitor) and quantum mechanics: to find the stationary states $\psi(x)$ of a quantum system with a given time-independent potential $V(x)$, one can solve Schrödinger's equation by this method. Lecture notes on PDEs (e.g. Rand's) typically treat the Laplacian $\nabla^2$ in three coordinate systems and then solve model boundary-value problems by separation of variables.
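As a sketch of the numerical side, here is Euler's method checked against an exactly separable equation. The equation $dy/dx = y^2 \sin x$ with $y(0) = 1$ separates as $dy/y^2 = \sin x\,dx$, giving $-1/y = -\cos x + C$ with $C = 0$, i.e. $y = 1/\cos x$ (valid for $|x| < \pi/2$). The function names below are illustrative:

```python
import math

def euler(f, x0, y0, x_end, n):
    """Fixed-step Euler's method for dy/dx = f(x, y)."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# dy/dx = y^2 sin(x); separating gives -1/y = -cos(x) + C,
# and y(0) = 1 forces C = 0, so the exact solution is y = 1/cos(x).
f = lambda x, y: y * y * math.sin(x)
exact = lambda x: 1.0 / math.cos(x)

approx = euler(f, 0.0, 1.0, 1.0, 100_000)
print(approx, exact(1.0))  # Euler's estimate approaches sec(1) = 1/cos(1) as n grows
```

Halving the step size roughly halves the error, the expected first-order behavior of Euler's method.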
Essentially, the technique of separation of variables is just what its name implies: it consists of all the legitimate algebraic operations, applied to a differential equation (either ordinary or partial), that allow one to separate the terms of the equation according to the variable they contain. Verifying a proposed solution is the easy direction — for example, $y = e^{2x}$ satisfies $dy/dx = 2y$, as differentiation immediately shows.

For partial differential equations, the method is based on the assumption that the solution is separable: that the final solution can be represented as a product of several functions, each of which depends on only a single independent variable. For the heat equation in one space variable, for instance, we assume $u(x, t) = v(x)g(t)$, i.e. functions of space $x$ and time $t$ separately. Since we deal with linear PDEs, the superposition principle allows us to form new solutions from linear combinations of these product solutions, in many cases solving the entire problem; this is where Fourier series enter. One should be able to model a vibrating string using the wave equation plus boundary and initial conditions, and to solve it using Fourier's method of separation of variables.

Typical exercises:

- Solve $dP/dt = P - P^2$ by separation of variables.
- A tank mixing problem: apply separation of variables to follow a solution concentration over time.
- The temperature distribution in a semi-infinite rod follows the diffusion equation $k\,\partial^2 u/\partial x^2 = \partial u/\partial t$. The temperature at $x = 0$ is varied (relative to a reference temperature $T_0$) as $u(0, t) = \Delta T \sin(\omega t)$. Using separation of variables with an imaginary separation constant, find the solution for $x \ge 0$.

A different notion with a similar name is d-separation, a criterion for deciding, from a causal graph, whether a set $A$ of variables is independent of another set $B$ given a third set $C$ (written $A \perp B \mid C$). $A$ and $B$ are d-separated if, for every path $P$ from $A$ to $B$, at least one of the following holds: $P$ includes a "chain" with an observed middle node, or $P$ includes a "fork" with an observed middle node.
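The product ansatz can be checked concretely. Assuming $u(x,t) = g(t)v(x)$ with separation constant $-\lambda^2$, the heat equation $u_t = k\,u_{xx}$ admits solutions $u = e^{-k\lambda^2 t}\sin(\lambda x)$; the sketch below (with arbitrary illustrative values of $k$ and $\lambda$) verifies the PDE numerically with central differences:

```python
import math

# One separated solution of the heat equation u_t = k * u_xx:
# with u(x, t) = g(t) * v(x) and separation constant -lam**2,
# g(t) = exp(-k * lam**2 * t) and v(x) = sin(lam * x).
k, lam = 0.5, 3.0  # illustrative values

def u(x, t):
    return math.exp(-k * lam**2 * t) * math.sin(lam * x)

# Verify u_t = k * u_xx at a sample point using central differences.
x0, t0 = 0.7, 0.2
ht, hx = 1e-5, 1e-4
u_t = (u(x0, t0 + ht) - u(x0, t0 - ht)) / (2 * ht)
u_xx = (u(x0 + hx, t0) - 2 * u(x0, t0) + u(x0 - hx, t0)) / hx**2
print(u_t, k * u_xx)  # the two sides agree to several decimal places
```

Any linear combination of such products (over different $\lambda$) is again a solution, which is exactly the superposition step described above.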
In a single-variable calculus course the topic appears in a standard sequence: differentials and antiderivatives; differential equations and separation of variables; definite integrals; the first and second fundamental theorems of calculus; and applications to logarithms, geometry, volumes, work, and average value. In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation. The technique dates to the 1690s, in the work of Leibniz and Johann Bernoulli.

The method singles out a family of differential equations — those in which the variables can be separated. The steps for solving such first-order equations $dy/dx = g(x)f(y)$ are:

1. Separate the variables, collecting the $y$ terms (with $dy$) on one side and the $x$ terms (with $dx$) on the other.
2. Integrate both sides. Only one arbitrary constant is needed: once you add the $C$, any constant produced on either side of the equation folds into that same $C$.
3. Solve for $y$ where possible, and apply any initial condition.
4. Indicate the domain over which the solution is valid.

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives; the same separation idea, combined with Fourier series, handles the classic linear PDEs.

Typical exercises:

- A tank containing $100\,L$ of a brine solution initially has $4\,kg$ of salt dissolved in the solution; track the salt content over time.
- A bacterial culture grows logistically with a maximum weight of 20 grams.
- Use the method of separation of variables to find a general solution to the differential equation $y' = 2xy + 3y - 4x - 6$.
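The equation $y' = 2xy + 3y - 4x - 6$ listed above looks non-separable until the right-hand side is factored; this is a standard trick:

```latex
y' = 2xy + 3y - 4x - 6 = y(2x + 3) - 2(2x + 3) = (2x + 3)(y - 2),

\frac{dy}{y - 2} = (2x + 3)\,dx
\;\Longrightarrow\; \ln|y - 2| = x^2 + 3x + C
\;\Longrightarrow\; y = 2 + A e^{x^2 + 3x}.
```

Here $A = \pm e^C$; allowing $A = 0$ recovers the constant solution $y = 2$ that was lost when dividing by $y - 2$.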
A differential equation is basically any equation that has a derivative in it, and the method of separation of variables applies whenever it can be written as $dy/dx = f(x)g(y)$. The method can often be extended to more than two variables, but the work in those problems can be quite involved. On the AP exam, students are expected to employ the method of separation of variables and then use the initial condition to pin down the particular solution. Two cautionary examples about absolute values and domains come from 2005 AB 6: after separating the variables and applying the initial condition, one must decide which branch of the solution the initial condition selects — this is why the "solve for $y$ and state the domain" step is important.

Logistic models are the standard applied setting: a rumor spreads slowly at first, when tellers are few, then faster, then slowly again as nearly everyone has heard it. Exercise: for a bacterial culture with a known maximum weight, (a) write a logistic differential equation that models the weight of the culture, and (b) solve it by separation of variables.
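For the logistic-type equation $dP/dt = P - P^2$ mentioned in these notes, separation with partial fractions gives a closed form that is easy to sanity-check numerically (the initial value below is chosen arbitrarily for illustration):

```python
import math

# Logistic equation dP/dt = P - P^2 = P(1 - P).  Separating:
#   dP / (P(1 - P)) = dt  ->  ln|P/(1 - P)| = t + C
#   ->  P(t) = 1 / (1 + A * exp(-t)),
# where A = (1 - P0)/P0 encodes the initial value P(0) = P0.
P0 = 0.1
A = (1 - P0) / P0

def P(t):
    return 1.0 / (1.0 + A * math.exp(-t))

# Check the solution against the ODE using a central difference.
t0, h = 1.5, 1e-5
dPdt = (P(t0 + h) - P(t0 - h)) / (2 * h)
print(dPdt, P(t0) - P(t0) ** 2)  # both sides of the ODE agree
```

As $t \to \infty$, $P(t) \to 1$, the carrying capacity of this normalized model.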
Last lesson, we learned to analyze the solutions of differential equations visually, using slope fields. Today we turn to a technique for actually solving a particular differential equation you are faced with.

A short justification of separation of variables: a first-order differential equation is separable if it can be written in the form $m(x) + n(y)\,dy/dx = 0$. The standard approach to solving this equation for $y(x)$ is to treat $dy/dx$ as a fraction and move all quantities involving $x$ to one side and all quantities involving $y$ to the other, then integrate. In general, we are satisfied once the derivative has been eliminated by integration — even if the result only defines $y$ implicitly.

Example: consider the differential equation $dy/dx = y^2 \sin x$.

Learning goals (AP Calculus BC): solve a differential equation by separation of variables; analyze differential equations to obtain general and specific solutions. Exercise: solve the initial value problem $dy/dx = x^2/y$, given $y = -5$ when $x = 3$, and indicate the domain over which the solution is valid.
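The initial value problem $dy/dx = x^2/y$, $y(3) = -5$, can be finished off explicitly; the sketch below applies the separated solution and checks it with a central difference:

```python
import math

# dy/dx = x^2 / y with y(3) = -5.  Separating: y dy = x^2 dx,
# so y^2/2 = x^3/3 + C.  The initial condition gives C = 25/2 - 9 = 7/2,
# hence y^2 = (2/3) x^3 + 7, and the initial condition selects the
# negative branch (valid where (2/3) x^3 + 7 > 0):
def y(x):
    return -math.sqrt((2.0 * x**3) / 3.0 + 7.0)

# Verify the initial condition and the ODE itself (central difference).
x0, h = 3.0, 1e-6
dydx = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(y(3.0), dydx, x0**2 / y(x0))
```

Choosing the negative square root is exactly the branch decision emphasized in the 2005 AB 6 discussion above.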
Separation of Variables - a method of solving differential equations ; 3. A rich history and cast of characters participating in the development of calculus both. (a) Use separation of variables to show that T(t) = (T 0 T R)e kt+T R is the particular solution to this initial value problem. y = 2 sin x, ÅÅÅÅÅÅÅÅÅÅd2 y dx2 =-y 4. Free trial available at KutaSoftware. Fourier series and integrals. AP Calculus AB [Flip the Classroom] - Week of Friday March 29 2019 to Friday April 5 2019 CH 6. Antiderivatives Calculating Limits with Limit Laws Chain Rule Concavity Continuity Derivative as a Function Derivatives Derivatives of Logarithmic Functions Derivatives of Polynomial and Exponential Functions Derivatives of Trig Functions Exponential Functions Exponential Growth and Decay Fundamental Theorem of Calculus Horizontal Asymptotes How Derivatives Affect the Shape of a Graph Implicit Differentiation Indefinite Integrals Indeterminate Forms Inverse Functions L'Hopital's Rule Limit. 1 Introduction A differential equation is a relationship between some (unknown) function and one of its derivatives. We explain calculus and give you hundreds of practice problems, all with complete, worked out, step-by-step solutions. Take the operation in that definition and reverse it. In this Section we consider differential equations which can be written in the form dy dx = f(x)g(y) Note that the right-hand side is a product of a function of x, and a function of y. History of calculus or infinitesimal calculus, is a history of a mathematical discipline focused on limits, functions, derivatives, integrals, and infinite series. (a) Find the general solution of this differential equation. Pre-calculus topics, yes, but they come back again and again. Here are two examples about absolute values and domains: 2005 AB 6: After separating the variables and applying the initial condition we arrive at. The exposition is very clear and inviting. 
Look for situations in which you may avoid solving the DE. Introduction to Exponential Growth and Decay. In general, the method of separation of variables applies to differential equa­ tions that can be written as: dy = f(x)g(y). The continuous function f is defined on the closed interval −6 £ x £ 5. PREPARATION FOR CALCULUS. Separation of variables. Here is an. By separating these variables, it allows you to find the original equation from which you took the derivative. Chapter 3 The Fundamental Theorem of Calculus In this chapter we will formulate one of the most important results of calculus, the Funda-mental Theorem. Free trial available at KutaSoftware. Take the operation in that definition and reverse it. Share Lesson 3 - Separation Of Variables (Differential Equations) on Twitter Pin Lesson 3 - Separation Of Variables (Differential Equations) on Pinterest Email Lesson 3 - Separation Of Variables (Differential Equations) to a friend. Free trial available at KutaSoftware. However, you may use any textbook as. Elementary Differential Equations. 1 through 2. Larson, Hostetler, and Edwards Preparation for Calculus P. Educreations is a community where anyone can teach what they know and learn what they don't. 2 u substitution indefinite. 2 Computing Limits 1. But don't worry, it can be solved (using a special method called Separation of Variables) and results in: V = Pe rt. The unit is split into two sections: section one will cover solving ordinary & partial differential equations using methods such as Laplace transforms, Fourier series, and the method of separation of variables; section two will cover differential and integral vector calculus methods. 1 Math 2080: Di erential Equations Worksheet 2. Take the operation in that definition and reverse it. Its left and right hand ends are held fixed at height zero and we are told its initial configuration and speed. 
This calculus lesson on differential equations and separation of variables includes guided notes, a task card activity, plus homework and optional QR codes. This lesson is designed for AP Calculus AB, AP Calculus BC, and College Calculus 1 or 2 classes. Use separation of variables to find an expression for.
# Fractions as Decimals

## Dividing the denominator into the numerator

Credit: Håkan Dahlström Source: https://www.flickr.com/photos/dahlstroms/7189993560/

Michelle and Terry are shooting hoops. Michelle made 7 out of the last 10 shots. Terry made 6 out of the last 8 shots. Compare their results using decimals. Who had better results?

In this concept, you will learn to convert fractions to decimals.

### Guidance

Decimals and fractions both represent quantities that are part of a whole, and a fraction can be converted to a decimal number. There are two ways to convert a fraction to a decimal.

The first way is to think in terms of place value. If a fraction has ten as its denominator, you can think of that fraction as tenths. Here is a fraction of tenths and its decimal equivalent: $\frac{6}{10} = 0.6$. There is one decimal place in tenths, so this decimal is exact. This method is very useful when the denominator is a base-ten value like 10, 100, 1,000, and so on. Here is a fraction with a base-ten denominator of 1,000: $\frac{125}{1000}$. A thousandth has three decimal places, and there are three digits in the numerator, so this fraction converts easily to a decimal: $\frac{125}{1000} = 0.125$.

The second way is to use division. The fraction bar is also a symbol for division: the numerator is the dividend and the denominator is the divisor. Here is another fraction: $\frac{3}{5}$. To change $\frac{3}{5}$ to a decimal number, divide 3 by 5. Remember that you are looking for a decimal number, so use zero placeholders to help find the decimal value: $3.0 \div 5 = 0.6$. The decimal value of $\frac{3}{5}$ is $0.6$.

### Guided Practice

Write the following fraction as a decimal.
$\frac{1}{4}$

One way is to use base-ten values. First, find an equivalent fraction of $\frac{1}{4}$ with a denominator of 100: $\frac{1}{4} = \frac{25}{100}$. Then, convert the fraction to a decimal. $\frac{25}{100}$ is also 25 hundredths: $\frac{25}{100} = 0.25$. The decimal value of $\frac{1}{4}$ is 0.25.

The other way is to use division. Divide 1 by 4, using zero placeholders as needed: $1.00 \div 4 = 0.25$. The decimal value of $\frac{1}{4}$ is 0.25.

### Examples

Convert each fraction to a decimal.

#### Example 1

$\frac{8}{10}$

This fraction has a base-ten value in the denominator. Place the 8 in the tenths place: $\frac{8}{10} = 0.8$. The decimal value of $\frac{8}{10}$ is $0.8$.

#### Example 2

$\frac{5}{100}$

This fraction has a base-ten value in the denominator. Place the 5 in the hundredths place: $\frac{5}{100} = 0.05$. The decimal value of $\frac{5}{100}$ is $0.05$.

#### Example 3

$\frac{4}{5}$

Divide the numerator by the denominator, using zero placeholders as needed: $4.0 \div 5 = 0.8$. The decimal value of $\frac{4}{5}$ is $0.8$.

Credit: Eugene Kim Source: https://www.flickr.com/photos/eekim/8697169257/

Remember Michelle and Terry playing basketball? Michelle made 7 out of the last 10 shots and Terry made 6 out of the last 8 shots. 7 out of 10 is $\frac{7}{10}$, and 6 out of 8 is $\frac{6}{8}$. Convert the fractions to decimals and compare the results.

First, convert $\frac{7}{10}$ to a decimal. The denominator is a base-ten number: $\frac{7}{10} = 0.7$.

Then, convert $\frac{6}{8}$ to a decimal by dividing 6 by 8: $6.00 \div 8 = 0.75$.

Next, compare the decimals. The better player has the larger decimal number: $0.7 < 0.75$. Terry made a greater fraction of his shots than Michelle.
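The division route described above is easy to mechanize. Here is a small sketch (the function name is mine, not from the lesson) that converts a fraction to a decimal exactly the way the worked examples do, by dividing the numerator by the denominator:

```python
def fraction_to_decimal(numerator, denominator, places=4):
    """Convert a fraction to a decimal by dividing, as the lesson does by hand."""
    return round(numerator / denominator, places)

# The examples worked above:
print(fraction_to_decimal(1, 4))   # 0.25
print(fraction_to_decimal(3, 5))   # 0.6
print(fraction_to_decimal(6, 8))   # 0.75
```

The place-value method corresponds to the special case where the denominator is already 10, 100, or 1,000, in which case the division terminates immediately.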
### Explore More

Convert the following fractions to decimals.

1. $\frac{3}{10}$
2. $\frac{23}{100}$
3. $\frac{9}{100}$
4. $\frac{8}{10}$
5. $\frac{182}{1000}$
6. $\frac{25}{100}$
7. $\frac{6}{10}$
8. $\frac{125}{1000}$
9. $\frac{1}{10}$
10. $\frac{2}{100}$
11. $\frac{1}{2}$
12. $\frac{1}{4}$
13. $\frac{3}{4}$
14. $\frac{3}{6}$
15. $\frac{3}{5}$

### Vocabulary Language: English

Decimal: In common use, a decimal refers to part of a whole number. The numbers to the left of a decimal point represent whole numbers, and each number to the right of a decimal point represents a fractional part of a power of one-tenth. For instance: the decimal value 1.24 indicates 1 whole unit, 2 tenths, and 4 hundredths (commonly described as 24 hundredths).

Equivalent: Equivalent means equal in value or meaning.

Fraction: A fraction is a part of a whole. A fraction is written mathematically as one value on top of another, separated by a fraction bar. It is also called a rational number.

Irrational Number: An irrational number is a number that cannot be expressed exactly as the quotient of two integers.

Place Value: The value of a given digit in a multi-digit number, as indicated by the place or position of the digit.
# Deep Learning and Data Science courses on sale for $10! April 24, 2017 Today, Udemy has decided to do yet another AMAZING$10 promo. As usual, I’m providing $10 coupons for all my courses in the links below. Please use these links and share them with your friends! The$10 promo doesn’t come around often, so make sure you pick up everything you are interested in, or could become interested in later this year. The promo goes until April 29. Don’t wait! At the end of this post, I’m going to provide you with some additional links to get machine learning prerequisites (calculus, linear algebra, Python, etc…) for $10 too! If you don’t know what order to take the courses in, please check here: https://deeplearningcourses.com/course_order Here are the links for my courses: Deep Learning Prerequisites: Linear Regression in Python https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=APR456 Deep Learning Prerequisites: Logistic Regression in Python https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=APR456 Deep Learning in Python https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=APR456 Practical Deep Learning in Theano and TensorFlow https://www.udemy.com/data-science-deep-learning-in-theano-tensorflow/?couponCode=APR456 Deep Learning: Convolutional Neural Networks in Python https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=APR456 Unsupervised Deep Learning in Python https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=APR456 Deep Learning: Recurrent Neural Networks in Python https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=APR456 Advanced Natural Language Processing: Deep Learning in Python https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=APR456 Advanced AI: Deep Reinforcement Learning in Python 
https://www.udemy.com/deep-reinforcement-learning-in-python/?couponCode=APR456 Easy Natural Language Processing in Python https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=APR456 Cluster Analysis and Unsupervised Machine Learning in Python https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=APR456 Unsupervised Machine Learning: Hidden Markov Models in Python https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=APR456 Data Science: Supervised Machine Learning in Python https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=APR456 Bayesian Machine Learning in Python: A/B Testing https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=APR456 Ensemble Machine Learning in Python: Random Forest and AdaBoost https://www.udemy.com/machine-learning-in-python-random-forest-adaboost/?couponCode=APR456 Artificial Intelligence: Reinforcement Learning in Python https://www.udemy.com/artificial-intelligence-reinforcement-learning-in-python/?couponCode=APR456 SQL for Newbs and Marketers https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=APR456 And last but not least,$10 coupons for some helpful prerequisite courses. You NEED to know this stuff before you study machine learning: General (site-wide): http://bit.ly/2oCY14Z Python http://bit.ly/2pbXxXz Calc 1 http://bit.ly/2okPUib Calc 2 http://bit.ly/2oXnhpX Calc 3 http://bit.ly/2pVU0gQ Linalg 1 http://bit.ly/2oBBir1 Linalg 2 http://bit.ly/2q5SGEE Probability (option 1) http://bit.ly/2prFQ7o Probability (option 2) http://bit.ly/2p8kcC0 Probability (option 3) http://bit.ly/2oXa2pb Probability (option 4) http://bit.ly/2oXbZSK Remember, these links will self-destruct on April 29 (5 days). Act NOW! P.S. As you know, I’m ALWAYS updating my courses based on feedback and adding new material. 
Sometimes, even stuff that has only recently been invented! Here is a list of recent updates:

– Deep Learning pt 1: Backpropagation troubleshooting (added to appendix). Use this if you have questions like “why do we sum over ‘k prime’?” and “what is the chain rule?”
– Recurrent Neural Networks (Deep Learning pt 5) and Deep NLP (Deep Learning pt 6): All language-modeling code can now train on the Brown corpus, which you can import directly from NLTK! No need to download and process Wikipedia data dumps anymore! This will make running the code much easier!
– Deep Learning pt 2: Added code samples for grid search and random search, as well as a simple intuitive example of how dropout “emulates” an ensemble (which, by the way, you can gain even FURTHER insight into by taking my Ensemble Machine Learning course!)

And coming very soon (next couple of days):

– Deep Learning pt 1: Using SKLearn, so that using a neural network is just 3 lines of code!
– Recurrent Neural Networks (Deep Learning pt 5) and Deep NLP (Deep Learning pt 6): More discussion about why Tensorflow isn’t appropriate here, but at the same time, adding more Tensorflow examples!
– Linear Regression and Logistic Regression: how to interpret the weights

# Udemy $10 coupons April 2017

April 6, 2017

Today, Udemy has decided to do yet another AMAZING $10 promo. As usual, I’m providing $10 coupons for all my courses in the links below. Please use these links and share them with your friends! The $10 promo doesn’t come around often, so make sure you pick up everything you are interested in, or could become interested in later this year. The promo goes until April 12. Don’t wait! At the end of this post, I’m going to provide you with some additional links to get machine learning prerequisites (calculus, linear algebra, Python, etc…) for $10 too!
If you don’t know what order to take the courses in, please check here: https://deeplearningcourses.com/course_order Here are the links for my courses: Deep Learning Prerequisites: Linear Regression in Python https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=APR123 Deep Learning Prerequisites: Logistic Regression in Python https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=APR123 Deep Learning in Python https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=APR123 Practical Deep Learning in Theano and TensorFlow https://www.udemy.com/data-science-deep-learning-in-theano-tensorflow/?couponCode=APR123 Deep Learning: Convolutional Neural Networks in Python https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=APR123 Unsupervised Deep Learning in Python https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=APR123 Deep Learning: Recurrent Neural Networks in Python https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=APR123 Advanced Natural Language Processing: Deep Learning in Python https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=APR123 Advanced AI: Deep Reinforcement Learning in Python https://www.udemy.com/deep-reinforcement-learning-in-python/?couponCode=APR123 Easy Natural Language Processing in Python https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=APR123 Cluster Analysis and Unsupervised Machine Learning in Python https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=APR123 Unsupervised Machine Learning: Hidden Markov Models in Python https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=APR123 Data Science: Supervised Machine Learning in Python https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=APR123 Bayesian Machine 
Learning in Python: A/B Testing https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=APR123 Ensemble Machine Learning in Python: Random Forest and AdaBoost https://www.udemy.com/machine-learning-in-python-random-forest-adaboost/?couponCode=APR123 Artificial Intelligence: Reinforcement Learning in Python https://www.udemy.com/artificial-intelligence-reinforcement-learning-in-python/?couponCode=APR123 SQL for Newbs and Marketers https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=APR123 And last but not least,$10 coupons for some helpful prerequisite courses. You NEED to know this stuff before you study machine learning: General (Site-wide coupon): http://bit.ly/2p3jHI8 Python http://bit.ly/2nMqqGg Calc 1 http://bit.ly/2oLayo8 Calc 2 http://bit.ly/2ocifGm Calc 3 http://bit.ly/2ocaNeo Linalg 1 http://bit.ly/2ocf4hO Linalg 2 http://bit.ly/2och8q2 Probability (option 1) http://bit.ly/2nZy4hy Probability (option 2) http://bit.ly/2nZN4vI Probability (option 3) http://bit.ly/2oLkY7A Probability (option 4) http://bit.ly/2o57IMG Remember, this post will self-destruct on April 12 (7 days). Act NOW! # New Years Udemy Coupons! All Udemy Courses only $10 January 1, 2017 Act fast! These$10 Udemy Coupons expire in 10 days. 
Ensemble Machine Learning: Random Forest and AdaBoost Deep Learning Prerequisites: Linear Regression in Python https://www.udemy.com/data-science-linear-regression-in-python/?couponCode=BOXINGDAY Deep Learning Prerequisites: Logistic Regression in Python https://www.udemy.com/data-science-logistic-regression-in-python/?couponCode=BOXINGDAY Deep Learning in Python https://www.udemy.com/data-science-deep-learning-in-python/?couponCode=BOXINGDAY Practical Deep Learning in Theano and TensorFlow https://www.udemy.com/data-science-deep-learning-in-theano-tensorflow/?couponCode=BOXINGDAY Deep Learning: Convolutional Neural Networks in Python https://www.udemy.com/deep-learning-convolutional-neural-networks-theano-tensorflow/?couponCode=BOXINGDAY Unsupervised Deep Learning in Python https://www.udemy.com/unsupervised-deep-learning-in-python/?couponCode=BOXINGDAY Deep Learning: Recurrent Neural Networks in Python https://www.udemy.com/deep-learning-recurrent-neural-networks-in-python/?couponCode=BOXINGDAY Advanced Natural Language Processing: Deep Learning in Python https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=BOXINGDAY Easy Natural Language Processing in Python https://www.udemy.com/data-science-natural-language-processing-in-python/?couponCode=BOXINGDAY Cluster Analysis and Unsupervised Machine Learning in Python https://www.udemy.com/cluster-analysis-unsupervised-machine-learning-python/?couponCode=BOXINGDAY Unsupervised Machine Learning: Hidden Markov Models in Python https://www.udemy.com/unsupervised-machine-learning-hidden-markov-models-in-python/?couponCode=BOXINGDAY Data Science: Supervised Machine Learning in Python https://www.udemy.com/data-science-supervised-machine-learning-in-python/?couponCode=BOXINGDAY Bayesian Machine Learning in Python: A/B Testing https://www.udemy.com/bayesian-machine-learning-in-python-ab-testing/?couponCode=BOXINGDAY SQL for Newbs and Marketers 
https://www.udemy.com/sql-for-marketers-data-analytics-data-science-big-data/?couponCode=BOXINGDAY How to get ANY course on Udemy for $10 (please use my coupons above for my courses): Click here for a link to all courses on the site: http://bit.ly/2iVkMTx Click here for a great calculus prerequisite course: http://bit.ly/2iwKpt2 Click here for a great Python prerequisite course: http://bit.ly/2iwQENC Click here for a great linear algebra 1 prerequisite course: http://bit.ly/2hHoLTn Click here for a great linear algebra 2 prerequisite course: http://bit.ly/2isjr3z

# New course – Natural Language Processing: Deep Learning in Python part 6

August 9, 2016

[Scroll to the bottom for the early bird discount if you already know what this course is about] In this course we are going to look at advanced NLP using deep learning. Previously, you learned about some of the basics, like how many NLP problems are just regular machine learning and data science problems in disguise, and simple, practical methods like bag-of-words and term-document matrices. These allowed us to do some pretty cool things, like detect spam emails, write poetry, spin articles, and group together similar words. In this course I’m going to show you how to do even more awesome things. We’ll learn not just 1, but 4 new architectures in this course. First up is word2vec. In this course, I’m going to show you exactly how word2vec works, from theory to implementation, and you’ll see that it’s merely the application of skills you already know. Word2vec is interesting because it magically maps words to a vector space where you can find analogies, like: • king – man = queen – woman • France – Paris = England – London • December – November = July – June We are also going to look at the GLoVe method, which also finds word vectors, but uses a technique called matrix factorization, which is a popular algorithm for recommender systems.
Amazingly, the word vectors produced by GLoVe are just as good as the ones produced by word2vec, and it’s way easier to train. We will also look at some classical NLP problems, like parts-of-speech tagging and named entity recognition, and use recurrent neural networks to solve them. You’ll see that just about any problem can be solved using neural networks, but you’ll also learn the dangers of having too much complexity. Lastly, you’ll learn about recursive neural networks, which finally help us solve the problem of negation in sentiment analysis. Recursive neural networks exploit the fact that sentences have a tree structure, and we can finally get away from naively using bag-of-words. All of the materials required for this course can be downloaded and installed for FREE. We will do most of our work in Numpy, Matplotlib, and Theano. I am always available to answer your questions and help you along your data science journey. See you in class! https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=EARLYBIRDSITE UPDATE: New coupon if the above is sold out: https://www.udemy.com/natural-language-processing-with-deep-learning-in-python/?couponCode=SLOWBIRD_SITE #deep learning #GLoVe #natural language processing #nlp #python #recursive neural networks #tensorflow #theano #word2vec

# Data Science: Natural Language Processing in Python

February 11, 2016

Do you want to learn natural language processing from the ground up? If you hate math and want to jump into purely practical coding examples, my new course is for you. You can check it out at Udemy: https://www.udemy.com/data-science-natural-language-processing-in-python I am posting the course summary here also for convenience: In this course you will build MULTIPLE practical systems using natural language processing, or NLP. This course is not part of my deep learning series, so there are no mathematical prerequisites – just straight up coding in Python.
All the materials for this course are FREE. After a brief discussion about what NLP is and what it can do, we will begin building very useful stuff. The first thing we’ll build is a spam detector. You likely get very little spam these days, compared to, say, the early 2000s, because of systems like these. Next we’ll build a model for sentiment analysis in Python. This is something that allows us to assign a score to a block of text that tells us how positive or negative it is. People have used sentiment analysis on Twitter to predict the stock market. We’ll go over some practical tools and techniques like the NLTK (natural language toolkit) library and latent semantic analysis, or LSA. Finally, we end the course by building an article spinner. This is a very hard problem and even the most popular products out there these days don’t get it right. These lectures are designed to just get you started and to give you ideas for how you might improve on them yourself. Once mastered, you can use it as an SEO (search engine optimization) tool. Internet marketers everywhere will love you if you can do this for them! As a thank you for visiting this site, I’ve created a coupon that gets you 70% off. Click here to get the course for only $15. #article spinner #latent semantic analysis #latent semantic indexing #machine learning #natural language processing #nlp #pca #python #spam detection #svd

# Probability Smoothing for Natural Language Processing

January 23, 2016

Level: Beginner. Topic: Natural language processing (NLP). This is a very basic technique that can be applied to most machine learning algorithms you will come across when you’re doing NLP. Suppose, for example, you are creating a “bag of words” model, and you have just collected data from a set of documents with a very small vocabulary.
Your dictionary looks like this: {"cat": 10, "dog": 10, "parrot": 10} You would naturally assume that the probability of seeing the word “cat” is 1/3, and similarly P(dog) = 1/3 and P(parrot) = 1/3. Now, suppose I want to determine the probability of P(mouse). Since “mouse” does not appear in my dictionary, its count is 0, therefore P(mouse) = 0. This is a problem! If you wanted to do something like calculate a likelihood, you’d have $$P(\text{document}) = P(\text{words that are not mouse}) \times P(\text{mouse}) = 0$$ This is where smoothing enters the picture. We simply add 1 to the numerator, and add the vocabulary size (V = total number of distinct words) to the denominator of our probability estimate. $$P(\text{word}) = \frac{\text{word count} + 1}{\text{total number of words} + V}$$ Now our probabilities can approach 0, but never actually reach 0. For a word we haven’t seen before, the probability is simply: $$P(\text{new word}) = \frac{1}{N + V}$$ where N is the total number of words in the sample. You can see how this accounts for sample size as well. If our sample size is small, we will have more smoothing, because N will be smaller.

## N-gram probability smoothing for natural language processing

An n-gram model (e.g. bigram, trigram) gives a probability estimate of a word given past words. For example, in recent years, $$P(\text{scientist} \mid \text{data})$$ has probably overtaken $$P(\text{analyst} \mid \text{data})$$. In general we want to measure: $$P(w_i \mid w_{i-1})$$ This probably looks familiar if you’ve ever studied Markov models. You can see how such a model would be useful for, say, article spinning. You could potentially automate writing content online by learning from a huge corpus of documents, and sampling from a Markov chain to create new documents. Disclaimer: you will get garbage results; many have tried and failed, and Google already knows how to catch you doing it. It will take much more ingenuity to solve this problem.
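Going back to the add-1 formula: it is a one-liner in code. Here is a sketch using the toy cat/dog/parrot dictionary above (taking V to be the three distinct words seen, as the formula does; the function name is mine):

```python
counts = {"cat": 10, "dog": 10, "parrot": 10}
N = sum(counts.values())  # total number of words observed (30)
V = len(counts)           # vocabulary size, i.e. distinct words seen (3)

def p_smoothed(word):
    """Add-1 (Laplace) smoothed probability: (count + 1) / (N + V)."""
    return (counts.get(word, 0) + 1) / (N + V)

print(p_smoothed("cat"))    # 11/33, close to the unsmoothed 1/3
print(p_smoothed("mouse"))  # 1/33 — small, but no longer zero
```

Notice that the seen words give up a little probability mass (11/33 instead of 1/3) and that mass is what keeps unseen words from zeroing out the likelihood.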
The maximum likelihood estimate for the above conditional probability is: $$P(w_i \mid w_{i-1}) = \frac{\text{count}(w_{i-1}, w_i)}{\text{count}(w_{i-1})}$$ You can see that as we increase the complexity of our model, say, to trigrams instead of bigrams, we would need more data in order to estimate these probabilities accurately. $$P(w_i \mid w_{i-1}, w_{i-2}) = \frac{\text{count}(w_{i-2}, w_{i-1}, w_i)}{\text{count}(w_{i-2}, w_{i-1})}$$ So what do we do? You could use the simple “add-1” method above (also called Laplace smoothing), or you can use linear interpolation. What does this mean? It means we simply make the probability a linear combination of the maximum likelihood estimates of itself and lower-order probabilities. It’s easier to see in math… $$P(w_i \mid w_{i-1}, w_{i-2}) = \lambda_3 P_{ML}(w_i \mid w_{i-1}, w_{i-2}) + \lambda_2 P_{ML}(w_i \mid w_{i-1}) + \lambda_1 P_{ML}(w_i)$$ We treat the lambdas like probabilities, so we have the constraints $$\lambda_i \geq 0$$ and $$\sum_i \lambda_i = 1$$. The question now is, how do we learn the values of lambda? One method is “held-out estimation” (the same thing you’d do to choose hyperparameters for a neural network). You take a part of your training set, and choose values for lambda that maximize the objective (or minimize the error) on that held-out set. If you have ever studied linear programming, you can see how it would be related to solving the above problem. Another method might be to base it on the counts. This would work similarly to the “add-1” method described above. If we have a higher count for $$P_{ML}(w_i \mid w_{i-1}, w_{i-2})$$, we would want to rely on it more than on $$P_{ML}(w_i)$$. If we have a lower count, we know we have to depend more on $$P_{ML}(w_i)$$.

## Good-Turing smoothing and Kneser-Ney smoothing

These are more complicated topics that we won’t cover here, but may be covered in the future if the opportunity arises. Have you had success with probability smoothing in NLP? Let me know in the comments below!
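As a concrete sketch of the linear interpolation idea discussed above (the tiny corpus, the λ value, and the helper names are all illustrative; in practice the λ's would be tuned on held-out data), here is a bigram/unigram mixture:

```python
from collections import Counter

tokens = "the cat sat on the mat the cat ate".split()

unigrams = Counter(tokens)                 # word counts
bigrams = Counter(zip(tokens, tokens[1:])) # adjacent-pair counts
N = len(tokens)

def p_ml_unigram(w):
    """Unigram MLE: count(w) / N."""
    return unigrams[w] / N

def p_ml_bigram(w, prev):
    """Bigram MLE: count(prev, w) / count(prev); zero if prev unseen."""
    return bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0

def p_interp(w, prev, lam=0.7):
    """Linear interpolation: lam * bigram MLE + (1 - lam) * unigram MLE."""
    return lam * p_ml_bigram(w, prev) + (1 - lam) * p_ml_unigram(w)

print(p_interp("cat", "the"))  # mixes evidence from both estimates
```

Because the λ's sum to 1 and each component distribution sums to 1 over the vocabulary, the interpolated estimate is still a valid probability distribution.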
# Building.cpp

## Overview

Building is a class declared in Building.h and is responsible for containing the full state of the game. It controls all of the Elevators and Floors by performing the Moves and adding the Person objects that are presented to it. There is a single instance of the Building class that will be used by the Game class to track the state of the assets while the game is played.

## Member Data

• elevators[NUM_ELEVATORS] is an array that represents all of the elevators in the building.
• floors[NUM_FLOORS] is an array that represents each floor in the building.
• time is an integer that represents the internal clock of the game, which ticks forward on each turn. It is used to regulate new spawns and the movement and updating of the elevators and floors.

## Member Functions

Many member functions have been written for you. Please refer to their RMEs for reference. You will be writing the implementations for the following functions:

### tick

/* * Requires: move is a valid move * Modifies: The private member variables of building * Effects: Increments time and calls update() on the input move. * Then, ticks all of the elevators with the new time. * Next, ticks all of the building floors, keeping track of new * exploded people. * Returns the total number of people that exploded in this tick */ int tick(Move move);

tick is the function that advances the game one turn. During each turn, we give the player the opportunity to make one Move, which comes in as an argument to this function. Overall, this function is responsible for: 1. Updating the time 2. Applying the Move to the building and updating the appropriate state. 3. Calling tick with the new time on all the assets in the building. 4.
Collecting and returning the information about the People who exploded that turn, for scoring purposes

### update

/* * Requires: move is a valid move * Modifies: The building member variables affected by the move * Effects: Applies the move to the building: * * If the move is a Pass Move, nothing happens * * If the move is a Pickup Move, copies the list of people to * pickup into an array, and calls removePeople() on the * appropriate floor * * For both Pickup Moves and Service Moves, the appropriate * elevator should be sent to service the targetFloor of the move */ void update(Move move);

This is one part of a tick, during which we take the incoming Move and use it to update the appropriate Floor or Elevator. (See the section on Move for definitions of Pass Move, Pickup Move, and Service Move.)

• There’s nothing to be done for Pass Moves. Game moves (save or quit moves) will never be passed to this function by the game.
• Both Pickup Moves and Service Moves require the appropriate elevator to change floors. This is done by calling elevator.serviceRequest() (see the Elevator section for more details).
• For Pickup Moves, we have to remove People from the Floor where the Move is happening. This is done using floor.removePeople().

### spawnPerson

/* * Requires: newPerson is a valid Person object * Modifies: A floor in the building * Effects: Adds a person to the Floor corresponding to startingFloor * of the event */ void spawnPerson(Person newPerson);

Each Person corresponds to a person appearing on a Floor. So, to process a person, we call floor.addPerson() on the Floor where this Person appeared.
Corpus ID: 49975199

# GLASHOW-WEINBERG-SALAM THEORY OF ELECTROWEAK INTERACTIONS AND THE NEUTRAL CURRENTS

```@inproceedings{S2002GLA, title={GLASHOW-WEINBERG-SALAM THEORY OF ELECTROWEAK INTERACTIONS AND THE NEUTRAL CURRENTS}, author={M. S. and Jǐŕı Ho{\vs}ek}, year={2002} }```

• Published 2002

In the first part of the review we expound in detail the unified theory of weak and electromagnetic interactions of Glashow, Weinberg and Salam. In the second part, on the basis of this theory, a number of the neutral-current-induced processes are discussed. We consider in detail the deep inelastic scattering of neutrinos on nucleons, the P-odd asymmetry in the deep inelastic scattering of longitudinally polarized electrons by nucleons, the scattering of neutrinos on electrons, and the elastic scattering of…
# Word (group theory)

In group theory, a word is any written product of group elements and their inverses. For example, if x, y and z are elements of a group G, then xy, z−1xzz and y−1zxx−1yz−1 are words in the set {x, y, z}. Two different words may evaluate to the same value in G,[1] or even in every group.[2] Words play an important role in the theory of free groups and presentations, and are central objects of study in combinatorial group theory.

## Definition

Let G be a group, and let S be a subset of G. A word in S is any expression of the form ${\displaystyle s_{1}^{\varepsilon _{1}}s_{2}^{\varepsilon _{2}}\cdots s_{n}^{\varepsilon _{n}}}$ where s1,...,sn are elements of S and each εi is ±1. The number n is known as the length of the word. Each word in S represents an element of G, namely the product of the expression. By convention, the (unique)[3] identity element can be represented by the empty word, which is the unique word of length zero.

## Notation

When writing words, it is common to use exponential notation as an abbreviation. For example, the word ${\displaystyle xxy^{-1}zyzzzx^{-1}x^{-1}\,}$ could be written as ${\displaystyle x^{2}y^{-1}zyz^{3}x^{-2}.\,}$ This latter expression is not a word itself; it is simply a shorter notation for the original. When dealing with long words, it can be helpful to use an overline to denote inverses of elements of S. Using overline notation, the above word would be written as follows: ${\displaystyle x^{2}{\overline {y}}zyz^{3}{\overline {x}}^{2}.\,}$

## Words and presentations

A subset S of a group G is called a generating set if every element of G can be represented by a word in S. If S is a generating set, a relation is a pair of words in S that represent the same element of G. These are usually written as equations, e.g.
${\displaystyle x^{-1}yx=y^{2}.\,}$ A set ${\displaystyle {\mathcal {R}}}$ of relations defines G if every relation in G follows logically from those in ${\displaystyle {\mathcal {R}}}$, using the axioms for a group. A presentation for G is a pair ${\displaystyle \langle S\mid {\mathcal {R}}\rangle }$, where S is a generating set for G and ${\displaystyle {\mathcal {R}}}$ is a defining set of relations. For example, the Klein four-group can be defined by the presentation ${\displaystyle \langle i,j\mid i^{2}=1,\,j^{2}=1,\,ij=ji\rangle .}$ Here 1 denotes the empty word, which represents the identity element. When S is not a generating set for G, the set of elements represented by words in S is a subgroup of G. This is known as the subgroup of G generated by S, and is usually denoted ${\displaystyle \langle S\rangle }$. It is the smallest subgroup of G that contains the elements of S.

## Reduced words

Any word in which a generator appears next to its own inverse (xx−1 or x−1x) can be simplified by omitting the redundant pair: ${\displaystyle y^{-1}zxx^{-1}y\;\;\longrightarrow \;\;y^{-1}zy.}$ This operation is known as reduction, and it does not change the group element represented by the word. (Reductions can be thought of as relations that follow from the group axioms.) A reduced word is a word that contains no redundant pairs. Any word can be simplified to a reduced word by performing a sequence of reductions: ${\displaystyle xzy^{-1}xx^{-1}yz^{-1}zz^{-1}yz\;\;\longrightarrow \;\;xyz.}$ The result does not depend on the order in which the reductions are performed. If S is any set, the free group over S is the group with presentation ${\displaystyle \langle S\mid \;\rangle }$. That is, the free group over S is the group generated by the elements of S, with no extra relations. Every element of the free group can be written uniquely as a reduced word in S. A word is cyclically reduced if and only if every cyclic permutation of the word is reduced.
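The reduction procedure just described is easy to mechanize. Below is a minimal Python sketch (my own illustration, not from the article): a word is written as a string of generator symbols, with the assumption that an uppercase letter stands for the inverse of the corresponding lowercase generator (so "X" means x⁻¹). A single left-to-right pass with a stack is enough precisely because, as noted above, the result does not depend on the order in which reductions are performed.

```python
# A minimal sketch of free-group word reduction. Assumption: a word is a
# string of generator symbols where an uppercase letter denotes the
# inverse of the corresponding lowercase generator (e.g. "X" = x^-1).
def reduce_word(word):
    """Cancel adjacent inverse pairs (xX or Xx) until none remain."""
    stack = []
    for s in word:
        if stack and stack[-1] == s.swapcase():
            stack.pop()          # s cancels the symbol before it
        else:
            stack.append(s)
    return "".join(stack)

# The example from the text: x z y^-1 x x^-1 y z^-1 z z^-1 y z -> x y z
print(reduce_word("xzYxXyZzZyz"))  # -> "xyz"
```

The stack works because once a prefix is reduced it can only change if the next symbol cancels its last letter, so one pass suffices.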
## Normal forms

A normal form for a group G with generating set S is a choice of one reduced word in S for each element of G. For example: • The words 1, i, j, ij are a normal form for the Klein four-group. • The words 1, r, r2, ..., rn-1, s, sr, ..., srn-1 are a normal form for the dihedral group Dihn. • The set of reduced words in S are a normal form for the free group over S. • The set of words of the form xmyn for m,n ∈ Z are a normal form for the direct product of the cyclic groups 〈x〉 and 〈y〉.

## Operations on words

The product of two words is obtained by concatenation: ${\displaystyle \left(xzyz^{-1}\right)\left(zy^{-1}x^{-1}y\right)=xzyz^{-1}zy^{-1}x^{-1}y.}$ Even if the two words are reduced, the product may not be. The inverse of a word is obtained by inverting each generator, and switching the order of the elements: ${\displaystyle \left(zy^{-1}x^{-1}y\right)^{-1}=y^{-1}xyz^{-1}.}$ The product of a word with its inverse can be reduced to the empty word: ${\displaystyle zy^{-1}x^{-1}y\;y^{-1}xyz^{-1}=1.}$ You can move a generator from the beginning to the end of a word by conjugation: ${\displaystyle x^{-1}\left(xy^{-1}z^{-1}yz\right)x=y^{-1}z^{-1}yzx.}$

## The word problem

Given a presentation ${\displaystyle \langle S\mid {\mathcal {R}}\rangle }$ for a group G, the word problem is the algorithmic problem of deciding, given as input two words in S, whether they represent the same element of G. The word problem is one of three algorithmic problems for groups proposed by Max Dehn in 1911. It was shown by Pyotr Novikov in 1955 that there exists a finitely presented group G such that the word problem for G is undecidable. (Novikov 1955)

## References

• Epstein, David; Cannon, J. W.; Holt, D. F.; Levy, S. V. F.; Paterson, M. S.; Thurston, W. P. (1992). Word Processing in Groups. AK Peters. ISBN 0-86720-244-0. • Novikov, P. S. (1955). "On the algorithmic unsolvability of the word problem in group theory". Trudy Mat. Inst. Steklov (in Russian). 44: 1–143. • Robinson, Derek John Scott (1996).
A Course in the Theory of Groups. Berlin: Springer-Verlag. ISBN 0-387-94461-3. • Rotman, Joseph J. (1995). An Introduction to the Theory of Groups. Berlin: Springer-Verlag. ISBN 0-387-94285-8. • Schupp, Paul E.; Lyndon, Roger C. (2001). Combinatorial Group Theory. Berlin: Springer. ISBN 3-540-41158-5. • Solitar, Donald; Magnus, Wilhelm; Karrass, Abraham (2004). Combinatorial Group Theory: Presentations of Groups in Terms of Generators and Relations. New York: Dover. ISBN 0-486-43830-9. • Stillwell, John (1993). Classical Topology and Combinatorial Group Theory. Berlin: Springer-Verlag. ISBN 0-387-97970-0.

1. ^ for example, fdr1 and r1fc in the group of square symmetries 2. ^ for example, xy and xzz−1y 3. ^ Uniqueness of identity element and inverses
# How to fill a car with tennis balls

If I want to fill the interior of a car with tennis balls, how many will be required? How do I calculate this, given the length, breadth and height of the car (the height varies at each part) and the radius of a tennis ball? Someone please explain it in detail.

- I removed the "logic" tag. – Carl Mummert Feb 8 '13 at 11:45 Do you know calculus? Multivariable calculus? Do you have any idea what shape the car's roof is? You say the height varies, so we will have to assume the base is a rectangle. Do you have an equation that describes, parametrizes the roof? With only this, your question is incomplete. – user45099 Feb 8 '13 at 11:54 I know this cannot be solved without knowing the shape structure. I just want to know how it can be solved. – kevin Feb 8 '13 at 12:08 A crude approximation would be to divide the volume of the car $lbw$ by the volume of a tennis ball $4\pi r^3 / 3$ and multiply by an appropriate sphere packing density. Let's say: $0.75 \cdot lbw / (4\pi r^3 / 3)$. – anthus Feb 8 '13 at 12:12

Alright, first of all you should try to work out the volume of the car and the volume of the balls and figure it out from there. In trying to figure out the volume of the car, you can separate the car into pieces and work your way through from there. However, this will not give you a correct answer in real life. While trying to figure out the volume of the balls, you should consider the best possible arrangement in which you can pack them. Because in whatever way you combine them, there will be some spaces between them, and those gaps are why the naive volume division overestimates. For this packing question, I suggest looking at the packing ratio: http://mathworld.wolfram.com/SpherePacking.html I hope this helps. -
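The crude approximation from the comments can be sketched in a few lines of Python. Everything here is an assumption for illustration: the cabin is treated as a box, the packing density defaults to 0.64 (random close packing; the ideal lattice bound is π/(3√2) ≈ 0.74), and the example dimensions and ball radius are made up.

```python
import math

# Rough sketch of the suggested estimate: (interior volume / ball volume)
# scaled by a packing density. The box shape and the density 0.64 are
# simplifying assumptions, not properties of any real car.
def ball_estimate(length_m, breadth_m, height_m, ball_radius_m, packing=0.64):
    interior = length_m * breadth_m * height_m          # box approximation
    ball = (4.0 / 3.0) * math.pi * ball_radius_m ** 3   # volume of one ball
    return int(packing * interior / ball)

# e.g. a hypothetical 3 m x 1.5 m x 1.2 m cabin, ~3.35 cm ball radius
print(ball_estimate(3.0, 1.5, 1.2, 0.0335))
```

Swapping in 0.74 for `packing` gives an upper bound for a perfectly ordered packing; real hand-filling would land below 0.64.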
# Package glossaries Error: Glossary entry has already been defined

I'm getting this error when I use the \gls command:

Package glossaries Error: Glossary entry `DCD' has already been defined.

This is my preamble:

\documentclass[12pt]{book} \usepackage{graphicx}% Include figure files \usepackage{dcolumn}% Align table columns on decimal point \usepackage{bm}% bold math \usepackage{upgreek} \usepackage{float} \usepackage{siunitx} \usepackage{amsmath} \usepackage{a4wide} \usepackage{tabu} \usepackage{fancyhdr} \usepackage{graphicx} \usepackage{verbatim} \usepackage{amssymb,mathtools} \usepackage{pdfpages} \usepackage{array} \usepackage{braket} \usepackage{multirow} \usepackage{multicol} \usepackage[bookmarks]{hyperref} \usepackage[acronym]{glossaries} \usepackage[numbers,sort&compress]{natbib} \makeglossaries \newacronym{DCD}{DCD}{Double Crystal Diffractometer}

It seems to work, but I continuously get the error (the red one in Overleaf).

There are a few mistakes: 1. you must use \makeglossaries before creating any glossary or acronym 2. you should create a Minimum Working Example that is compilable so that we can understand your issue ;) 3. probably (I'm not sure as you did not write an MWE) you printed a glossary and not the acronyms 4. you load a file Acronyms.tex and you also create an acronym in your preamble, which is a bit weird

Proposed code: \documentclass[12pt]{book} \usepackage[bookmarks]{hyperref} \usepackage[acronym]{glossaries} \makeglossaries %\loadglsentries[type=\acronymtype]{Acronyms} % OK if you have an Acronyms.tex file \newacronym{DCD}{DCD}{Double Crystal Diffractometer} \begin{document} \gls{DCD} % \printglossary % NOT OK as you only defined acronyms \printacronyms \end{document}

• you were totally right! Loading the entries twice, once even before \makeglossaries, was the mistake. – jjvv Oct 16 '19 at 9:58
# How to rig this simple machine?

So I was trying to make this "unique fan". I provided the video sample below; it's a bit low-res but I think the mechanism of that machine is still readable: Video reference

For anyone who is still unclear or can't see much detail in that video, I made a simulated rig by manually keyframing it, to give a better idea of what I'm asking. Here: Simulated Rig mechanism

Remember, this is manually keyframed, and I don't want to keyframe it by hand because that's so inaccurate, and I want ARM 2 to stay on that FAN ARM hole. So I think rigging it is a good idea.

So I tried to recreate it, but I'm confused about how to rig it. I made the bones and parented the model to them. There are 2 armatures, 1 on the arm, and 1 on the mechanical arm, but I honestly have no idea how to put constraints on it. Can anyone help me? This is my first rig, for learning purposes. Here is the blend file: https://blenderartists.org/uploads/short-url/mt2kRaMVeA2ibumusd1lPG9egEx.blend Here is the simulated rig blend file if you want to see the mechanism more clearly: https://blenderartists.org/uploads/short-url/zNbCDaWdyi2vwlGtAtVOqfxI3y1.blend Any help much appreciated!

• Hi Chris, honestly, I am sorry, I cannot work out from the video how the mechanism should work. Can you please provide a better quality video or explain with pictures how this should work? Thanks. Jan 20, 2021 at 21:49 • or do you mean like this: youtu.be/ZOvgmKNd3Js ? Jan 20, 2021 at 22:01 • Hello! Thanks for answering. I just edited the post and provided the simulated-rig video made by keyframing, so you can see the mechanism. Here is the video link: youtu.be/kzv4XvsqrQY Jan 21, 2021 at 7:04 • By your video link, yeah, it gets close to what I want, but that ARM 2 (I just named it so you know which arm I'm talking about) is static. I want that to rotate too (like in my simulated rig video). Any ideas?
Jan 21, 2021 at 7:06

As @MohammadHossein has said, it's important that the link is longer than the driving arm. I think of this as a 2-bone IK rig, just like a character arm, if you want to look up tutorials on how to do those. I've labelled the bones by analogy to an arm.

• The Fore Arm bone has an IK bone constraint, target: IK Target, pole: IK Pole. The chain length is 2, to include the Upper Arm. • The bones, target, and pole all lie in the same plane, parallel to the floor, normal to the axes of rotation • The IK target is parented to the rotating arm, and is the only connection between the rotating arm and the IK chain. • The meshes do not need to be in the same plane as the bones.

The differences from a character arm are: • It's mechanical, so there's no need for deformation or weight-painting. Instead, the mesh objects are directly bone-parented to their bones • There are no poses, so there's no disadvantage in having Empties as your IK target and IK pole.

The rotation of the driving arm will then produce the reciprocal motion of the fan.

For this rig we should think like a mechanical engineer. We see that the driving force comes from that little handle and the other parts rotate with respect to it. You just need something like this: essentially a reverse IK for your setup. Keep in mind that for a clean rig you must create all of your bones at the same height, i.e. the same Z value. Of course, the important point is to choose good lengths for your arms to achieve a clean movement: your middle arm should be longer than the driving arm.
2015 Dec 16 # Topology & geometry: Yochay Jerby (HUJI), " Exceptional collections on toric Fano manifolds and the Landau-Ginzburg equations" 11:00am to 2:30pm ## Location: Ross building, Hebrew University (Seminar Room 70A) Abstract: For a toric Fano manifold $X$ denote by $Crit(X) \subset (\mathbb{C}^{\ast})^n$ the solution scheme of the Landau-Ginzburg system of equations of $X$. Examples of toric Fano manifolds with $rk(Pic(X)) \leq 3$ which admit full strongly exceptional collections of line bundles were recently found by various authors. For these examples we construct a map $E : Crit(X) \rightarrow Pic(X)$ whose image $\mathcal{E}=\left \{ E(z) \vert z \in Crit(X) \right \}$ is a full strongly exceptional collection satisfying the M-aligned property. 2016 Jun 15 # Topology & geometry, Vasily Dolgushev (Temple University), "The Intricate Maze of Graph Complexes" 11:00am to 12:45pm ## Location: Ross building, Hebrew University (Seminar Room 70A) Abstract: In the paper "Formal noncommutative symplectic geometry'', Maxim Kontsevich introduced three versions of cochain complexes GCCom, GCLie and GCAs "assembled from'' graphs with some additional structures. The graph complex GCCom (resp. GCLie, GCAs) is related to the operad Com (resp. Lie, As) governing commutative (resp. Lie, associative) algebras. Although the graphs complexes GCCom, GCLie and GCAs (and their generalizations) are easy to define, it is hard to get very much information about their cohomology spaces. 2016 Feb 17 # Menachem Magidor 70th Birthday Conference Wed, 17/02/2016 (All day) to Fri, 19/02/2016 (All day) 2016 Nov 24 # Groups and dynamics- Oren Becker 10:30am to 11:30am ## Location: Ross 70 Speaker: Oren Becker Title: Locally testable groups Abstract: Arzhantseva and Paunescu [AP2015] showed that if two permutations X and Y in Sym(n) nearly commute (i.e. XY is close to YX), then the pair (X,Y) is close to a pair of permutations that really commute. 
2016 Dec 22 # Groups and dynamics: Masaki Tsukamoto (lecture 3) 10:30am to 11:30am Ross 70 2016 Jan 07 # Groups & dynamics: Mark Shusterman (TAU) - Ranks of subgroups in boundedly generated groups 10:00am to 11:00am ## Location: Ross building, Hebrew University of Jerusalem, (Room 70) To every topological group, one can associate a unique universal minimal flow (UMF): a flow that maps onto every minimal flow of the group. For some groups (for example, the locally compact ones), this flow is not metrizable and does not admit a concrete description. However, for many "large" Polish groups, the UMF is metrizable, can be computed, and carries interesting combinatorial information. The talk will concentrate on some new results that give a characterization of metrizable UMFs of Polish groups. It is based on two papers, one joint 2016 Dec 08 # Groups and dynamics: Masaki Tsukamoto (lecture 2) 10:30am to 11:30am Ross 70 2016 Dec 29 # Groups and dynamics: Masaki Tsukamoto (lecture 4) 10:30am to 11:30am Ross 70 2016 Mar 03 # Groups & dynamics: Karim Adiprasito (HUJI) - Contractible manifolds, hyperbolicity and the fundamental pro-group at infinity 10:00am to 11:00am ## Location: Ross building, Hebrew University of Jerusalem, (Room 70)
2016 Nov 17 # Groups and dynamics: Arie Levit 10:30am to 11:30am ## Location: Ross 70 Speaker: Arie Levit Weizmann Institute Title: Local rigidity of uniform lattices Abstract: A lattice is topologically locally rigid (t.l.r) if small deformations of it are isomorphic lattices. Uniform lattices in Lie groups were shown to be t.l.r by Weil [60']. We show that uniform lattices are t.l.r in any compactly generated topological group. 2016 Dec 15 # Groups and dynamics: Yair Hartman (Northwestern) - Percolation, Invariant Random Subgroups and Furstenberg Entropy 10:30am to 11:30am ## Location: Ross 70 Abstract: In this talk I'll present a joint work with Ariel Yadin, in which we solve the Furstenberg Entropy Realization Problem for finitely supported random walks (finite range jumps) on free groups and lamplighter groups. This generalizes a previous result of Bowen. The proof consists of several reductions which have geometric and probabilistic flavors of independent interests. All notions will be explained in the talk, no prior knowledge of Invariant Random Subgroups or Furstenberg Entropy is assumed. 2015 Dec 31 # Groups & dynamics: Thang Neguyen (Weizmann) - Rigidity of quasi-isometric embeddings 10:00am to 11:00am ## Location: Ross building, Hebrew University of Jerusalem, (Room 70)
2016 Mar 31 # Groups & dynamics: Paul Nelson (ETH) - Quantum variance on quaternion algebras 10:00am to 11:00am ## Location: Ross building, Hebrew University of Jerusalem, (Room 70) 2016 Dec 01 # Groups and dynamics: Masaki Tsukamoto (lecture 1) 10:30am to 11:30am ## Location: Ross 70 INTRODUCTION TO MEAN DIMENSION AND THE EMBEDDING PROBLEM OF DYNAMICAL SYSTEMS (Part 1) 2015 Dec 02 # Dynamics & probability: Ron Rosenthal (ETHZ) "Local limit theorem for certain ballistic random walks in random environments" 2:00pm to 3:00pm ## Location: Ross 70 Title: Local limit theorem for certain ballistic random walks in random environments Abstract: We study the model of random walks in random environments in dimension four and higher under Sznitman's ballisticity condition (T'). We prove a version of a local Central Limit Theorem for the model and also the existence of an equivalent measure which is invariant with respect to the point of view of the particle. This is a joint work with Noam Berger and Moran Cohen.
# Deriving the gamma distribution

Let’s take a look at the gamma distribution: $f(x) = \frac{1}{\Gamma(\alpha)\theta^{\alpha}}x^{\alpha-1}e^{-x/\theta}$ I don’t know about you, but I think it looks pretty horrible. Typing it up in Latex is no fun and gave me second thoughts about writing this post, but I’ll plow ahead. First, what does it mean? One interpretation of the gamma distribution is that it’s the theoretical distribution of waiting times until the $\alpha$-th change for a Poisson process. In another post I derived the exponential distribution, which is the distribution of times until the first change in a Poisson process. The gamma distribution models the waiting time until the 2nd, 3rd, 4th, 38th, etc, change in a Poisson process. As we did with the exponential distribution, we derive it from the Poisson distribution. Let W be the random variable that represents waiting time. Its cumulative distribution function then would be $F(w) = P(W \le w) = 1 -P(W > w)$ But notice that $P(W > w)$ is the probability of fewer than $\alpha$ changes in the interval [0, w]. The probability of that in a Poisson process with mean $\lambda w$ is $= 1 - \sum_{k=0}^{\alpha-1}\frac{(\lambda w)^{k}e^{- \lambda w}}{k!}$ To find the probability density function we take the derivative of $F(w)$. But before we do that we can simplify matters a little by splitting off the $k=0$ term of the summation: $= 1 - \frac{(\lambda w)^{0}e^{-\lambda w}}{0!}-\sum_{k=1}^{\alpha-1}\frac{(\lambda w)^{k}e^{- \lambda w}}{k!} = 1 - e^{-\lambda w}-\sum_{k=1}^{\alpha-1}\frac{(\lambda w)^{k}e^{- \lambda w}}{k!}$ Why did I know to do that? Because my old statistics book did it that way.
Moving on… Differentiating term by term with the product rule (each $k!$ is a constant): $F'(w) = 0 - e^{-\lambda w}(-\lambda)-\sum_{k=1}^{\alpha-1}\frac{k\lambda(\lambda w)^{k-1}e^{- \lambda w}+(\lambda w)^{k}(-\lambda)e^{- \lambda w}}{k!}$ After lots of simplifying… $= \lambda e^{-\lambda w}+\lambda e^{-\lambda w}\sum_{k=1}^{\alpha-1}[\frac{(\lambda w)^{k}}{k!}-\frac{(\lambda w)^{k-1}}{(k-1)!}]$ And we’re done! Technically we have the gamma probability distribution. But it’s a little too bloated for mathematical taste. And of course it doesn’t match the form of the gamma distribution I presented in the beginning, so we have some more simplifying to do. Let’s carry out the summation for a few terms and see what happens: $\sum_{k=1}^{\alpha-1}[\frac{(\lambda w)^{k}}{k!}-\frac{(\lambda w)^{k-1}}{(k-1)!}] =$ $[\lambda w - 1] + [\frac{(\lambda w)^{2}}{2!}-\lambda w]+[\frac{(\lambda w)^{3}}{3!} - \frac{(\lambda w)^{2}}{2!}] +[\frac{(\lambda w)^{4}}{4!} - \frac{(\lambda w)^{3}}{3!}] + \ldots$ $+ [\frac{(\lambda w)^{\alpha - 2}}{(\alpha - 2)!} - \frac{(\lambda w)^{\alpha - 3}}{(\alpha - 3)!}] + [\frac{(\lambda w)^{\alpha - 1}}{(\alpha - 1)!} - \frac{(\lambda w)^{\alpha - 2}}{(\alpha - 2)!}]$ Notice that besides the -1 and 2nd to last term, everything cancels, so we’re left with $= -1 + \frac{(\lambda w)^{\alpha -1}}{(\alpha -1)!}$ Plugging that back into our expression for the pdf gives us $= \lambda e^{-\lambda w} + \lambda e^{-\lambda w}[-1 + \frac{(\lambda w)^{\alpha -1}}{(\alpha -1)!}]$ This simplifies to $=\frac{\lambda (\lambda w)^{\alpha -1}}{(\alpha -1)!}e^{-\lambda w}$ Now that’s a lean formula, but still not like the one I showed at the beginning. To get the “classic” formula we do two things: 1. Let $\lambda = \frac{1}{\theta}$, just as we did with the exponential 2. Use the fact that $(\alpha -1)!
= \Gamma(\alpha)$ Doing that takes us to the end: $= \frac{\frac{1}{\theta}(\frac{w}{\theta})^{\alpha-1}}{\Gamma(\alpha)}e^{-w/\theta} = \frac{1}{\theta}(\frac{1}{\theta})^{\alpha -1}w^{\alpha - 1}\frac{1}{\Gamma(\alpha)}e^{-w/\theta} = (\frac{1}{\theta})^{\alpha}\frac{1}{\Gamma(\alpha)}w^{\alpha -1}e^{-w/\theta}$ $= \frac{1}{\Gamma(\alpha)\theta^{\alpha}}w^{\alpha-1}e^{-w/\theta}$ We call $\alpha$ the shape parameter and $\theta$ the scale parameter because of their effect on the shape and scale of the distribution. Holding $\theta$ (scale) at a set value and trying different values of $\alpha$ (shape) changes the shape of the distribution (at least when you go from $\alpha =1$ to $\alpha =2$): Holding $\alpha$ (shape) at a set value and trying different values of $\theta$ (scale) changes the scale of the distribution: In the applied setting $\theta$ (scale) is the mean wait time between events and $\alpha$ is the number of events. If we look at the first figure above, we’re holding the wait time at 1 and changing the number of events. We see that the probability of waiting 5 minutes or longer increases as the number of events increases. This is intuitive as it would seem more likely to wait 5 minutes to observe 4 events than to wait 5 minutes to observe 1 event, assuming a one minute wait time between each event. The second figure holds the number of events at 4 and changes the wait time between events. We see that the probability of waiting 10 minutes or longer increases as the time between events increases. Again this is pretty intuitive as you would expect a higher probability of waiting more than 10 minutes to observe 4 events when there is a mean wait time of 4 minutes between events versus a mean wait time of 1 minute. Finally notice that if you set $\alpha = 1$, the gamma distribution simplifies to the exponential distribution. Update 5 Oct 2013: I want to point out that $\alpha$ and $\theta$ can take continuous values like 2.3, not just integers.
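The algebra above can be sanity-checked numerically with nothing but the standard library. This is my own sketch, not part of the original post: it verifies that the "classic" pdf reduces to the exponential pdf at $\alpha = 1$, and that the lean intermediate form agrees with it for an integer $\alpha$ once we set $\theta = 1/\lambda$.

```python
import math

# Numerical sanity check of the derivation (illustrative sketch).
def gamma_pdf(w, alpha, theta):
    """The 'classic' gamma pdf derived above."""
    return w ** (alpha - 1) * math.exp(-w / theta) / (math.gamma(alpha) * theta ** alpha)

def expon_pdf(w, theta):
    """Exponential pdf: the alpha = 1 special case."""
    return math.exp(-w / theta) / theta

# alpha = 1 should reduce the gamma pdf to the exponential pdf
for w in (0.5, 1.0, 3.0):
    assert abs(gamma_pdf(w, 1, 2.0) - expon_pdf(w, 2.0)) < 1e-12

# The lean form lambda*(lambda w)^(a-1)*exp(-lambda w)/(a-1)! should match
# the classic form with theta = 1/lambda when a is a positive integer.
lam, a, w = 0.5, 4, 3.0
lean = lam * (lam * w) ** (a - 1) * math.exp(-lam * w) / math.factorial(a - 1)
assert abs(lean - gamma_pdf(w, a, 1 / lam)) < 1e-12
print("checks pass")
```

Note that `gamma_pdf` happily accepts non-integer `alpha` like 2.3, in line with the update above.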
So what I’ve really derived in this post is the relationship between the gamma and Poisson distributions.

## 6 thoughts on “Deriving the gamma distribution”

First of all I would like to express my great admiration for this wonderful derivation. However, I still have a problem. This derivation is perfect when alpha is a positive integer n. In this case the gamma distribution can be described as the sum of n independent exponentially distributed random variables, each with the same exponential distribution. My question now is: how would you describe the gamma distribution for a continuous alpha, 0 < alpha?

1. ctlr Post author You raise a good point and I realize now this post is kind of wrong. Of course alpha can take continuous values. I describe the gamma distribution as if it only applies to waiting times in a Poisson process. What I should have said is something like “the waiting time W until the alpha-th change in a Poisson process has a gamma distribution.” In other words I derived the relationship between the gamma and Poisson distributions and should have clearly stated that. I will update the post accordingly in a few moments. To derive the distribution for continuous alpha, you start with the gamma function. Casella and Berger demonstrate this in their book, Statistical Inference, on page 99.

2. David Hey, this post was really helpful, thanks. In it you mention your “old statistics book”. Could you let me know the name of this book? Thanks!

1. ctlr Post author Glad the post was helpful. The book I referred to is Probability and Statistical Inference, 7th ed., by Hogg and Tanis.
## HATS-17b: A TRANSITING COMPACT WARM JUPITER IN A 16.3 DAY CIRCULAR ORBIT

### Description

We report the discovery of HATS-17b, the first transiting warm Jupiter of the HATSouth network. HATS-17b transits its bright (V = 12.4) G-type (${M}_{\star }$ = $1.131\pm 0.030$ ${M}_{\odot }$, ${R}_{\star }$ = ${1.091}_{-0.046}^{+0.070}$ ${R}_{\odot }$) metal-rich ([Fe/H] = +0.3 dex) host star in a circular orbit with a period of P = $16.2546$ days. HATS-17b has a very compact radius of $0.777\pm 0.056$ ${R}_{{\rm{J}}}$ given its Jupiter-like mass of $1.338\pm 0.065$ ${M}_{{\rm{J}}}$. Up to…

Collections: ANU Research Publications
Year: 2016
Type: Journal article
URI: http://hdl.handle.net/1885/152028
Journal: Astronomical Journal
DOI: 10.3847/0004-6256/151/4/89
Open Access
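To illustrate what "very compact" means here, the quoted mass and radius can be folded into a mean density. This back-of-envelope check is my own addition, not part of the abstract, and the Jupiter constants below are assumed nominal values.

```python
import math

# Hypothetical density check from the quoted parameters of HATS-17b:
# M = 1.338 M_J, R = 0.777 R_J. Assumed constants: M_J ~ 1.898e27 kg,
# mean R_J ~ 6.9911e7 m.
M_J, R_J = 1.898e27, 6.9911e7

def mean_density(m_jup, r_jup):
    """Mean density in kg/m^3 of a planet given in Jupiter units."""
    mass = m_jup * M_J
    radius = r_jup * R_J
    return mass / ((4.0 / 3.0) * math.pi * radius ** 3)

rho = mean_density(1.338, 0.777)
print(round(rho))  # roughly 3.8 g/cm^3, about 3x Jupiter's mean density
```

A Jupiter-mass planet in only 0.777 Jupiter radii is nearly three times denser than Jupiter itself, which is what the "compact" in the title is getting at.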
# How many chloride ions will be found in the formula for cobalt(II) chloride?

Precisely $2$.

$Co(II)$ refers to a formal $Co^{2+}$ ion. And thus we have $\text{cobaltous chloride, }Co^{2+} + 2\times Cl^{-}$, i.e. $CoCl_{2}$.
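The charge-balance arithmetic behind the answer can be spelled out in a toy function (my own illustration, not chemistry software): the formula unit must be electrically neutral, so the anion count is the cation charge divided by the magnitude of the anion charge.

```python
# Toy charge-balance check: how many singly- or multiply-charged anions
# are needed to neutralize one cation (integer charges assumed).
def anion_count(cation_charge, anion_charge):
    assert cation_charge > 0 > anion_charge
    assert cation_charge % abs(anion_charge) == 0  # simple salts only
    return cation_charge // abs(anion_charge)

print(anion_count(2, -1))  # Co(2+) with Cl(-): 2, giving CoCl2
```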
# Siham Aouissi, Daniel C. Mayer, Moulay Chrif Ismaili, Mohamed Talbi, Abdelmalek Azizi

$3$-rank of ambiguous class groups in cubic Kummer extensions

Let $\mathrm{k}=\mathbb{Q}(\sqrt[3]{d},\zeta_3)$ be the Galois closure of a pure cubic field $\mathbb{Q}(\sqrt[3]{d})$, where $d>1$ is a cube-free positive integer and $\zeta_3$ is a primitive third root of unity. Denote by $C_{\mathrm{k},3}^{(\sigma)}$ the $3$-group of ambiguous ideal classes of the cubic Kummer extension $\mathrm{k}/\mathbb{Q}(\zeta_3)$ with relative group $G=\operatorname{Gal}(\mathrm{k}/\mathbb{Q}(\zeta_3))=\langle\sigma\rangle$. The aims of this paper are to determine all integers $d$ such that $\operatorname{rank}\,(C_{\mathrm{k},3}^{(\sigma)})=1$, to investigate the multiplicity $m(f)$ of the conductors $f$ corresponding to these radicands $d$, and to classify the fields $\mathrm{k}$ according to the cohomology of their unit groups $E_{\mathrm{k}}$ as Galois modules over $G$. The techniques employed for reaching these goals are relative $3$-genus fields, Hilbert norm residue symbols, and quadratic $3$-ring class groups modulo $f$. All theoretical achievements are underpinned by extensive computational results.
# zbMATH — the first resource for mathematics

Asymptotic properties of covariate-adjusted regression with correlated errors. (English) Zbl 1160.62079

Summary: In covariate-adjusted regression (CAR), the response $(Y)$ and the predictors $(X_r,r=1,\dots ,p)$ are not observed directly. The estimation is based on $n$ independent observations $\{\tilde Y_i,\tilde X_{ri}, U_i\}_{i=1}^n$, where $\tilde Y_i = \psi (U_i)Y_i,\tilde X_{ri} = \phi _r (U_i)X_{ri}$ and $\psi (\cdot)$ and $\{\phi _r(\cdot )\}^p_{r=1}$ are unknown functions. We discuss the asymptotic properties of this method when the observations are correlated, as in regression models for repeated measurements.

##### MSC:
62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)
62F12 Asymptotic properties of parametric estimators
62M20 Inference from stochastic processes and prediction

Full Text:

##### References:
[1] Hastie, T.; Tibshirani, R., Varying coefficient models, J. R. Stat. Soc. B, 55, 757-796, (1993) · Zbl 0796.62060
[2] Kaysen, G.A.; Dubin, J.A.; Müller, H.G.; Mitch, W.E.; Rosales, L.M.; Levin, N.W., Relationship among inflammation nutrition and physiologic mechanisms establishing albumin levels in hemodialysis patients, Kidney Int., 61, 2240-2249, (2002)
[3] Nguyen, D.V., Şentürk, D., 2009. Covariate-adjusted regression for longitudinal data incorporating correlation between repeated measurements. Austral. N. Z. J. Stat. (in press)
[4] Şentürk, D.; Müller, H.G., Covariate-adjusted regression, Biometrika, 92, 75-89, (2005) · Zbl 1068.62082
[5] Şentürk, D.; Müller, H.G., Inference for covariate-adjusted regression via varying coefficient models, Ann. Statist., 34, 654-679, (2006) · Zbl 1095.62045
# Math Help - Partial Fraction Decomposition

1. ## Partial Fraction Decomposition

Hi, I'm stuck on a problem that requires partial fraction decomposition: 2 / [(x^2 + 1)(x^2 + 4)] I can't figure out what to do about the x-squared form in the denominator. edit: I guess those exponents didn't format right. Can someone point me to some info on displaying equations in a pretty, math-ish type way?

2. $\displaystyle\frac{2}{(x^2+1)(x^2+4)}=\frac{Ax+B}{x^2+1}+\frac{Cx+D}{x^2+4}=$ $\displaystyle=\frac{(A+C)x^3+(B+D)x^2+(4A+C)x+4B+D}{(x^2+1)(x^2+4)}$ Then $\displaystyle\left\{\begin{array}{c}A+C=0\\B+D=0\\4A+C=0\\4B+D=2\end{array}\right. \Rightarrow A=C=0, B=\frac{2}{3}, D=-\frac{2}{3}$ So, $\displaystyle\frac{2}{(x^2+1)(x^2+4)}=\frac{2}{3}\left(\frac{1}{x^2+1}-\frac{1}{x^2+4}\right)$

3. Aha, that makes sense. I'm used to only needing A and B. Thank You.
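The decomposition worked out in the answer can be double-checked numerically by evaluating both sides at a few sample points (a quick sketch of my own, not part of the thread):

```python
# Numerical check of 2/((x^2+1)(x^2+4)) = (2/3)(1/(x^2+1) - 1/(x^2+4)).
def lhs(x):
    return 2 / ((x**2 + 1) * (x**2 + 4))

def rhs(x):
    return (2 / 3) * (1 / (x**2 + 1) - 1 / (x**2 + 4))

for x in (-2.5, 0.0, 1.0, 7.3):
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("decomposition verified")
```

Agreement at more sample points than the degree of the numerator polynomials is enough to confirm the identity, since two rational functions with the same denominator that agree there must be equal.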
Definable real number A real number a is first-order definable in the language of set theory, without parameters, if there is a formula φ in the language of set theory, with one free variable, such that a is the unique real number such that φ(a) holds in the standard model of set theory (see Kunen 1980:153). For the purposes of this article, such reals will be called simply definable numbers. This should not be understood to be standard terminology. Note that this definition cannot be expressed in the language of set theory itself. ## General facts Assuming they form a set, the definable numbers form a field containing all the familiar real numbers such as 0, 1, π, e, et cetera. In particular, this field contains all the numbers named in the mathematical constants article, and all algebraic numbers (and therefore all rational numbers). However, most real numbers are not definable: the set of all definable numbers is countably infinite (because the set of all logical formulas is) while the set of real numbers is uncountably infinite (see Cantor's diagonal argument). As a result, most real numbers have no description (in the same sense of "most" as 'most real numbers are not rational'). The field of definable numbers is not complete; there exist convergent sequences of definable numbers whose limit is not definable (since every real number is the limit of a sequence of rational numbers). However, if the sequence itself is definable in the sense that we can specify a single formula for all its terms, then its limit will necessarily be a definable number. While every computable number is definable, the converse is not true: the numeric representations of the Halting problem, Chaitin's constant, the truth set of first order arithmetic, and 0# are examples of numbers that are definable but not computable. Many other such numbers are known. One may also wish to talk about definable complex numbers: complex numbers which are uniquely defined by a logical formula. 
However, whether this is possible depends on how the field of complex numbers is derived in the first place: it may not be possible to distinguish a complex number from its conjugate (say, 3+i from 3-i), since it is impossible to find a property of one that is not also a property of the other, without falling back on the underlying set-theoretic definition. Assuming we can define at least one nonreal complex number, however, a complex number is definable if and only if both its real part and its imaginary part are definable. The definable complex numbers also form a field if they form a set. The related concept of "standard" numbers, which can only be defined within a finite time and space, is used to motivate axiomatic internal set theory, and provides a workable formulation for illimited and infinitesimal numbers. Definitions of the hyper-real line within non-standard analysis (the subject area dealing with such numbers) overwhelmingly include the usual, uncountable set of real numbers as a subset. ## Notion does not exhaust "unambiguously described" numbers Not every number that we would informally say has been unambiguously described is definable in the above sense. For example, if we can enumerate all such definable numbers by the Gödel numbers of their defining formulas, then we can use Cantor's diagonal argument to find a particular real that is not first-order definable in the same language. The argument can be made as follows: Suppose that in a mathematical language L, it is possible to enumerate all of the defined numbers in L. Let this enumeration be defined by the function G: W → R, where G(n) is the real number described by the nth description in the sequence. Using the diagonal argument, it is possible to define a real number x which is not equal to G(n) for any n. This means that there is a language L' that defines x, which is undefinable in L.
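As a toy illustration of the diagonal construction just described — a sketch only, with a small finite list of rationals standing in for the (hypothetical) enumeration G, and with digits chosen from {5, 6} to sidestep the 0.0999… = 0.1000… ambiguity:

```python
from fractions import Fraction

def nth_digit(q, n):
    """The n-th decimal digit (0-indexed) of a rational q in [0, 1)."""
    return int(q * 10 ** (n + 1)) % 10

def diagonal_digits(enum):
    """Digits of a number that differs from enum[n] in its n-th decimal place.

    Choosing 5 or 6 guarantees the result never ends in all 0s or all 9s."""
    return [5 if nth_digit(q, n) != 5 else 6 for n, q in enumerate(enum)]

# Stand-in for G(0), G(1), G(2): 0.333..., 0.142857..., 0.5
G = [Fraction(1, 3), Fraction(1, 7), Fraction(1, 2)]
x = diagonal_digits(G)
# x differs from G(n) in the n-th digit, so x != G(n) for every n
assert all(x[n] != nth_digit(G[n], n) for n in range(len(G)))
```

The real argument of course runs over an infinite enumeration; the point of the sketch is only the digit-flipping mechanism.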
## Other notions of definability The notion of definability treated in this article has been chosen primarily for definiteness, not on the grounds that it is more useful or interesting than other notions. Here we treat a few others: ### Definability in other languages or structures #### Language of arithmetic The language of arithmetic has symbols for 0, 1, the successor operation, addition, and multiplication, intended to be interpreted in the usual way over the natural numbers. Since no variables of this language range over the reals, we cannot simply copy the earlier definition of definability. Rather, we say that a real a is definable in the language of arithmetic (or arithmetical) if its Dedekind cut can be defined as a predicate in that language; that is, if there is a first-order formula φ in the language of arithmetic, with two free variables, such that $\forall m \, \forall n \, (\varphi(n,m) \iff \frac{n}{m} < a)$. #### Second-order language of arithmetic The second-order language of arithmetic is the same as the first-order language, except that variables and quantifiers are allowed to range over sets of naturals. A real that is second-order definable in the language of arithmetic is called analytical. ### Definability with ordinal parameters Sometimes it is of interest to consider definability with parameters; that is, to give a definition relative to another object that remains undefined. For example, a real a (or for that matter, any set a) is called ordinal definable if there is a first-order formula φ in the language of set theory, with two free variables, and an ordinal γ, such that a is the unique object such that φ(a,γ) holds (in V). The other sorts of definability thus far considered have only countably many defining formulas, and therefore allow only countably many definable reals. This is not true for ordinal definability, because an ordinal definable real is defined not only by the formula φ, but also by the ordinal γ.
In fact it is consistent with ZFC that all reals are ordinal-definable, and therefore that there are uncountably many ordinal-definable reals. However, it is also consistent with ZFC that there are only countably many ordinal-definable reals.

## References

• Kunen, Kenneth (1980), Set Theory: An Introduction to Independence Proofs, Amsterdam: North-Holland, ISBN 978-0-444-85401-8

Wikimedia Foundation. 2010.
# Sell

The Simple E-Learning Language (=: SELL) is an open standard for mathematical e-learning questions with the following objectives:

• Simple syntax: No unneeded overhead. Defining a task/question should be similar to describing the task/question on a blackboard.
• Expressive language: Programs are short and concise, so that the semantics can be grasped quickly. Irrelevant detail is avoided.
• Reuse of established standards: It inherits AsciiMath for mathematical expressions and Markdown for text formatting.

Project maintainer: Andreas Schwenk / TH Köln.

## Language Definition

SELL is a domain-specific language (DSL). Visit the official website for detailed information.

## Hello, World!

The following example demonstrates a question in a mumie article written in SELL.

\begin{sell}
    Matrizenoperationen

    a := { 1, 2, 3 }
    A, B in MM(3 x 3 | a )
    C := A - B

    input rows := resizable
    input cols := resizable

    Berechne $A - B = #C$
\end{sell}

rendered by MUMIE:

## More Examples

You'll find a set of working examples (in German) here. The corresponding SELL code can be found in the Gitlab repository in the directory /src/training/
# Thread: HOWTO convert LaTeX to OpenOffice .odt and MS Word .doc

1. ## HOWTO convert LaTeX to OpenOffice .odt and MS Word .doc

Hi, Ubuntu 11.10 here. I used the simplest way:

$ latex foo.tex
$ mk4ht foo.tex

It worked apparently fine. However, when I needed to split an equation using \begin{multline} ... \end{multline}, the equation is not split, despite the dvi version showing the correct splitting. Does anyone have a tip to split equations that works in mk4ht? Thanks

2. ## Re: HOWTO convert LaTeX to OpenOffice .odt and MS Word .doc

Hi, I found a workaround: \begin{split} ... \end{split}. The only problem is that you need to edit the equation in LibreOffice in order to delete the numbering (e.g. "(2)"), otherwise you will have a double numbering for the same equation.

3. ## Re: HOWTO convert LaTeX to OpenOffice .odt and MS Word .doc

Btw, the resulting equation format is ugly.

4. ## Re: HOWTO convert LaTeX to OpenOffice .odt and MS Word .doc

On Ubuntu 12.04 a simple install of tex4ht with apt-get is enough. dvipng will also be installed. The tex4ht package version is 20090611-1.1. So far no issues.
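For reference, the workaround from post 2 looks like the following minimal sketch (the equation content and labels are placeholders, not from the thread):

```latex
% Requires \usepackage{amsmath}.
% multline was not translated here by mk4ht; split nested inside
% equation survives the conversion, at the cost of a duplicated
% equation number that must be deleted by hand in LibreOffice.
\begin{equation}
  \begin{split}
    f(x) &= a_0 + a_1 x + a_2 x^2 \\
         &\quad + a_3 x^3 + a_4 x^4
  \end{split}
\end{equation}
```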
Kaisa Matomäki, Maksym Radziwill, Xuancheng Shao, Joni Teräväinen and I have just uploaded our preprint “Singmaster’s conjecture in the interior of Pascal’s Triangle” to the arXiv. This paper uses the theory of exponential sums over prime numbers to make progress on a well-known conjecture of Singmaster, which asserts that any natural number greater than ${1}$ appears at most a bounded number of times in Pascal’s triangle. That is, for any integer ${t \geq 2}$, there are at most ${O(1)}$ solutions to the equation

$\displaystyle \binom{n}{m} = t \ \ \ \ \ (1)$

with ${1 \leq m < n}$. Currently, the largest number of solutions known to be attainable is eight, with ${t}$ equal to

$\displaystyle 3003 = \binom{3003}{1} = \binom{78}{2} = \binom{15}{5} = \binom{14}{6} = \binom{14}{8} = \binom{15}{10}$

$\displaystyle = \binom{78}{76} = \binom{3003}{3002}.$

Because of the symmetry ${\binom{n}{m} = \binom{n}{n-m}}$ of Pascal’s triangle, it is natural to restrict attention to the left half ${1 \leq m \leq n/2}$ of the triangle.
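The eight representations of 3003 listed above are easy to confirm with a short script (a sketch using the standard-library `math.comb`):

```python
from math import comb

# The eight known representations of t = 3003 in Pascal's triangle.
reps = [(3003, 1), (78, 2), (15, 5), (14, 6),
        (14, 8), (15, 10), (78, 76), (3003, 3002)]
values = {comb(n, m) for n, m in reps}
assert values == {3003}          # all eight binomial coefficients agree
assert len(set(reps)) == 8       # and the eight (n, m) pairs are distinct
```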
Our main result establishes this conjecture in the “interior” region of the triangle:

Theorem 1 (Singmaster’s conjecture in the interior of the triangle) If ${0 < \varepsilon < 1}$ and ${t}$ is sufficiently large depending on ${\varepsilon}$, there are at most two solutions to (1) in the region

$\displaystyle \exp(\log^{2/3+\varepsilon} n) \leq m \leq n/2 \ \ \ \ \ (2)$

and thus at most four in the region

$\displaystyle \exp(\log^{2/3+\varepsilon} n) \leq m \leq n - \exp(\log^{2/3+\varepsilon} n).$

In addition, there is at most one solution in the region

$\displaystyle \exp(\log^{2/3+\varepsilon} n) \leq m \leq n / \exp(\log^{1-\varepsilon} n).$

To verify Singmaster’s conjecture in full, in view of this result it suffices to verify the conjecture in the boundary region ${1 \leq m < \exp(\log^{2/3+\varepsilon} n)}$ (or equivalently ${n - \exp(\log^{2/3+\varepsilon} n) < m \leq n}$). The upper bound of two for the number of solutions in the region (2) is best possible, due to the infinite family of solutions to the equation

$\displaystyle \binom{n+1}{m+1} = \binom{n}{m+2} \ \ \ \ \ (3)$

coming from ${n = F_{2j+2} F_{2j+3} - 1}$, ${m = F_{2j} F_{2j+3} - 1}$, where ${F_j}$ is the ${j^{\mathrm{th}}}$ Fibonacci number.

The appearance of the quantity ${\exp(\log^{2/3+\varepsilon} n)}$ in Theorem 1 may be familiar to readers acquainted with Vinogradov’s bounds on exponential sums, which are ultimately the main new ingredient of our argument. In principle, this threshold could be lowered if we had stronger bounds on exponential sums.

To try to control solutions to (1), we use a combination of an “Archimedean” approach and a “non-Archimedean” approach.
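The infinite family (3) can be checked numerically; here is a sketch (the helper `fib` is illustrative, with the convention F_1 = F_2 = 1):

```python
from math import comb

def fib(k):
    """Fibonacci numbers with F_1 = F_2 = 1."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

for j in range(1, 5):
    n = fib(2 * j + 2) * fib(2 * j + 3) - 1
    m = fib(2 * j) * fib(2 * j + 3) - 1
    # the collision binom(n+1, m+1) = binom(n, m+2) from the Fibonacci family
    assert comb(n + 1, m + 1) == comb(n, m + 2)
```

For j = 1 this gives (n, m) = (14, 4), recovering the collision binom(15, 5) = binom(14, 6) = 3003 from the example above.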
In the “Archimedean” approach (based on earlier work of Kane on this problem) we view ${n, m}$ as real numbers rather than integers, and express (1) in terms of the Gamma function as

$\displaystyle \frac{\Gamma(n+1)}{\Gamma(m+1) \Gamma(n-m+1)} = t.$

This equation can be used to solve for ${n}$ in terms of ${m, t}$ as

$\displaystyle n = f_t(m)$

for a certain real analytic function ${f_t}$ whose asymptotics are easy to compute (for example one has the asymptotic ${f_t(m) \asymp m t^{1/m}}$). One can then view the problem as one of trying to control the number of lattice points on the graph ${\{(m, f_t(m)): m \in {\bf R}\}}$. Here we can take advantage of the fact that in the regime ${m \leq f_t(m)/2}$ (corresponding to working in the left half ${m \leq n/2}$ of Pascal’s triangle), the function ${f_t}$ can be shown to be convex, but not too convex, in the sense that one has both upper and lower bounds on the second derivative of ${f_t}$ (in fact one can show that ${f''_t(m) \asymp f_t(m) (\log t / m^2)^2}$). This can be used to rule out the possibility of a cluster of three or more nearby lattice points on the graph ${\{(m, f_t(m)): m \in {\bf R}\}}$, basically because the area of the triangle connecting three such points would lie strictly between ${0}$ and ${1/2}$, contradicting Pick’s theorem. Developing these ideas, we were able to show

Proposition 2 Let ${\varepsilon > 0}$, and suppose that ${t}$ is sufficiently large depending on ${\varepsilon}$. If ${(m, n)}$ is a solution of (1) in the left half ${m \leq n/2}$ of Pascal’s triangle, then there is at most one other solution ${(m', n')}$ to this equation in the left half with

$\displaystyle |m-m'| + |n-n'| \ll \exp((\log\log t)^{1-\varepsilon}).$

Here, too, the example (3) shows that a cluster of two solutions is entirely possible; the convexity argument only takes effect once one has a cluster of three or more solutions.
To complete the proof of Theorem 1, one thus has to show that any two solutions ${(m, n), (m', n')}$ to (1) in the region of interest must be close enough for the above proposition to apply. Here we switch to the “non-Archimedean” approach, in which we use the ${p}$-adic valuations ${\nu_p(\binom{n}{m})}$ of the binomial coefficient, defined as the number of times a prime ${p}$ divides ${\binom{n}{m}}$. By the fundamental theorem of arithmetic, a collision

$\displaystyle \binom{n}{m} = \binom{n'}{m'}$

between binomial coefficients occurs if and only if the valuations agree:

$\displaystyle \nu_p(\binom{n}{m}) = \nu_p(\binom{n'}{m'}). \ \ \ \ \ (4)$

From the Legendre formula

$\displaystyle \nu_p(n!) = \sum_{j=1}^\infty \left\lfloor \frac{n}{p^j} \right\rfloor$

we can rewrite this latter identity (4) as

$\displaystyle \sum_{j=1}^\infty \left\{ \frac{m}{p^j} \right\} + \left\{ \frac{n-m}{p^j} \right\} - \left\{ \frac{n}{p^j} \right\} = \sum_{j=1}^\infty \left\{ \frac{m'}{p^j} \right\} + \left\{ \frac{n'-m'}{p^j} \right\} - \left\{ \frac{n'}{p^j} \right\}, \ \ \ \ \ (5)$

where ${\{x\} := x - \lfloor x \rfloor}$ denotes the fractional part of ${x}$. (These sums are not truly infinite, because the summands vanish once ${p^j}$ is larger than ${\max(n, n')}$.) A key idea in our approach is to view this condition (5) statistically, for instance by viewing ${p}$ as a prime drawn randomly from an interval such as ${[P, P + P \log^{-100} P]}$ for some suitably chosen scale parameter ${P}$, so that the two sides of (5) become random variables. It is then advantageous to compare correlations between these two random variables and some additional test random variables. For example, if ${n}$ and ${n'}$ are far apart, then one would expect the left-hand side of (5) to have a higher correlation with the fractional part ${\{\frac{n}{p}\}}$, because this term appears in the summation on the left-hand side but not on the right.
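The passage from Legendre's formula to the fractional-part identity (5) can be spot-checked numerically. A sketch (helper names are illustrative): each summand of (5) equals ((m mod p^j) + ((n−m) mod p^j) − (n mod p^j)) / p^j, which is 0 or 1 (a base-p carry indicator), so exact integer arithmetic suffices:

```python
from math import comb

def nu_p_factorial(n, p):
    """Legendre's formula: nu_p(n!) = sum_{j >= 1} floor(n / p^j)."""
    s, pj = 0, p
    while pj <= n:
        s += n // pj
        pj *= p
    return s

def carry_sum(n, m, p):
    """One side of (5): sum_j {m/p^j} + {(n-m)/p^j} - {n/p^j}, exactly."""
    s, pj = 0, p
    while pj <= n:
        s += ((m % pj) + ((n - m) % pj) - (n % pj)) // pj
        pj *= p
    return s

def nu_p(x, p):
    """Direct p-adic valuation of a positive integer, for cross-checking."""
    s = 0
    while x % p == 0:
        x //= p
        s += 1
    return s

n, m, p = 100, 37, 3
legendre = nu_p_factorial(n, p) - nu_p_factorial(m, p) - nu_p_factorial(n - m, p)
assert legendre == carry_sum(n, m, p) == nu_p(comb(n, m), p)
```

This is of course just Kummer's carry-counting interpretation of the valuation; the blog post's point is what one can do with (5) statistically.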
Similarly if ${m}$ and ${m'}$ are far apart (although there are some annoying cases that need to be dealt with separately in the event of an “unexpected commensurability”, for example when ${n'-m'}$ is a rational multiple of ${m}$ where the rational has bounded numerator and denominator). To carry out this strategy, it turns out (after some standard Fourier expansion) that one needs good control on exponential sums such as

$\displaystyle \sum_{P \leq p \leq P + P \log^{-100} P} e\left( \frac{N}{p} + \frac{M}{p^j} \right)$

for various choices of the parameters ${P, N, M, j}$, where ${e(\theta) := e^{2\pi i \theta}}$. Fortunately, Vinogradov’s methods (which more generally can handle sums such as ${\sum_{n \in I} e(f(n))}$ and ${\sum_{p \in I} e(f(p))}$ for various analytic functions ${f}$) can give useful bounds on such sums as long as ${N}$ and ${M}$ aren’t too big compared to ${P}$; more precisely, Vinogradov’s estimates are non-trivial in the regime ${N, M \ll \exp(\log^{3/2-\varepsilon} P)}$, and this ultimately leads to a distance bound

$\displaystyle m' - m \ll_\varepsilon \exp(\log^{2/3+\varepsilon}(n+n'))$

between any colliding pair ${(n, m), (n', m')}$ in the left half of Pascal’s triangle, as well as the variants

$\displaystyle n' - n \ll_\varepsilon \exp(\log^{2/3+\varepsilon}(n+n'))$

and

$\displaystyle m', m \geq \exp(\log^{2/3+\varepsilon}(n+n')).$

Comparing these bounds with Proposition 2 and using some basic estimates on the function ${f_t}$, we can conclude Theorem 1.
Modifying the arguments also gives similar results for the equation

$\displaystyle (n)_m = t \ \ \ \ \ (6)$

where ${(n)_m := n(n-1) \dots (n-m+1)}$ is the falling factorial:

Theorem 3 If ${0 < \varepsilon < 1}$ and ${t}$ is sufficiently large depending on ${\varepsilon}$, there are at most two solutions to (6) in the region

$\displaystyle \exp(\log^{2/3+\varepsilon} n) \leq m \leq n - \exp(\log^{2/3+\varepsilon} n). \ \ \ \ \ (7)$

Again, the upper bound of two is best possible, thanks to identities such as

$\displaystyle (a^2-a)_{a^2-2a} = (a^2-a-1)_{a^2-2a+1}.$
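The family of collisions for equation (6) can likewise be verified directly; a sketch with an illustrative helper `falling` implementing $(n)_m = n(n-1)\cdots(n-m+1)$:

```python
def falling(n, m):
    """The falling factorial (n)_m = n (n-1) ... (n-m+1)."""
    out = 1
    for k in range(m):
        out *= n - k
    return out

# the identity (a^2-a)_{a^2-2a} = (a^2-a-1)_{a^2-2a+1} for several a
for a in range(3, 8):
    assert falling(a * a - a, a * a - 2 * a) == \
           falling(a * a - a - 1, a * a - 2 * a + 1)
```

For a = 3 both sides equal (6)_3 = (5)_4 = 120.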
## International Workshop on Elliptic and Kinetic Partial Differential Equations

Mini Courses Week 1:

Prof. Lihe Wang (Iowa)

Regularity theory of elliptic equations - Abstract: The first lecture will start out with the maximum principle and gradient estimates. Most importantly, we will introduce the idea of a geometrical interpretation of the estimates and their scaling. We will talk about the weak formulation of elliptic equations. The second lecture is about the Schauder estimate. Both the maximum principle approach and the regularity of weak solutions will be presented. We will present the De Giorgi and Nash theory in the last lecture. We will discuss the geometrical interpretation along each of the computations.

Prof. Luigi Ambrosio (Scuola Normale Superiore - Pisa)

Flow of nonsmooth vector fields and applications. Part I - Abstract: At the beginning of the 1990s, DiPerna and Lions made a deep study of the connection between transport equations and ordinary differential equations. In particular, by proving existence and uniqueness of bounded solutions for transport equations with Sobolev vector fields, they obtained (roughly speaking) existence and uniqueness of solutions for ODEs for a.e. initial condition. Ten years later, Ambrosio extended this result to BV vector fields, providing also a new axiomatization of the theory of flows, based more on probabilistic tools. In recent years, several new extensions have been obtained, which give rise to applications to PDEs including some systems of conservation laws, semi-geostrophic equations, the linear Schrodinger equation and the Vlasov-Poisson equation. In the first part of the lectures (by L. Ambrosio), we will introduce the general theory of flows, covering the duality between the ODE well-posedness and the PDE well-posedness and presenting basic classes of vector fields (Sobolev, BV, ...) where this theory applies. In the second part of the lectures (by A.
Figalli), we shall focus on the more recent extensions and their applications to PDEs.

Prof. Laure Saint-Raymond (École Normale Supérieure - Paris)

From particle systems to collisional kinetic equations - Abstract:

1. The low density limit: formal derivation. Consider a deterministic system of $N$ hard spheres of diameter $\varepsilon$. Assume that they are initially independent and identically distributed. Then, in the limit when $N \to \infty$ and $\varepsilon \to 0$ with $N \varepsilon^2 = 1$ (Boltzmann-Grad scaling), the one-particle density can be approximated by the solution to the kinetic Boltzmann equation. In particular, particles remain asymptotically independent. In the first lecture, we will present the formal derivation of this low density limit, and discuss two important features, namely the propagation of chaos and the appearance of irreversibility.

2. A short time convergence result. Lanford's theorem states that in the Boltzmann-Grad limit the one-particle density converges to the solution of the kinetic Boltzmann equation almost everywhere on a short time interval (corresponding actually to a fraction of the average first collision time). The proof relies on a careful study of the recollision mechanism (which is not described by the Boltzmann dynamics), and on a priori bounds obtained by a Cauchy-Kowalewski argument. In the second lecture, we will give a sketch of this proof, and show that the time restriction is due to the lack of global a priori bounds.

Prof. Arshak Petrosyan (Purdue)

The Thin Obstacle Problem - Abstract: We will discuss the techniques in the study of the thin obstacle problem developed in recent years:

- Almgren's, Weiss's, and Monneau's monotonicity formulas;
- Epiperimetric inequality;
- Partial hodograph-Legendre transform (and connection with subelliptic equations);
- Higher order boundary Harnack principle of De Silva - Savin.
#### Contributed Talks

Friday - July 10th

Name: Dennis Kriventsov (University of Texas at Austin - USA)
Title: Free boundary problem related to thermal insulation

Name: Robin Neumayer (University of Texas at Austin - USA)
Title: A strong form of the quantitative Wulff inequality

Name: Juliana F.S. Pimentel (ICMC - USP - Brazil)
Title: Asymptotic behavior of nondissipative scalar reaction-diffusion equations

Name: Javier Morales (University of Texas at Austin - USA)
Title: Self propelled particles, Optimal transportation and the logarithmic Sobolev inequality

Name: Roberto Velho (KAUST - Saudi Arabia)
Title: A Short Introduction to Mean Field Games

Name: David Evangelista da Silveira Junior (KAUST - Saudi Arabia)
Title: Generalised mean-field games with congestion

Name: Léonard Monsaingeon (University of Texas at Austin - IST-Lisbon)
Title: A new optimal transport distance between nonnegative measures in R^d

Name: Maja Taskovic (University of Texas at Austin - USA)
Title: Exponential tails for solutions to the homogeneous Boltzmann equation

Name: Edgard Pimentel (UFC & UFSCAR - Brazil)
Title: Elliptic Mean Field Games Systems

Name: Juan Spedaletti (Universidad de San Luis - CONICET Argentina)
Title: Convergence Results for the Steklov eigenvalue and optimal windows in Oscillating Domains

Name: Rohit Jain (University of Texas at Austin - USA)
Title: The Fully Nonlinear Stochastic Impulse Control Problem

Mini Courses Week 2:

Prof. Henri Berestycki (CNRS/EHESS - Paris)

Reaction-diffusion and propagation in non-homogeneous media - Abstract: The classical theory of reaction-diffusion deals with nonlinear parabolic equations that are homogeneous in space and in time. It analyses travelling waves, long time behavior and the speed of propagation. More general, heterogeneous reaction-diffusion equations arise naturally in models of ecology, biology and medicine that lead to challenging mathematical questions.
In this series of lectures, after reviewing fundamental results of the classical theory, I will describe recent progress on models that involve spatially heterogeneous nonlinear parabolic and elliptic equations. I will also consider cases with non-local diffusions. The course will involve the following themes:

1. Review of the classical theory of homogeneous reaction-diffusion equations.
2. The effect of a line with fast diffusion on Fisher-KPP invasion.
3. The effect of domain shape.
4. Models with non-local operators.
5. Propagation and spreading speeds in non-homogeneous media.

Prof. Irene Gamba (Texas - Austin)

Analytical issues for non-local multi-linear interaction models: The Boltzmann and related equations - Abstract: The Boltzmann equation models the evolution of continuum random processes for multilinear dynamics. Nowadays, diverse models based on the ideas of Boltzmann and Maxwell, also referred to as collisional kinetic transport in particle interacting systems, are widely used in modeling phenomena ranging from rarefied classical gas dynamics, inelastic interacting systems in granular or polymer kinetic flows, collisional plasmas and electron transport in nanostructures in mean field theories, to self-organized or social interacting dynamics. These models share a common description based on a Markovian framework of birth and death processes in a multi-linear setting. Following the introductory lectures “From particle systems to collisional kinetic equations” by Laure Saint-Raymond, in the first two lectures we will focus on diverse analytical issues depending on the properties of the transition probability rates associated to the Markovian process. We will discuss both the space homogeneous as well as the inhomogeneous problems. The results strongly depend on the structure of the transition probability rates, which controls regularity, high-energy tail properties, as well as the long time behavior of the solutions to steady or self-similar states.
We will also discuss the Coulomb potential limit case that yields the Landau equation, widely used in collisional plasma theory. The third lecture will focus on an interesting result that distinguishes the characterization of space inhomogeneous stationary solutions in the whole space vs. on a torus, where the effects of dispersion and dissipation interplay, producing unexpected effects. We analyze the existence and long time behavior of solutions of the space inhomogeneous Boltzmann equation in the whole space for initial data in the vicinity of a global Maxwellian, and show, surprisingly, the existence of a scattering regime that leads to the construction of eternal solutions that do not converge to such a global Maxwellian, as would be expected from the H-theorem.

Prof. Alessio Figalli (Texas - Austin)

Flow of nonsmooth vector fields and applications. Part II - Abstract: At the beginning of the 1990s, DiPerna and Lions made a deep study of the connection between transport equations and ordinary differential equations. In particular, by proving existence and uniqueness of bounded solutions for transport equations with Sobolev vector fields, they obtained (roughly speaking) existence and uniqueness of solutions for ODEs for a.e. initial condition. Ten years later, Ambrosio extended this result to BV vector fields, providing also a new axiomatization of the theory of flows, based more on probabilistic tools. In recent years, several new extensions have been obtained, which give rise to applications to PDEs including some systems of conservation laws, semi-geostrophic equations, the linear Schrodinger equation and the Vlasov-Poisson equation. In the first part of the lectures (by L. Ambrosio), we will introduce the general theory of flows, covering the duality between the ODE well-posedness and the PDE well-posedness and presenting basic classes of vector fields (Sobolev, BV, ...) where this theory applies. In the second part of the lectures (by A.
Figalli), we shall focus on the more recent extensions and their applications to PDEs.
# Maths

Algebra Level 2

For all $$x$$ in the domain of the function $$\dfrac{x+1}{x^3-x}$$, this function is equivalent to:

F. $$\dfrac1{x^2} -\dfrac1{x^3}$$

G. $$\dfrac1{x^3} -\dfrac1x$$

H. $$\dfrac1{x^2-1}$$

J. $$\dfrac1{x^2-x}$$

K. $$\dfrac1{x^3}$$
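One way to test a candidate choice against the original expression is to compare the two in exact rational arithmetic at several sample points in the domain (i.e. avoiding x = −1, 0, 1). A sketch (helper names are illustrative) checking choice J:

```python
from fractions import Fraction

def original(x):
    # (x + 1) / (x^3 - x), defined away from x = -1, 0, 1
    return (x + 1) / (x ** 3 - x)

def choice_J(x):
    # candidate: 1 / (x^2 - x)
    return Fraction(1, 1) / (x ** 2 - x)

pts = [Fraction(k) for k in (-4, -3, -2, 2, 3, 4)] + [Fraction(1, 2)]
assert all(original(x) == choice_J(x) for x in pts)
```

This matches the algebra: the numerator x + 1 cancels against the factor (x + 1) of x³ − x = x(x − 1)(x + 1), leaving 1/(x² − x).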
NZ Level 3

Balancing with mixed operations II

Lesson

When we balance number problems, we want them to have the same value on each side. What can we do to one side, so that it has the same value as the other? A seesaw can help you imagine how you might do this!

#### Worked Examples

##### Question 1

Complete the number sentence:

1. $15-\editable{}=14-2$

##### Question 2

Complete the number sentence:

1. $24\div4=2\times\editable{}$

##### Question 3

Complete the number sentence:

1. $3+6=18\div\editable{}$

### Outcomes

#### NA3-1

Use a range of additive and simple multiplicative strategies with whole numbers, fractions, decimals, and percentages.
## Question

Find the points of horizontal tangency to the polar curve r = a sin θ, 0 ≤ θ ≤ π, a > 0.

(r, θ) = (0, 0) (smaller r value)
(r, θ) = (larger r value)

Find the points of vertical tangency to the polar curve.

(r, θ) = (smaller θ value)
(r, θ) = (larger θ value)

## Answers

#### Similar Solved Questions

5 answers

##### (Illegible scanned problem.)

5 answers

##### Solve the system by using the inverse of the coefficient matrix: 3x, 6x2 -3 3X1 42 23. A. (2,5) B. (5,2) C. (-5,-2) D. (-2,5)

5 answers

##### (Illegible scanned problem; appears to concern the electric field of a charged spherical shell with inner and outer radii.)
1 answers

##### (1.4.6 in Tao) Let (X, d) be a metric space, and (x_n) a sequence. Prove that if L_1 and L_2 are both cluster points of (x_n) and L_1 ≠ L_2, then (x_n) is not Cauchy.

5 answers

##### A man with mass m1 = 90.0 kg stands at the end of a uniform beam with mass m2 = 100 kg and length L = 2.90 m. Another person with mass m3 = 53.0 kg stands on the far right end of the beam and holds a medicine ball with mass m4 = 14.0 kg (assume that the medicine ball is at the far right end of the beam as well, as shown in the figure). Let the origin of our coordinate system be the initial position of the middle of the beam as shown in the drawing. Assume there is no friction between the beam and floor.

5 answers

##### A chemist titrates 160.0 mL of a 0.5155 M hydrocyanic acid (HCN) solution with a 1.353 M KOH solution at 25 °C. Calculate the pH at equivalence. The pKa of hydrocyanic acid is 9.21. Round your answer to 2 decimal places.

5 answers

##### The standard deviation of heights of 6 active volcanoes in North America outside of Alaska is 2194.8.
The standard deviation of Volcanoes heights in Alaska is 2385.9 feet: Is there sufficient evidence that the standard deviation in the heights of volcanoes outside Alaska is less than the standard deviation in height of Alaskan volcanoes? Use a=0.05 The standard deviation ofheights of 6 active volcanoes in North America outside of Alaska is 2194.8. The standard deviation of Volcanoes heights in Alaska is 2385.9 feet: Is there sufficient evidence that the standard deviation in the heights of volcanoes outside Alaska is less than the standard dev... 5 answers ##### Which of these stages is the first one out of sequence?a. cleavageb. blastulac. morulad. gastrulae. neurula Which of these stages is the first one out of sequence? a. cleavage b. blastula c. morula d. gastrula e. neurula... 1 answers ##### A 40.0 -kg box initially at rest is pushed 5.00 m along a rough, horizontal floor with a constant applied horizontal force of $130 \mathrm{N} .$ If the coefficient of friction between box and floor is $0.300,$ find $(a)$ the work done by the applied force, (b) the increase in internal energy in the box-floor system due to friction, (c) the work done by the normal force, (d) the work done by the gravitational force, (e) the change in kinetic energy of the box, and (f) the final speed of the box. A 40.0 -kg box initially at rest is pushed 5.00 m along a rough, horizontal floor with a constant applied horizontal force of $130 \mathrm{N} .$ If the coefficient of friction between box and floor is $0.300,$ find $(a)$ the work done by the applied force, (b) the increase in internal energy in the ... 
5 answers 4 answers ##### Stote tha nyporCycyof the following scdilcond stotethe hvdothcse dlauroom leHS 117,Ha: p > .171.thev were stoted4tewusconducted deterinin# if there wus suiticient e dencethat tha prejortion sucls pEjulatiom exceeds the prcopnijn u suetessex Rjoulaticn 2 % mort *iIn simple lineai reerexicn #rzksis Wuscondurtedto deternir# "netne we cune condudethat th# inciei e Yfora[unr increaie in % i> diftere"t than44ta40 Secjolati:mua BSb sample ircrthat pcaulation wasconjucted whether there Stote tha nyporCycy of the following scdilcond stotethe hvdothcse dlauroom leHS 117,Ha: p > .171. thev were stoted 4tewusconducted deterinin# if there wus suiticient e dencethat tha prejortion sucls pEjulatiom exceeds the prcopnijn u suetessex Rjoulaticn 2 % mort *i In simple lineai reerexicn #rz... 5 answers ##### In the shown circuit,You are given that FV Eind (#volrs) the mlagninude of_the potential difference berween points and (IAVabl)Moving to another question wvill save this response: In the shown circuit,You are given that FV Eind (#volrs) the mlagninude of_the potential difference berween points and (IAVabl) Moving to another question wvill save this response:... 5 answers ##### We study the relation between the number of accidents on the BQE(Interstate 278) and traffic density (measured in the number ofcars per mile). We obtained the following regression resultsaccident = 7.52 + 0.378traffic (0.2108) (0.0922) where accident isthe variable measuring the number of accidents, traffic is thevariable measuring traffic density (standard deviations of OLSestimates in parenthesis). The density of traffic has a significanteffect on the number of accidents at the 2.2%. True Fals We study the relation between the number of accidents on the BQE (Interstate 278) and traffic density (measured in the number of cars per mile). 
We obtained the following regression results accident = 7.52 + 0.378traffic (0.2108) (0.0922) where accident is the variable measuring the number of accide... 5 answers ##### A student performed the reaction shown below:OHHzCHzCOHCH3benzoinacetic anhydridebenzoin acetateacetic acid Which spectrum corresponds to benzoin and which spectrum corresponds to benzoin acetate (write the structure in the blank section of the spectra below)? For each spectrum, provide the written data for the spectrum in the format used for lab reportsWritten data:CH3 A student performed the reaction shown below: OH HzC HzC OH CH3 benzoin acetic anhydride benzoin acetate acetic acid Which spectrum corresponds to benzoin and which spectrum corresponds to benzoin acetate (write the structure in the blank section of the spectra below)? For each spectrum, provide the... 5 answers ##### HOOHAn esterHzo HO OH An ester Hzo... 5 answers ##### For cach ol the following Frocesscs undicale whetct you would posilive (+}or ncgativc ( cxpect Ihe sign of AllandFloreeSienALJHAJenOnasce cuhc Inceersubliralion of dry IcePCIs ()PCh (g) Cl; (9)olwnys sponlancous, Icvcr spontizous Indieaic #hcthcr cach ofthe following reuctioas spontuneous at high temperlure , eponlinceus low tcmeralurcs: Iction #ith 4H +170 LJimol und AS #70 Jmol K ncvcr high Icmp JotIcM alway5 +[2 Jimol K reaclion with AH = -J3S kJimol and AS high emp low Icip always Mver 338 c For cach ol the following Frocesscs undicale whetct you would posilive (+}or ncgativc ( cxpect Ihe sign of Alland Floree SienALJH AJenOnas ce cuhc Inceer subliralion of dry Ice PCIs () PCh (g) Cl; (9) olwnys sponlancous, Icvcr spontizous Indieaic #hcthcr cach ofthe following reuctioas spontuneous at... 5 answers ##### Find the general solution using the method of Undetermined Coefficients 3y Sy=10 +31+7 Find the general solution using the method of Undetermined Coefficients 3y Sy=10 +31+7... 
5 answers ##### Arlice recorts [ne {Dllominovield (7) Mean Lemperalure ovepenod beme0cominomops and dalepicking (*ano mean percentagesunshine during tne same penco (*z) fcr E F uqglE vanel18.9 41 211 107 102 10217.3(a) When the model Y = 0 + 0*1 + 0x*2 + 0*1" + 0417 + 05*1*2 + & Is fit to the hops data tne estmate of 05 Is P5 0.657 with esbmated standard deviation S,5 0.92. Test Ha: 05 D versus R;: 05 calculate tne test statistic and determine the / value (Round vourtest stabst c to two decimal places arlice recorts [ne {Dllomino vield (7) Mean Lemperalure ove penod beme0 comino mops and dale picking (* ano mean percentage sunshine during tne same penco (*z) fcr E F uqglE vanel 18.9 41 211 107 102 102 17.3 (a) When the model Y = 0 + 0*1 + 0x*2 + 0*1" + 0417 + 05*1*2 + & Is fit to the hop... 5 answers ##### Find the first derivative: Please simplify your answer if possible. y=sin2(Sx) tan(x?) Find the first derivative: Please simplify your answer if possible. y=sin2(Sx) tan(x?)... -- 0.022662--
## January 21, 2017

### Planet Mozilla — Assigning blame to unsafe code

While I was at POPL the last few days, I was reminded of an idea regarding how to bring more structure to the unsafe code guidelines process that I’ve been kicking around lately, but which I have yet to write about publicly. The idea is fresh on my mind because while at POPL I realized that there is an interesting opportunity to leverage the “blame” calculation techniques from gradual typing research. But before I get to blame, let me back up and give some context.

### The guidelines should be executable

I’ve been thinking for some time that, whatever guidelines we choose, we need to adopt the principle that they should be automatically testable. By this I mean that we should be able to compile your program in a special mode (“sanitizer mode”) which adds in extra assertions and checks. These checks would dynamically monitor what your program does to see if it invokes undefined behavior: if they detect UB, then they will abort your program with an error message.

Plenty of sanitizers or sanitizer-like things exist for C, of course. My personal favorite is valgrind, but there are a number of other examples (the data-race detector for Go also falls in a similar category). However, as far as I know, none of the C sanitizers is able to detect the full range of undefined behavior. Partly this is because C UB includes untestable (and, in my opinion, overly aggressive) rules like “every loop should do I/O or terminate”. I think we should strive for a sound and complete sanitizer, meaning that we guarantee that if there is undefined behavior, we will find it, and that we have no false positives. We’ll see if that’s possible. =)

The really cool thing about having the rules be executable (and hopefully efficiently executable) is that, in the (paraphrased) words of John Regehr, it changes the problem of verifying safety from a formal one into a matter of test coverage, and the latter is much better understood. My ultimate goal is that, if you are the developer of an unsafe library, all you have to do is to run cargo test --unsafe (or some such thing), and all of the normal tests of your library will run but in a special sanitizer mode where any undefined behavior will be caught and flagged for you.

But I think there is one other important side-effect. I have been (and remain) very concerned about the problem of programmers not understanding (or even being aware of) the rules regarding correct unsafe code. This is why I originally wanted a system like the Tootsie Pop rules, where programmers have to learn as few things as possible. But having an easy and effective way of testing for violations changes the calculus here dramatically: I think we can likely get away with much more aggressive rules if we can test for violations. To play on John Regehr’s words, this changes the problem from being one of having to learn a bunch of rules to having to interpret error messages. But for this to work well, of course, the error messages have to be good. And that’s where this idea comes in.

### Proof of concept: miri

As it happens, there is an existing project that is already doing a limited form of the kind of checks I have in mind: miri, the MIR interpreter created by Scott Olson and now with significant contributions by Oliver Schneider. If you haven’t seen or tried miri, I encourage you to do so. It is very cool and surprisingly capable – in particular, miri can not only execute safe Rust, but also unsafe Rust (e.g., it is able to interpret the definition of Vec). The way it does this is to simulate the machine at a reasonably low level.
So, for example, when you allocate memory, it stores that as a kind of blob of bytes of a certain size. But it doesn’t only store bytes; rather, it tracks additional metadata about what has been stored into various spots. For example, it knows whether memory has been initialized or not, and it knows which bits are pointers (which are stored opaquely, not with an actual address). This allows it to interpret a lot of unsafe code, but it also allows it to detect various kinds of errors.

### An example

```rust
fn main() {
    let mut b = Box::new(22);
    innocent_looking_fn(&b);
    *b += 1;
}

fn innocent_looking_fn(b: &Box<usize>) {
    // This wicked little bit of code will take a borrowed
    // Box and free it.
    unsafe {
        let p: *const usize = &**b;
        let q: Box<usize> = std::mem::transmute(p);
    }
}
```

The problem here is that this “innocent looking function” claims to borrow the box b but it actually frees it. So now when main() comes along to execute *b += 1, the box b has been freed. This situation is often called a “dangling pointer” in C land. We might expect then that when you execute this program, something dramatic will happen, but that is not (necessarily) the case:

```
> rustc tests/dealloc.rs
> ./dealloc
```

As you can see, I got no error or any other indication that something went awry. This is because, internally, freeing the box just throws its address on a list for later re-use. Therefore when I later make use of that address, it’s entirely possible that the memory is still sitting there, waiting for me to use it, even if I’m not supposed to. This is part of what makes tracking down a “use after free” bug incredibly frustrating: oftentimes, nothing goes wrong! (Until it does.) It’s also why we need some kind of sanitizer mode that will do additional checks beyond what really happens at runtime.

### Detecting errors with miri

But what happens when I run this through miri?
```
> cargo run tests/dealloc.rs
    Finished dev [unoptimized + debuginfo] target(s) in 0.2 secs
     Running `target/debug/miri tests/dealloc.rs`
error: dangling pointer was dereferenced
 --> tests/dealloc.rs:8:5
  |
8 |     *b += 1;
  |     ^^^^^^^
  |
note: inside call to main
 --> tests/dealloc.rs:5:1
  |
5 |   fn main() {
  |  _^ starting here...
6 | |     let mut b = Box::new(22);
7 | |     innocent_looking_fn(&b);
8 | |     *b += 1;
9 | | }
  | |_^ ...ending here

error: aborting due to previous error
```

(First, before going further, let’s just take a minute to be impressed by the fact that miri bothered to give us a nice stack trace here. I had heard good things about miri, but before I started poking at it for this blog post, I expected something a lot less polished. I’m impressed.)

You can see that, unlike the real computer, miri detected that *b was freed when we tried to access it. It was able to do this because when miri is interpreting your code, it does so with respect to a more abstract model of how a computer works. In particular, when memory is freed in miri, miri remembers that the address was freed, and if there is a later attempt to access it, an error is thrown. (This is very similar to what tools like valgrind and electric fence do as well.)

So even just using miri out of the box, we see that we are starting to get a certain amount of sanitizer rules. Whatever the unsafe code guidelines turn out to be, one can be sure that they will declare it illegal to access freed memory. As this example demonstrates, running your code through miri could help you detect a violation.

### Blame

This example also illustrates another interesting point about a sanitizer tool. The point where the error is detected is not necessarily telling you which bit of code is at fault. In this case, the error occurs in the safe code, but it seems clear that the fault lies in the unsafe block in innocent_looking_fn(). That function was supposed to present a safe interface, but it failed to do so.
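The check miri applies here can be sketched with a toy heap model (the names AbstractHeap, alloc, free, and read below are invented for illustration, not miri’s actual API): remember which allocations are still live, and flag any access to one that has been freed, even if the underlying memory would still physically be there.

```rust
use std::collections::HashMap;

// A toy model of a sanitizer's heap: every allocation gets an id,
// and we remember whether it is still live. (A hypothetical sketch,
// not miri's real data structures.)
struct AbstractHeap {
    live: HashMap<usize, bool>, // allocation id -> still allocated?
    next_id: usize,
}

impl AbstractHeap {
    fn new() -> Self {
        AbstractHeap { live: HashMap::new(), next_id: 0 }
    }

    fn alloc(&mut self) -> usize {
        let id = self.next_id;
        self.next_id += 1;
        self.live.insert(id, true);
        id
    }

    fn free(&mut self, id: usize) {
        // Record the free rather than forgetting the id, so a later
        // access can be diagnosed as use-after-free.
        self.live.insert(id, false);
    }

    // A dereference is only legal while the allocation is live.
    fn read(&self, id: usize) -> Result<(), &'static str> {
        match self.live.get(&id) {
            Some(true) => Ok(()),
            Some(false) => Err("dangling pointer was dereferenced"),
            None => Err("wild pointer was dereferenced"),
        }
    }
}

fn main() {
    let mut heap = AbstractHeap::new();
    let b = heap.alloc(); // let mut b = Box::new(22);
    heap.free(b);         // innocent_looking_fn frees it
    // *b += 1; -- the model rejects this access:
    assert_eq!(heap.read(b), Err("dangling pointer was dereferenced"));
}
```

A real sanitizer tracks address ranges and per-byte state rather than opaque ids, but the shape of the check — a lookup at every dereference — is the same.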
Unfortunately, for us to figure that out, we have to trawl through the code, executing backwards and trying to figure out how this freed pointer got into the variable b. Speaking as someone who has spent years of his life doing exactly that, I can tell you it is not fun. Anything we can do to get a more precise notion of what code is at fault would be tremendously helpful. It turns out that there is a large body of academic work that I think could be quite helpful here. For some time, people have been exploring gradual typing systems. This is usually aimed at the software development process: people want to be able to start out with a dynamically typed bit of software, and then add types gradually. But it turns out when you do this, you have a similar problem: your statically typed code is guaranteed to be internally consistent, but the dynamically typed code might well feed it values of the wrong types. To address this, blame systems attempt to track where you crossed between the static and dynamic typing worlds so that, when an error occurs, the system can tell you which bit of code is at fault. Traditionally this blame tracking has been done using proxies and other dynamic mechanisms, particularly around closures. For example, Jesse Tov’s Alms language allocated stateful proxies to allow for owned types to flow into a language that didn’t understand ownership (this is sort of roughly analogous to dynamically wrapping a value in a RefCell). Unfortunately, introducing proxies doesn’t seem like it would really work so well for a “no runtime” language like Rust. We could probably get away with it in miri, but it would never scale to running arbitrary C code. Interestingly, at this year’s POPL, I saw a paper that seemed to present a solution to this problem. 
In Big types in little runtime, Michael Vitousek, Cameron Swords (ex-Rust intern!), and Jeremy Siek describe a system for doing gradual typing in Python that works even without modifying the Python runtime – this rules out proxies, because the runtime would have to know about them. Instead, the statically typed code keeps a log “on the side” which tracks transitions to and from the unsafe code and other important events. When a fault occurs, they can read this log and reconstruct which bit of code is at fault. This seems eminently applicable to this setting: we have control over the safe Rust code (which we are compiling in a special mode), but we don’t have to modify the unsafe code (which might be in Rust, but might also be in C). Exciting!

### Conclusion

This post has two purposes, in a way. First, I want to advocate for the idea that we should define the unsafe code guidelines in an executable way. Specifically, I think we should specify predicates that must hold at various points in the execution. In this post we saw a simple example: when you dereference a pointer, it must point to memory that has been allocated and not yet freed. (Note that this particular rule only applies to the moment at which the pointer is dereferenced; at other times, the pointer can have any value you want, though it may wind up being restricted by other rules.) It’s much more interesting to think about assertions that could be used to enforce Rust’s aliasing rules, but that’s a good topic for another post. Probably the best way for us to do this is to start out with a minimal “operational semantics” for a representative subset of MIR (basically a mathematical description of what MIR does) and then specify rules by adding side-clauses and conditions into that semantics. I have been talking to some people who might be interested in doing that, so I hope to see progress here. That said, it may be that we can instead do this exploratory work by editing miri.
The codebase seems pretty clean and capable, and a lot of the base work is done. In the long term, I expect we will want to instead target a platform like valgrind, which would allow us to apply these rules even around unsafe C code. I’m not sure if that’s really feasible, but it seems like the ideal.

The second purpose of the post is to note the connection with gradual typing and the opportunity to apply blame research to the problem. I am very excited about this, because I’ve always felt that guidelines based simply on undefined behavior were going to be difficult for people to use, since errors are often detected in code that is quite disconnected from the origin of the problem.

### Planet Mozilla — An Overview of Asia Tech Conferences in 2017

I’ve been attending and even talking at tech conferences for some time. One of the challenges is to keep track of when those conferences will take place. Also, there is no single list of all the conferences I’m interested in. There are some websites that collect them, but they often miss some community-organized events in Asia. Or there are some community-maintained lists of open source conferences (Thanks Barney!), but they don’t include for-profit conferences. Therefore I built a simple website that collects all the conferences I know of in Asia, focusing on open source software, the web, and startups:

### The Technology Stack

Since I don’t really need dynamically generated content, I use the Jekyll static site generator. For the look and feel, I use the Material Design Lite (MDL) CSS framework. (I did try other material design frameworks like Materialize or MUI, but MDL is the most mature and clean one I can find.) One of the challenges is to provide the list in different languages. I found a plugin-free way to make Jekyll support I18N (Internationalization). The essence is to create language-specific sub-directories like en/index.md and zh_tw/index.md. Then put all language-specific strings in the index.md files.
One pitfall is that by adding another level of directory, the relative paths (e.g. the paths to CSS and JS files) might not work, so you might need to use absolute paths instead. For Traditional and Simplified Chinese translation, I’m too lazy to maintain two copies of the data. So I use a JavaScript snippet to do the translation on-the-fly.

### How to Contribute

If you know any conference, meetup or event that should be on the list, please feel free to drop an email to asia.confs@gmail.com. Or you can create a pull request or file an issue on our GitHub repo. Enjoy the conferences and Happy Chinese New Year!

### Planet Mozilla — Adding CSP to bugzilla.mozilla.org

We're about to enable a Content Security Policy (CSP) on bugzilla.mozilla.org. CSP will mitigate several types of attack on our users and our site, including Cross-Site Request Forgery (XSRF) and Cross-Site Scripting (XSS). The first place we're deploying this is in the bug detail page in the new Modal view (which, you may recall, we're making the default view), with a goal for the site to have complete CSP coverage. As a side-effect of this work, CSP may break add-ons that modify the bug detail page. If we have broken something of yours, we can quickly fix it. We're already enabling the Socorro Lens add-on. You can see how that was addressed. WebExtensions can modify the DOM of a bug detail page through content.js. Add-ons and WebExtensions will not be able to load resources from third parties into the bug detail page unless we make an exception for you. Long term, if you have a feature from an add-on you'd like to make part of BMO, please seek me out on irc://irc.mozilla.org/bteam or open a new ticket in the bugzilla.mozilla.org product in Bugzilla and set the severity to 'enhancement'.

ETA: clarify what an add-on or WebExtension is allowed to do. Thanks to the WebExtensions team for answering questions on IRC tonight.

### Planet Mozilla — 45.7.0 available for realsies

Let's try that again.
TenFourFox 45.7.0 is now available for testing ... again (same download location, same release notes, new hashes), and as before, will go live late Monday evening if I haven't been flooded out of my house by the torrential rains we've been getting in currently-not-so-Sunny So Cal. You may wish to verify you got the correct version by manually checking the hash on the off-chance the mirrors are serving the old binaries.

## January 20, 2017

### Planet Mozilla — Migrating to WebExtensions: port your stored data

WebExtensions are the new standard for add-on development in Firefox, and will be the only supported type of extension in release versions of Firefox later this year. Starting in Firefox 57, which is scheduled to arrive in November 2017, extensions other than WebExtensions will not load, and developers should be preparing to migrate their legacy extensions to WebExtensions. If you have a legacy extension that writes data to the filesystem, and you’re planning to port it to WebExtensions, Embedded WebExtensions are available now in Firefox 51 to help you transition. Embedded WebExtensions can be used to transfer the stored data of your add-on to a format that can be used by WebExtensions. This is essential because it lets you convert your users without the need for them to take any action.

### What is an Embedded WebExtension?

An Embedded WebExtension is an extension that combines two types of extensions in one, by incorporating a WebExtension inside of a bootstrapped or SDK extension.

### Why use an Embedded WebExtension?

There are attributes (functions) of legacy add-ons that are used to store information related to the add-on that are not available in WebExtensions. Examples of these functions include user preferences, arbitrary file system access for storing assets, configuration information, stateful information, and others.
If your add-on makes use of functionality like these to store information, you can use an Embedded WebExtension to access your legacy add-on data and move it over to a WebExtension. The earlier you do this, the more likely all your users will transition over smoothly. It’s important to emphasize that Embedded WebExtensions are intended to be a transition tool, and will not be supported past Firefox 57. They should not be used for add-ons that are not expected to transition to WebExtensions.

### How do I define an Embedded WebExtension?

https://github.com/mdn/webextensions-examples/tree/master/embedded-webextension-sdk

### Planet Mozilla — Nightlies in TaskCluster - go team!

As catlee has already mentioned, yesterday we shipped the first nightly builds for Linux and Android off our next-gen Mozilla continuous integration (CI) system known as TaskCluster. I eventually want to talk more about why this is important and how we got here, but for now I’d like to highlight some of the people who made this possible. Thanks to Aki’s meticulous work planning and executing on a new chain of trust (CoT) model, the nightly builds we now ship on TaskCluster are arguably more secure than our betas and releases. Don’t worry though, we’re hard at work porting the chain of trust to our release pipeline. Jordan and Mihai tag-teamed the work to get the chain-of-trust-enabled workers doing important things like serving updates and putting binaries in the proper spots. Kim did the lion’s share of the work getting our task graphs sorted to tie together the disparate pieces. Callek wrangled all of the l10n bits. On the testing side, gbrown did some heroic work getting reliable test images set up for our Linux platforms. Finally, I’d be remiss if I didn’t also call out Dustin who kept us all on track with his migration tracker and who provided a great deal of general TaskCluster platform support.
Truly it was a team effort, and thanks to all of you for making this particular milestone happen. Onward to Mac, Windows, and release promotion!

### Planet Mozilla — Communicating the Dangers of Non-Secure HTTP

HTTPS, the secure variant of the HTTP protocol, has long been a staple of the modern Web. It creates secure connections by providing authentication and encryption between a browser and the associated web server. HTTPS helps keep you safe from eavesdropping and tampering when doing everything from online banking to communicating with your friends. This is important because over a regular HTTP connection, someone else on the network can read or modify the website before you see it, putting you at risk. To keep users safe online, we would like to see all developers use HTTPS for their websites. Using HTTPS is now easier than ever. Amazing progress in HTTPS adoption has been made, with a substantial portion of web traffic now secured by HTTPS:

### Changes to Firefox security user experience

Up until now, Firefox has used a green lock icon in the address bar to indicate when a website is using HTTPS and a neutral indicator (no lock icon) when a website is not using HTTPS. The green lock icon indicates that the site is using a secure connection.

Current secure (HTTPS) connection
Current non-secure (HTTP) connection

In order to clearly highlight risk to the user, starting this month in Firefox 51, web pages which collect passwords but don’t use HTTPS will display a grey lock icon with a red strike-through in the address bar. Clicking on the “i” icon will show the text, “Connection is Not Secure” and “Logins entered on this page could be compromised”. This has been the user experience in Firefox Dev Edition since January 2016. Since then, the percentage of login forms detected by Firefox that are fully secured with HTTPS has increased from nearly 40% to nearly 70%, and the number of HTTPS pages overall has also increased by 10%, as you can see in the graph above.
In upcoming releases, Firefox will show an in-context message when a user clicks into a username or password field on a page that doesn’t use HTTPS. That message will show the same grey lock icon with red strike-through, accompanied by a similar message, “This connection is not secure. Logins entered here could be compromised.”:

In-context warning for a password field on a page that doesn’t use HTTPS

### What to expect in the future

To continue to promote the use of HTTPS and properly convey the risks to users, Firefox will eventually display the struck-through lock icon for all pages that don’t use HTTPS, to make clear that they are not secure. As our plans evolve, we will continue to post updates, but our hope is that all developers are encouraged by these changes to take the necessary steps to protect users of the Web through HTTPS.

### Thanks!

Thank you to the engineering, user experience, user research, quality assurance, and product teams that helped make this happen – Sean Lee, Tim Guan-tin Chien, Paolo Amadini, Johann Hofmann, Jonathan Kingston, Dale Harvey, Ryan Feeley, Philipp Sackl, Tyler Downer, Adrian Florinescu, and Richard Barnes. And a very special thank you to Matthew Noorenberghe, without whom this would not have been possible.

### Planet Mozilla — What is participation design anyway?

As part of our insights phase for Diversity & Inclusion for Participation at Mozilla, we’ve identified ‘Participation Design’ as one of 5 important topics for focus group discussion. Here is how I describe Participation Design (and thanks to Paul for the question):

Participation design is the framework(s) we use to generate contribution opportunities that empower volunteers to:

• Recognize, embrace and personalize the opportunity of lending time and skills to a project at Mozilla – technical and non-technical.

• Understand the steps they need to take to be successful and engaged at a very basic level (task trackers, chat rooms, blogs, newsletters, wikis).
• Complete a contribution with success on project goals, and value to the volunteer.

• Grow in skills, knowledge and influence as community members, and leaders/mobilizers at Mozilla and in the broader open source community.

In our focus group for this topic, we’ll explore from both contributor and maintainer perspectives what it means to design for participation for diversity, equality and inclusion. If you want to know more about how focus groups work – here’s a great resource. If you, or someone you know from Mozilla past, present or future, has insights, experience and vision for inclusive participation design, please nominate them! (and select the topic ‘Participation Design’).

### Planet Mozilla — Webdev Beer and Tell: January 2017

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

### Planet WebKit — Introducing Riptide: WebKit’s Retreating Wavefront Concurrent Garbage Collector

As of r209827, 64-bit ARM and x86 WebKit ports use a new garbage collector called Riptide. Riptide reduces worst-case pause times by allowing the app to run concurrently to the collector. This can make a big difference for responsiveness since garbage collection can easily take 10 ms or more, even on fast hardware. Riptide improves WebKit’s performance on the JetStream/splay-latency test by 5x, which leads to a 5% improvement on JetStream. Riptide also improves our Octane performance. We hope that Riptide will help to reduce the severity of GC pauses for many different kinds of applications. This post begins with a brief background about concurrent GC (garbage collection). Then it describes the Riptide algorithm in detail, including the mature WebKit GC foundation on which it is built. The field of incremental and concurrent GC goes back a long time and WebKit is not the first system to use it, so this post has a section about how Riptide fits into the related work.
This post concludes with performance data.

## Introduction

Garbage collection is expensive. In the worst case, for the collector to free a single object, it needs to scan the entire heap to ensure that no objects have any references to the one it wants to free. Traditional collectors scan the entire heap periodically, and this is roughly how WebKit’s collector has worked since the beginning. The problem with this approach is that the GC pause can be long enough to cause rendering loops to miss frames, or in some cases it can even take so long as to manifest as a spin. This is a well-understood computer science problem. The originally proposed solution for janky GC pauses, by Guy Steele in 1975, was to have one CPU run the app and another CPU run the collector. This involves gnarly race conditions that Steele solved with a bunch of locks. Later algorithms like Baker’s were incremental: they assumed that there was one CPU, and sometimes the application would call into the collector but only for bounded increments of work. Since then, a huge variety of incremental and concurrent techniques have been explored. Incremental collectors avoid some synchronization overhead, but concurrent collectors scale better. Modern concurrent collectors like DLG (short for Doligez, Leroy, Gonthier, published in POPL ’93 and ’94) have very cheap synchronization and almost completely avoid pausing the application. Taking garbage collection off-core rather than merely shortening the pauses is the direction we want to take in WebKit, since almost all of the devices WebKit runs on have more than one core. The goal of WebKit’s new Riptide concurrent GC is to achieve a big reduction in GC pauses by running most of the collector off the main thread. Because Riptide will be our always-on default GC, we also want it to be as efficient — in terms of speed and memory — as our previous collector.
## The Riptide Algorithm The Riptide collector combines: • Marking: The collector marks objects as it finds references to them. Objects not marked are deleted. Most of the collector’s time is spent visiting objects to find references to other objects. • Constraints: The collector allows the runtime to supply additional constraints on when objects should be marked, to support custom object lifetime rules. • Parallelism: Marking is parallelized on up to eight logical CPUs. (We limit to eight because we have not optimized it for more CPUs.) • Generations: The collector lets the mark state of objects stick if memory is plentiful, allowing the next collection to skip visiting those objects. Sticky mark bits are a common way of implementing generational collection without copying. Collection cycles that let mark bits stick are called eden collections in WebKit. • Concurrency: Most of the collector’s marking phase runs concurrently to the program. Because this is by far the longest part of collection, the remaining pauses tend to be 1 ms or less. Riptide’s concurrency features kick in for both eden and full collections. • Conservatism: The collector scans the stack and registers conservatively, that is, checking each word to see if it is in the bounds of some object and then marking it if it is. This means that all of the C++, assembly, and just-in-time (JIT) compiler-generated code in our system can store heap pointers in local variables without any hassles. • Efficiency: This is our always-on garbage collector. It has to be fast. This section describes how the collector works. The first part of the algorithm description focuses on the WebKit mark-sweep algorithm on which Riptide is based. Then we dive into concurrency and how Riptide manages to walk the heap while the heap is in flux. ### Efficient Mark-Sweep Riptide retains most of the basic architecture of WebKit’s mature garbage collection code. 
This section gives an overview of how our mark-sweep collector works: WebKit uses a simple segregated storage heap structure. The DOM, the Objective-C API, the type inference runtime, and the compilers all introduce custom marking constraints, which the GC executes to fixpoint. Marking is done in parallel to maximize throughput. Generational collection is important, so WebKit implements it using sticky mark bits. The collector uses conservative stack scanning to ease integration with the rest of WebKit. #### Simple Segregated Storage WebKit has long used the simple segregated storage heap structure for small and medium-sized objects (up to about 8KB): • Small and medium-sized objects are allocated from segregated free lists. Given a desired object size, we perform a table lookup to find the appropriate free list and then pop the first object from this list. The lookup table is usually constant-folded by the compiler. • Memory is divided into 16KB blocks. Each block contains cells. All cells in a block have the same cell size, called the block’s size class. In WebKit jargon, an object is a cell whose JavaScript type is “object”. For example, a string is a cell but not an object. The GC literature would typically use object to refer to what our code would call a cell. Since this post is not really concerned with JavaScript types, we’ll use the term object to mean any cell in our heap. • At any time, the active free list for a size class contains only objects from a single block. When we run out of objects in a free list, we find the next block in that size class and sweep it to give it a free list. Sweeping is incremental in the sense that we only sweep a block just before allocating in it. In WebKit, we optimize sweeping further with a hybrid bump-pointer/free-list allocator we call bump’n’pop (here it is in C++ and in the compilers). A per-block bit tells the sweeper if the block is completely empty. 
If it is, the sweeper will set up a bump-pointer arena over the whole block rather than constructing a free-list. Bump-pointer arenas can be set up in O(1) time while building a free-list is an O(n) operation. Bump’n’pop achieves a big speed-up on programs that allocate a lot because it avoids the sweep for totally-empty blocks. Bump’n’pop’s bump-allocator always bumps by the block’s cell size to make it look like the objects had been allocated from the free list. This preserves the block’s membership in its size class. Large objects (larger than about 8KB) are allocated using malloc.

#### Constraint-Based Marking

Garbage collection is ordinarily a graph search problem and the heap is ordinarily just a graph: the roots are the local variables, their values are directional edges that point to objects, and those objects have fields that each create edges to some other objects. WebKit’s garbage collector also allows the DOM, compiler, and type inference system to install constraint callbacks. These constraints are allowed to query which objects are marked and they are allowed to mark objects. The WebKit GC algorithm executes these constraints to fixpoint. GC termination happens when all marked objects have been visited and none of the constraints want to mark any more objects. In practice, the constraint-solving part of the fixpoint takes up a tiny fraction of the total time. Most of the time in GC is spent performing a depth-first search over marked objects that we call draining.

#### Parallel Draining

Draining takes up most of the collector’s time. One of our oldest collector optimizations is that draining is parallelized. The collector has a draining thread on each CPU. Each draining thread has its own worklist of objects to visit, and ordinarily it runs a graph search algorithm that only sees this worklist. Using a local worklist means avoiding worklist synchronization most of the time.
Each draining thread will check in with a global worklist under these conditions: • It runs out of work. When a thread runs out of work, it will try to steal 1/Nth of the global worklist where N is the number of idle draining threads. This means acquiring the global worklist’s lock. • Every 100 objects visited, the draining thread will consider donating about half of its worklist to the global worklist. It will only do this if the global worklist is empty, the global worklist lock can be acquired without blocking, and the local worklist has at least two entries. This algorithm appears to scale nicely to about eight cores, which is good enough for the kinds of systems that WebKit usually runs on. Draining in parallel means having to synchronize marking. Our marking algorithm uses a lock-free CAS (atomic compare-and-swap instruction) loop to set mark bits. #### Sticky Mark Bits Generational garbage collection is a classic throughput optimization first introduced by Lieberman and Hewitt and Ungar. It assumes that objects that are allocated recently are unlikely to survive. Therefore, focusing the collector on objects that were allocated since the last GC is likely to free up lots of memory — almost as much as if we collected the whole heap. Generational collectors track the generation of objects: either young or old. Generational collectors have (at least) two modes: eden collection that only collects young objects and full collection that collects all objects. During an eden collection, old objects are only visited if they are suspected to contain pointers to new objects. Generational collectors need to overcome two hurdles: how to track the generation of objects, and how to figure out which old objects have pointers to new objects. The collector needs to know the generation of objects in order to determine which objects can be safely ignored during marking. 
In a traditional generational collector, eden collections move objects and then use the object’s address to determine its generation. Our collector does not move objects. Instead, it uses the mark bit to also track generation. Quite simply, we don’t clear any mark bits at the start of an eden collection. The marking algorithm will already ignore objects that have their mark bits set. This is called sticky mark bit generational garbage collection. The collector will avoid visiting old objects during an eden collection. But it cannot avoid all of them: if an old object has pointers to new objects, then the collector needs to know to visit that old object. We use a write barrier — a small piece of instrumentation that executes after every write to an object — that tells the GC about writes to old objects. In order to cheaply know which objects are old, the object header also has a copy of the object’s state: either it is old or it is new. Objects are allocated new and labeled old when marked. When the write barrier detects a write to an old object, we tell the GC by setting the object’s state to old-but-remembered and putting it on the mark stack. We use separate mark stacks for objects marked by the write barrier, so when we visit the object, we know whether we are visiting it due to the barrier or because of normal marking (i.e. for the first time). Some accounting only needs to happen when visiting the object for the first time. The complete barrier is simply:

object->field = newValue;
if (object->cellState == Old)
    remember(object);

Generational garbage collection is an enormous improvement in performance on programs that allocate a lot, which is common in JavaScript. Many new JavaScript features, like iterators, arrow functions, spread, and for-of allocate lots of objects and these objects die almost immediately. Generational GC means that our collector does not need to visit all of the old objects just to delete the short-lived garbage.
#### Conservative Roots Garbage collection begins by looking at local variables and some global state to figure out the initial set of marked objects. Introspecting the values of local variables is tricky. WebKit uses C++ local variables for pointers to the garbage collector’s heap, but C-like languages provide no facility for precisely introspecting the values of specific variables of arbitrary stack frames. WebKit solves this problem by marking objects conservatively when scanning roots. We use the simple segregated storage heap structure in part because it makes it easy to ask whether an arbitrary bit pattern could possibly be a pointer to some object. We view this as an important optimization. Without conservative root scanning, C++ code would have to use some API to notify the collector about what objects it points to. Conservative root scanning means not having to do any of that work. #### Mark-Sweep Summary Riptide implements complex notions of reachability via arbitrary constraint callbacks and allows C++ code to manipulate objects directly. For performance, it parallelizes marking and uses generations to reduce the average amount of marking work. ### Handling Concurrency Riptide makes the draining phase of garbage collection concurrent. This works because of a combination of concurrency features: • Riptide is able to stop the world for certain tricky operations like stack scanning and DOM constraint solving. • Riptide uses a retreating wavefront write barrier to manage races between marking and object mutation. Using retreating wavefront allows us to avoid any impedance mismatch between generational and concurrent collector optimizations. • Retreating wavefront collectors can suffer from the risk of GC death spirals, so Riptide uses a space-time scheduler to put that in check. • Visiting an object while it is being reshaped is particularly hard, and WebKit reshapes objects as part of type inference. 
We use an obstruction-free double collect snapshot to ensure that the collector never marks garbage memory due to a visit-reshape race. • Lots of objects have tricky races that aren’t on the critical path, so we put a fast, adaptive, and fair lock in every JavaScript object as a handy way to manage them. It fits in two otherwise unused bits. While we wrote Riptide for WebKit, we suspect that the underlying intuitions could be useful for anyone wanting to write a concurrent, generational, parallel, conservative, and non-copying collector. This section describes Riptide in detail.

#### Stopping The World and Safepoints

Riptide does draining concurrently. It is a goal to eventually make other phases of the collector concurrent as well. But so long as some phases are not safe to run concurrently, we need to be able to bring the application to a stop before performing those phases. The place where the collector stops needs to be picked so as to avoid reentrancy issues: for example, stopping to run the GC in the middle of the GC’s allocator would create subtle problems. The concurrent GC avoids these problems by only stopping the application at those points where the application would trigger a GC. We call these safepoints. When the collector brings the application to a safepoint, we say that it is stopping the world. Riptide currently stops the world for most of the constraint fixpoint, and resumes the world for draining. After draining finishes, the world is again stopped. A typical collection cycle may have many stop-resume cycles.

#### Retreating Wavefront

Draining concurrently means that just as we finish visiting some object, the application may store to one of its fields. We could store a pointer to an unmarked object into an object that is already visited, in which case the collector might never find that unmarked object. If we don’t do something about this, the collector would be sure to prematurely delete objects due to races with the application.
Concurrent garbage collectors avoid this problem using write barriers. This section describes Riptide’s write barrier. Write barriers ensure that the state of the collector is still valid after any race, either by marking objects or by having objects revisited (GC Handbook, chapter 15). Marking objects helps the collector make forward progress; intuitively, it is like advancing the collector’s wavefront. Having objects revisited retreats the wavefront. The literature is full of concurrent GC algorithms, like the Metronome, C4, and DLG, that all use some kind of advancing wavefront write barrier. The simplest such barrier is Dijkstra’s, which marks objects anytime a reference to them is created. I used these kinds of barriers in my past work because they make it easy to make the collector very deterministic. Adding one of those barriers to WebKit would be likely to create some performance overhead since this means adding new code to every write to the heap. But the retreating wavefront barrier, originally invented by Guy Steele in 1975, works on exactly the same principle as our existing generational barrier. This allows Riptide to achieve zero barrier overhead by reusing WebKit’s existing barrier. It’s easiest to appreciate the similarity by looking at some barrier code. Our old generational barrier looked like this:

object->field = newValue;
if (object->cellState == Old)
    remember(object);

Steele’s retreating wavefront barrier looks like this:

object->field = newValue;
if (object->cellState == Black)
    revisit(object);

Retreating wavefront barriers operate on the same principle as generational barriers, so it’s possible to use the same barrier for both. The only difference is the terminology. The black state means that the collector has already visited the object. This barrier tells the collector to revisit the object if its cellState tells us that the collector had already visited it.
This state is part of the classic tri-color abstraction: white means that the GC hasn’t marked the object, grey means that the object is marked and on the mark stack, and black means that the object is marked and has been visited (so is not on the mark stack anymore). In Riptide, the tri-color states that are relevant to concurrency (white, grey, black) perfectly overlap with the sticky mark-bit states that are relevant to generations (new, remembered, old). The Riptide cell states are as follows: • DefinitelyWhite: the object is new and white. • PossiblyGrey: the object is grey, or remembered, or new and white. • PossiblyBlack: the object is black and old, or grey, or remembered, or new and white. A naive combination generational/concurrent barrier might look like this:

object->field = newValue;
if (object->cellState == PossiblyBlack)
    slowPath(object);

This turns out to need tweaking to work. The PossiblyBlack state is too ambiguous, so the slowPath needs additional logic to work out what the object’s state really was. Also, the order of execution matters: the CPU must run the object->cellState load after it runs the object->field store. That’s hard, since CPUs don’t like to obey store-before-load orderings. Finally, we need to guarantee that the barrier cannot retreat the wavefront too much.

##### Disambiguating Object State

The GC uses the combination of the object’s mark bit in the block header and the cellState byte in the object’s header to determine the object’s state. The GC clears mark bits at the start of full collection, and it sets the cellState during marking and barriers. It doesn’t reset objects’ cellStates back to DefinitelyWhite at the start of a full collection, because it’s possible to infer that the cellState should have been reset by looking at the mark bit. It’s important that the collector never scans the heap to clear marking state, and even mark bits are logically cleared using versioning.
If an object is PossiblyBlack or PossiblyGrey and its mark bit is logically clear, then this means that the object is really white. Riptide’s barrier slowPath is almost like our old generational slow path but it has a new check: it will not do anything if the mark bit of the target object is not set, since this means that we’re in the middle of a GC and the object is actually white. Additionally, the barrier will attempt to set the object back to DefinitelyWhite so that the slow path does not have to see the object again (at least not until it’s marked and visited).
# Gracious Living The Construction of the Reals: Metric Completion and Dedekind Cuts March 6, 2011, 21:31 Filed under: Algebra, Math, Set Theory, Topology It looks like I’m getting views now, which is surprising.  I’ve been pretty busy with schoolwork, but I really want to get this blog up to speed, particularly because I’d like to start discussing things as I’m learning about them.  I’d also like to make more non-mathematical posts, but maybe these are best left to a separate blog?  Thoughts? Our first example of a field was the field of rationals, $\mathbb{Q}$.  Recall that this was the field of fractions of the integers, which were in turn the free abelian group on one generator with their natural multiplication.  But now it appears that we’re stuck.  While we intuitively know what $\mathbb{R}$ should be — it’s a line, for crying out loud — there seems to be no algebraic way of “deriving” it from $\mathbb{Q}$.  A first guess might be to add in solutions of polynomials, like $\sqrt{2}$ as a solution of $f(x)=x^2-2$, but not only does this include some complex numbers, it also misses some real numbers like $e$ and $\pi$.  (We call such numbers — those that aren’t solutions of polynomials with rational coefficients — transcendental.  It’s actually quite difficult to prove that transcendental numbers even exist.) Instead, we turn to topology.  Below, I give two ways of canonically defining $\mathbb{R}$, one using the metric properties of $\mathbb{Q}$, one using its order properties.  I found this really interesting when I first saw it, but I can’t see it being interesting to everyone, so be warned if you’re not a fan of set theory or canonical constructions.  One of the topological techniques we’ll see will be useful later, but at that point it’ll be treated in its own right.
Ordinals November 6, 2010, 08:17 Filed under: Math, Set Theory When we talked about cardinality, we defined “standardized” finite cardinal numbers as the set $\mathbb{N}$, which we modeled as $0=\emptyset, 1=\{0\},2=\{0,1\},$ and so on.  We’ve since noted certain special properties of this model: • the set exists by the ZFC axioms • because of this, it is “pure” — everything is a set of sets, there are no ur-elements • the “successor function” $S(n)=n\cup\{n\}$ is well-defined, injective, and its image is everything but $0$ • because of this, if a statement is true for $0$ and its truth for $n$ implies its truth for $S(n)$, it is true for all elements of $\mathbb{N}$ — this is the “inductive property” • $n\subset m$ iff $n\in m$, and these synonymous relations are total orders on $\mathbb{N}$. In the discussion of the Axiom of Choice, we defined a “well-order” as a total order in which every subset has a least element, and proved that every set can be well-ordered if we assume the AC.  In fact, the subset/element ordering on $\mathbb{N}$ is already a well-order: given $M\subset\mathbb{N}$, the set $\bigcap M$ is a least element, and you should prove that this is always an element of $M$ (induction might help). Cardinalities tell us everything about sets up to bijection.  But when sets also have orders on them, this isn’t enough.  If we care about the orders on $X$ and $Y$, the only functions we should be caring about are those that are order-preserving: that is, that $f(a)\le_Y f(b)$ when $a\le_X b$.  Likewise, rather than all bijections, we care about the order isomorphisms: bijections that are order-preserving and have order-preserving inverses.  We’re “pairing off” the sets again, but in the same order.  None of the bijections with $\mathbb{N}$ in the post on countability did this, and it’s pretty clear why: any order isomorphism has to preserve the type of ordering, and $\mathbb{N}$ is well-ordered while $\mathbb{Z}$ and $\mathbb{Q}$ aren’t.
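Back to the claim that $\bigcap M$ is the least element: for concreteness (an added example), with the von Neumann coding $n=\{0,1,\dots,n-1\}$,

```latex
\begin{align*}
M &= \{3, 5, 9\} \subset \mathbb{N},\\
\bigcap M &= 3 \cap 5 \cap 9
           = \{0,1,2\} \cap \{0,1,2,3,4\} \cap \{0,1,\dots,8\}
           = \{0,1,2\}
           = 3,
\end{align*}
```

and indeed $3\in M$, with $3\subset m$ (that is, $3\le m$) for every $m\in M$, so $\bigcap M$ really is the least element of $M$.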
The order isomorphism classes (or order types) of general posets or tosets are many and difficult to talk about.  But the order types of well-ordered sets are easier to study.  Below the fold, let’s take them on. The Axiom of Choice November 5, 2010, 06:33 Filed under: Math, Set Theory This post is going to be a bit more technical than most, so feel free to skim it if you find it difficult.  You should know the statements of the theorems and the definition of well-ordering, though. When I stated the ZFC set axioms, I put choice last for a reason.  As I said there, it’s a bit like the parallel postulate in Euclidean geometry (if you don’t know this story, you should read about it — it’s fascinating).  Unlike the other axioms, it’s non-constructive, and it looks complicated enough that you should be able to get it from the other axioms, though it’s in fact independent of them (meaning they can’t prove it or disprove it).  For these reasons, many mathematicians in the first half of the twentieth century used choice sparingly, and made it very clear what pieces of their work required it.  But as we’ve seen, we needed it to show that every infinite set contains a sequence, and likewise, many of its consequences are so useful that it’s common nowadays to use it without reservation.  (And yes, Munroe, the Banach-Tarski paradox is a good thing.) Besides important consequences, there are about four important equivalent statements to the axiom of choice, and that’s what I’ll be talking about today.  Hopefully this will give you an idea of what a choice function is and how powerful it is.  We’ll be showing the following statements are equivalent: 1. The axiom of choice: every set $S$ has a choice function $c:\mathcal{P}(S)\rightarrow S$ such that $c(A)\in A$ for all $A\subset S$. 2. Every Cartesian product of non-empty sets is non-empty. 3. Zorn’s Lemma: Every partially ordered set, in which every totally ordered subset has an upper bound, has a maximal element. 4.
Hausdorff Maximum Principle: Every partially ordered set has a maximal totally ordered subset. 5. Well-Ordering Theorem: Every set can be well-ordered. Below the fold, definitions of these terms and proofs of equivalence.
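As a taste of what’s below the fold (a sketch added here, not part of the original post), the easiest implication, (1) $\Rightarrow$ (2), fits on one line: given a family $\{A_i\}_{i\in I}$ of non-empty sets, let $c$ be a choice function for $S=\bigcup_{i\in I}A_i$; then

```latex
\left( c(A_i) \right)_{i \in I} \in \prod_{i \in I} A_i,
```

so the product is non-empty.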
# An azimuthally-modified linear phase grating: Generation of varied radial carpet beams over different diffraction orders with controlled intensity sharing among the generated beams

## Abstract

Diffraction gratings are important optical components and are used in many areas of optics such as in spectroscopy. A diffraction grating is a periodic structure that splits and diffracts the impinging light beam into several beams travelling in different directions. The diffracted beams from a grating are commonly called diffraction orders. The directions of the diffraction orders depend on the grating period and the wavelength of the impinging light beam, so that a grating can be used as a dispersive element. In the diffraction of a plane wave from a conventional grating, the intensities of diffracted beams decrease with increasing order of diffraction. Here, we introduce a new type of grating where, in the diffraction of a plane wave, the intensity of a given higher order diffracted beam can be higher than the intensity of the lower orders. We construct these gratings by adding an azimuthal periodic dependency to the argument of the transmission function of a linear phase grating that has a sinusoidal profile and we call them azimuthally-modified linear phase gratings (AMLPGs). In this work, in addition to introducing AMLPGs, we present the generation of varied radial carpet beams over different diffraction orders of an AMLPG with controlled intensity sharing among the generated beams. A radial carpet beam is generated in the diffraction of a plane wave from a radial phase grating. We show that for a given value of the phase amplitude over the host linear phase grating, one of the diffraction orders is predominant and by increasing the value of the phase amplitude, the intensity sharing changes in favor of the higher orders. The theory of the work and experimental results are presented.
In comparison with the diffraction of a plane wave from radial phase gratings, the use of AMLPGs provides high contrast diffraction patterns and presents varied radial carpet beams over the different diffraction orders of the host linear phase grating. The resulting patterns over different diffraction orders are specified and their differences are determined. The introduced diffraction grating, with controlled intensity sharing among different diffraction orders, might find wide applications in many areas of optics such as optical switches. We show that AMLPG-based radial carpet beams can be engineered so that they acquire sheet-like spokes. This feature nominates them for potential applications in light sheet microscopy. In addition, a detailed analysis of the multiplication of the diffraction pattern of an AMLPG by the 2D structure of a spatial light modulator is presented. The presented theory is confirmed by respective experiments.

## Introduction

A conventional optical diffraction grating, say a linear grating, is a periodic structure in the Cartesian coordinates. In the diffraction of a plane wave from a linear grating, the beam is split and diffracted into several beams travelling in different directions. These diffracted beams are known as diffraction orders. For conventional gratings, the intensities of diffraction orders decrease with increasing diffraction order. In the near-field regime, the superposition of diffraction orders forms self-images and sub-images of the grating’s structure at certain propagation distances. This effect is known as the Talbot effect1. In addition, the intensity pattern over a plane that includes the propagation axis and the grating vector (say, the longitudinal plane) is called the Talbot carpet2. In the far-field regime, the diffraction orders appear as the impulses of the Fourier transform of the grating’s transmission function.
The linear gratings have numerous applications in optics and other areas of sciences and technologies such as in astronomical spectroscopy, moiré fringe technique including moiré deflectometry and moiré topography3, interferometry4,5,6, lithography7,8, strain and stress analysis9, displacement measurement10, optical alignment technique, color printing11, and 3D displays12,13. In the last two to three decades, amplitude and phase gratings with topological defects have found serious applications in generating vortex beams and/or characterizing such beams14,15,16,17,18,19,20,21. The use of phase hologram gratings instead of conventional ones has some advantages22,23,24,25. For instance, a computer-generated hologram phase grating provides the implementation of a desired phase map on the incident wave in a controlled way26. In this work, we integrate the features of linear and radial gratings to introduce a new grating that, in addition to the properties of linear and radial gratings, has additional properties. We construct this type of grating by adding an azimuthal periodic dependency to the argument of the transmission function of a linear phase grating that has a sinusoidal profile and we call them azimuthally-modified linear phase gratings (AMLPGs). In the diffraction of a plane wave from an AMLPG, we observe different diffraction orders in which the pattern of each order is very similar (but not exactly equal) to the diffraction pattern of a radial grating. Unlike the conventional gratings, here the intensity of a given higher order diffracted beam can be higher than the intensities of the lower orders. We show that the intensity sharing among different diffraction orders of an AMLPG can be adjusted by the value of the phase amplitude of the host linear grating. This kind of grating might find wide applications in many areas of optics such as optical switches.
The diffraction of a plane wave from AMLPGs is formulated and, using a spatial light modulator (SLM), the respective experimental works are presented. In comparison with the diffraction of a plane wave from radial phase gratings31, the use of AMLPGs provides a set of high contrast and varied radial carpet patterns over the different diffraction orders of the host linear phase grating. Since an SLM has a two dimensional periodic structure, it multiplies the diffraction pattern of the AMLPG. A detailed formulation is also presented and the theoretical predictions are verified by experiments. It is worth noting that, as the transmittance of an AMLPG is not the product of the transmittances of the radial and linear gratings, the diffraction pattern from an AMLPG is not the convolution of the diffraction patterns of radial and linear gratings. However, the transmittance of an AMLPG imposed on an SLM is the product of an ideal AMLPG and the two-dimensional periodic structure of the SLM, therefore the resulting diffraction pattern is the convolution of the diffraction pattern of the ideal AMLPG and the two dimensional impulses of the SLM.

## Results

### Formulation of plane wave diffraction from an AMLPG

Here, we formulate the diffraction of a plane wave from an AMLPG. When the phase structure of a linear phase grating with a sinusoidal profile hosts an additional radial phase grating, the result can be considered an “azimuthally-modified linear phase grating” (AMLPG). Some AMLPGs with almost sinusoidal phase profiles are shown in Fig. 1. Since a radial phase grating is periodic in the azimuthal direction and has a phase singularity at the origin, an AMLPG also has the same properties.
We introduce an AMLPG with the following transmission function: $$T(\rho ,\theta )=\exp [i\gamma \,\cos (\frac{2\pi }{d}\rho \,\cos \,\theta +\,\cos \,l\theta )],$$ (1) where γ, l, d, and $$(\rho ,\theta )$$ are the amplitude of the phase variation, the number of spokes of the radial part of the grating, the period of the linear part of the grating, and the polar coordinates in the input plane, respectively. As is apparent from Eq. 1, the transmission function of the AMLPG is not the product of the transmission functions of two distinct radial and linear gratings33. Therefore the diffraction pattern from an AMLPG is not the convolution of the diffraction patterns of the radial and linear gratings. Below we formulate the diffraction of a plane wave from an AMLPG. Using the Jacobi-Anger identity34 $${{\rm{e}}}^{i\gamma \cos \theta }=\mathop{\sum }\limits_{s=-\infty }^{s=+\infty }\,{(i)}^{s}{J}_{s}(\gamma ){{\rm{e}}}^{is\theta },$$ (2) where Js is the s–th Bessel function of the first kind, the grating’s transmission function, Eq. 1, can be rewritten in the following form: $$T(\rho ,\theta )=\mathop{\sum }\limits_{s=-\infty }^{s=+\infty }\,{(i)}^{s}{J}_{s}(\gamma )exp[is(\frac{2\pi }{d}\rho \,\cos \,\theta +\,\cos \,l\theta )].$$ (3) By illuminating this phase structure with a plane wave, the complex amplitude of the light field after the structure is given by $$U(r,\phi ,z)={h}_{0}\,{{\rm{e}}}^{i\alpha {r}^{2}}\,{\int }_{0}^{\infty }\,{\int }_{0}^{2\pi }\,T(\rho ,\theta ){{\rm{e}}}^{i\alpha {\rho }^{2}}{{\rm{e}}}^{-2i\alpha \rho r\cos (\theta -\phi )}\rho \,d\rho \,d\theta ,$$ (4) where $${h}_{0}=\frac{{e}^{ikz}}{i\lambda z}$$ and $$\alpha =\frac{\pi }{\lambda z}$$, in which λ is the wavelength of the light beam, $$k=\frac{2\pi }{\lambda }$$ is the wave-number, and $$(r,\phi )$$ are the polar coordinates on the output plane. By substituting Eq. 3 in Eq. 
4, we have $$\begin{array}{ccc}U(r,\phi ,z) & = & {h}_{0}\,{{\rm{e}}}^{i\alpha {r}^{2}}\,{\int }_{0}^{{\rm{\infty }}}\,{\int }_{0}^{2\pi }\,\mathop{\sum }\limits_{s=-{\rm{\infty }}}^{+{\rm{\infty }}}\,{(i)}^{s}{J}_{s}(\gamma ){e}^{is(\frac{2\pi }{d}\rho \cos \theta +\cos l\theta )}\\ & & \times \,{{\rm{e}}}^{i\alpha {\rho }^{2}}{{\rm{e}}}^{-2i\alpha r\rho \cos (\theta -\phi )}\rho \,d\rho d\theta \\ & = & {h}_{0}\,{{\rm{e}}}^{i\alpha {r}^{2}}\,{\int }_{0}^{{\rm{\infty }}}\,{\int }_{0}^{2\pi }\,\mathop{\sum }\limits_{s=-{\rm{\infty }}}^{+{\rm{\infty }}}\,{(i)}^{s}{J}_{s}(\gamma ){{\rm{e}}}^{i\alpha {\rho }^{2}}{{\rm{e}}}^{-2i\alpha \rho r\sin \phi \,\sin \theta }\\ & & \times \,{{\rm{e}}}^{-2i\alpha \rho \cos \theta (r\cos \phi -\frac{s\lambda z}{d})}{{\rm{e}}}^{is\cos l\theta }\rho \,d\rho d\theta .\end{array}$$ (5) Equation 5 can be solved with respect to the azimuthal variable. First we use the following variable transformations in the output plane: $$r\,\cos \,\phi -\frac{s\lambda z}{d}={r}_{s}\,\cos \,{\phi }_{s},\,r\,\sin \,\phi ={r}_{s}\,\sin \,{\phi }_{s},$$ (6) where $$\begin{array}{rcl}{r}_{s} & = & \sqrt{{r}^{2}+{(\frac{s\lambda z}{d})}^{2}-2\frac{s\lambda z}{d}r\,\cos \,\phi },\\ {\phi }_{s} & = & {\tan }^{-1}(\frac{r\,\sin \,\phi }{r\,\cos \,\phi -\frac{s\lambda z}{d}}),\\ s & = & 0,\pm \,1,\pm \,2,\ldots \end{array}$$ (7) and rewrite Eq.
5 in the following form: $$U(r,\phi ,z)={h}_{0}\,{{\rm{e}}}^{i\alpha {r}^{2}}\,{\int }_{0}^{{\rm{\infty }}}\,\mathop{\sum }\limits_{s=-{\rm{\infty }}}^{+{\rm{\infty }}}\,{(i)}^{s}{J}_{s}(\gamma ){{\rm{\Theta }}}_{s\cos l\theta }{{\rm{e}}}^{i\alpha {\rho }^{2}}\rho \,d\rho ,$$ (8) in which $${{\rm{\Theta }}}_{s\cos l\theta }={\int }_{0}^{2\pi }\,{{\rm{e}}}^{-2i\alpha \rho {r}_{s}\cos (\theta -{\phi }_{s})}\,{{\rm{e}}}^{is\cos l\theta }d\theta .$$ (9) Now, using Eq. 2, Eq. 9 reduces to the following form: $$\begin{array}{rcl}{{\rm{\Theta }}}_{s\cos l\theta } & = & {\int }_{0}^{2\pi }\,\mathop{\sum }\limits_{p=-\infty }^{+\infty }\,{(-i)}^{p}{J}_{p}(2\alpha \rho {r}_{s}){{\rm{e}}}^{ip(\theta -{\phi }_{s})}\,\mathop{\sum }\limits_{q=-\infty }^{+\infty }\,{(i)}^{q}{J}_{q}(s){{\rm{e}}}^{iql\theta }\,d\theta \\ & = & \mathop{\sum }\limits_{p=-\infty }^{+\infty }\,\mathop{\sum }\limits_{q=-\infty }^{+\infty }\,{\int }_{0}^{2\pi }\,{(-i)}^{p}{(i)}^{q}{J}_{p}(2\alpha \rho {r}_{s}){J}_{q}(s){{\rm{e}}}^{i(p+ql)\theta }{{\rm{e}}}^{-ip{\phi }_{s}}d\theta ,\end{array}$$ (10) and using the following identity: $${\int }_{0}^{2\pi }\,{e}^{i(p+ql)\theta }d\theta =2\pi {\delta }_{(p,-ql)},$$ (11) we have $${{\rm{\Theta }}}_{s\cos l\theta }=2\pi \,\mathop{\sum }\limits_{q=-\infty }^{+\infty }\,{(-i)}^{-ql}{(i)}^{q}{J}_{-ql}(2\alpha \rho {r}_{s}){J}_{q}(s){{\rm{e}}}^{iql{\phi }_{s}}.$$ (12) As $${J}_{-n}(x)={(-1)}^{n}{J}_{n}(x)$$, we have $${{\rm{\Theta }}}_{s\cos l\theta }=2\pi \,\mathop{\sum }\limits_{q=-\infty }^{+\infty }\,{(-i)}^{ql}{(i)}^{q}{J}_{ql}(2\alpha \rho {r}_{s}){J}_{q}(s){{\rm{e}}}^{iql{\phi }_{s}}.$$ (13) By substituting Eq. 13 in Eq.
8 we have $$\begin{array}{rcl}U(r,\phi ,z) & = & 2\pi {h}_{0}\,{{\rm{e}}}^{i\alpha {r}^{2}}\,\mathop{\sum }\limits_{s=-\infty }^{+\infty }\,\mathop{\sum }\limits_{q=-\infty }^{+\infty }\,{(i)}^{s}{(i)}^{q}{(-i)}^{ql}{J}_{s}(\gamma ){J}_{q}(s){{\rm{e}}}^{iql{\phi }_{s}}\\ & & \times \,\,{\int }_{0}^{\infty }\,{J}_{ql}\mathrm{(2}\alpha \rho {r}_{s}){{\rm{e}}}^{i\alpha {\rho }^{2}}\,\rho \,d\rho .\end{array}$$ (14) Now using the following integral identity35: $${\int }_{0}^{\infty }\,{J}_{v}(b\rho ){{\rm{e}}}^{i\alpha {\rho }^{2}}\rho \,d\rho =\frac{b}{8}(\frac{\sqrt{\pi }}{{\alpha }^{\frac{3}{2}}}){e}^{-i(\frac{{b}^{2}}{8\alpha }-\frac{v\pi }{4})}[{J}_{\frac{v+1}{2}}(\frac{{b}^{2}}{8\alpha })+i\,{J}_{\frac{v-1}{2}}(\frac{{b}^{2}}{8\alpha })],$$ (15) the resulting light field can be written in the following form: $$\begin{array}{rcl}U(r,\phi ,z) & = & {{\rm{e}}}^{ikz}{{\rm{e}}}^{i\alpha {r}^{2}}\{\mathop{\sum }\limits_{s=-\infty }^{+\infty }\,{(i)}^{s}{J}_{s}(\gamma ){J}_{0}(s){e}^{-i\alpha {r}_{s}^{2}}\\ & & +\,\mathop{\sum }\limits_{s=-\infty }^{+\infty }\,\mathop{\sum }\limits_{q=1}^{+\infty }\,{(i)}^{s}{(i)}^{-q(\frac{l}{2}-1)-1}{J}_{s}(\gamma ){J}_{q}(s){r}_{s}(\frac{\pi }{\sqrt{\lambda z}}){e}^{\frac{-i\alpha {r}_{s}^{2}}{2}}\\ & & \times \,[{J}_{\frac{ql+1}{2}}(\frac{\alpha {r}_{s}^{2}}{2})+i\,{J}_{\frac{ql-1}{2}}(\frac{\alpha {r}_{s}^{2}}{2})]\,\cos (ql{\phi }_{s})\}.\end{array}$$ (16) Here s shows the number of the diffraction order of the AMLPG, and the corresponding light field is $$\begin{array}{rcl}{U}_{s}({r}_{s},{\phi }_{s},z) & = & {{\rm{e}}}^{ikz}{{\rm{e}}}^{i\alpha {r}^{2}}\{{(i)}^{s}{J}_{s}(\gamma ){J}_{0}(s){e}^{-i\alpha {r}_{s}^{2}}\\ & & +\,\mathop{\sum }\limits_{q=1}^{+\infty }\,{(i)}^{s}{(i)}^{-q(\frac{l}{2}-1)-1}{J}_{s}(\gamma ){J}_{q}(s){r}_{s}(\frac{\pi }{\sqrt{\lambda z}}){e}^{\frac{-i\alpha {r}_{s}^{2}}{2}}\\ & & \times \,[{J}_{\frac{ql+1}{2}}(\frac{\alpha {r}_{s}^{2}}{2})+i\,{J}_{\frac{ql-1}{2}}(\frac{\alpha {r}_{s}^{2}}{2})]\,\cos 
(ql{\phi }_{s})\},\\ s & = & 0,\pm \,1,\pm \,2,\ldots \end{array}$$ (17) This is the main result. It gives the complex amplitude of the beam diffracted from an AMLPG and shows that the diffraction patterns forming over the individual diffraction orders are all different. It is worth noting that, unlike the case of diffraction from the product of a given function and a linear grating, in which the spectrum of the function is replicated over the different diffraction orders of the grating, here such behavior does not occur. The reason is that the structure of an AMLPG is not separable into a linear grating structure and another definite function. Equation 17 can be considered as a summation over individual diffraction orders: $$U(r,\phi ,z)=\mathop{\sum }\limits_{s=-\infty }^{+\infty }\,{U}_{s}({r}_{s},{\phi }_{s},z).$$ (18) Similar to the diffraction of a plane wave from a radial phase grating31, here again each individual diffraction pattern has a radial form with considerable structural complexity. All of the patterns are shape-invariant under propagation; therefore we call each of them a "radial carpet beam". In a later subsection we show that, for a specific value of the phase amplitude of an AMLPG with a given value of l, one of the individual diffraction patterns acquires the same intensity distribution as the radial carpet beam produced directly in the diffraction of a plane wave from a radial phase grating with the same spokes number, l. #### Relative rotation of the resulting patterns over ±s diffraction orders We show that the diffraction patterns generated over a pair of diffraction orders ±s are similar in form but rotated with respect to each other. For a given positive value of s, substituting s with −s in Eq.
17 we have the following equation: $$\begin{array}{rcl}{U}_{-s}({r}_{-s},{\phi }_{-s},z) & = & {{\rm{e}}}^{ikz}{{\rm{e}}}^{i\alpha {r}^{2}}\{{(i)}^{-s}{J}_{-s}(\gamma ){J}_{0}(\,-\,s){e}^{-i\alpha {r}_{-s}^{2}}\\ & & +\,\mathop{\sum }\limits_{q=1}^{+\infty }\,{(i)}^{-s}{(i)}^{-q(\frac{l}{2}-1)-1}{J}_{-s}(\gamma ){J}_{q}(\,-\,s){r}_{-s}(\frac{\pi }{\sqrt{\lambda z}}){e}^{\frac{-i\alpha {r}_{-s}^{2}}{2}}\\ & & \times \,[{J}_{\frac{ql+1}{2}}(\frac{\alpha {r}_{-s}^{2}}{2})+i\,{J}_{\frac{ql-1}{2}}(\frac{\alpha {r}_{-s}^{2}}{2})]\,\cos (ql{\phi }_{-s})\},\end{array}$$ (19) and using Eq. 7 we have $${r}_{-s}=\sqrt{{r}^{2}+{(\frac{s\lambda z}{d})}^{2}+2\frac{s\lambda z}{d}r\,\cos \,\phi },\,{\phi }_{-s}={\tan }^{-1}(\frac{r\,\sin \,\phi }{r\,\cos \,\phi +\frac{s\lambda z}{d}}).$$ (20) Now, using Eq. 20 in Eq. 19, we have $$\begin{array}{rcl}{U}_{-s}({r}_{-s},{\phi }_{-s},z) & = & {{\rm{e}}}^{ikz}{(i)}^{s}\{{J}_{0}(s){J}_{s}(\gamma ){e}^{i\alpha ({r}^{2}-{r}_{-s}^{2})}\\ & & +\,\mathop{\sum }\limits_{q=1}^{+\infty }\,{(i)}^{-q(\frac{l}{2}-1)-1}{J}_{q}(s){J}_{s}(\gamma ){r}_{-s}(\frac{\pi }{\sqrt{\lambda z}}){e}^{i\alpha ({r}^{2}-\frac{{r}_{-s}^{2}}{2})}\\ & & \times \,[{J}_{\frac{ql+1}{2}}(\frac{\alpha {r}_{-s}^{2}}{2})+i\,{J}_{\frac{ql-1}{2}}(\frac{\alpha {r}_{-s}^{2}}{2})]\,\cos (ql({\phi }_{-s}-\frac{\pi }{l}))\},\end{array}$$ (21) where we also used $${J}_{n}(\,-\,x)={(-1)}^{n}{J}_{n}(x)$$, $$n\ge 0$$, and $$\cos (q\phi )={(-1)}^{q}\,\cos (q\phi -q\pi )$$, $$q\in Z$$. Now, comparing Eqs 17 and 21 yields the following result: $${U}_{s}({r}_{s},{\phi }_{s},z)={U}_{-s}({r}_{-s},{\phi }_{-s}-\frac{\pi }{l},z).$$ (22) This means that the resulting patterns over a pair of diffraction orders ±s are similar except for a relative rotation of $$\frac{\pi }{l}$$ between them.
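As a numerical sanity check of this derivation (not part of the original analysis), the two identities it rests on, the Jacobi-Anger expansion (Eq. 2) and the orthogonality relation (Eq. 11), can be verified with a few lines of NumPy; the values γ = 1.3, θ = 0.7, and l = 10 below are arbitrary test choices:

```python
import numpy as np

# Uniform samples over one period; for smooth periodic integrands the
# plain uniform-grid average below is accurate to machine precision.
tau = np.arange(4096) * (2.0 * np.pi / 4096)

def J(n, x):
    """Integer-order Bessel function J_n(x) via its integral representation."""
    return np.mean(np.cos(n * tau - x * np.sin(tau)))

# Jacobi-Anger identity (Eq. 2): e^{i*g*cos t} = sum_s i^s J_s(g) e^{i*s*t}
g, t = 1.3, 0.7
lhs = np.exp(1j * g * np.cos(t))
rhs = sum((1j) ** s * J(s, g) * np.exp(1j * s * t) for s in range(-30, 31))
assert abs(lhs - rhs) < 1e-12  # truncation is harmless: J_s(g) decays fast

# Orthogonality relation (Eq. 11): the azimuthal integral of
# e^{i(p + q*l)*theta} equals 2*pi only when p = -q*l, and 0 otherwise.
l = 10
def azimuthal_integral(p, q):
    return 2.0 * np.pi * np.mean(np.exp(1j * (p + q * l) * tau))

assert abs(azimuthal_integral(-20, 2) - 2.0 * np.pi) < 1e-9  # p = -q*l
assert abs(azimuthal_integral(3, 2)) < 1e-9                  # p != -q*l
```

The second check makes explicit why only the q-th harmonics of the azimuthal modulation survive the θ integration in Eq. 12.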
### Controlled intensity sharing among different diffraction orders: Effect of γ on the resulting patterns

Here we show that the intensity of a beam incident on an AMLPG can be divided among the different diffraction orders in desired proportions. For an AMLPG, unlike conventional gratings, the intensity share of the higher diffraction orders can exceed that of the lower orders when suitable values are chosen for the phase amplitude of the grating, γ. Figure 2 shows the calculated diffracted intensity patterns for an AMLPG with $$l=10$$ spokes and different values of γ at a distance $$z=555\,{\rm{cm}}$$. The values 2.4048, 3.8317, and 5.1356 are the first zeros of J0, J1, and J2, respectively, and 5.5201 is the second zero of J0. As is apparent, the visibility of the patterns and the intensity sharing among the different diffraction orders depend on the value of γ.

• For $$\gamma =\frac{\pi }{2}$$ and $$\gamma =2.4048$$ the diffraction orders $$s=\pm \,1$$ are the most visible and have the maximum intensities among the diffraction orders.

• For $$\gamma =\pi$$ the diffraction orders $$s=\pm \,2$$ are the dominant patterns.

• For $$\gamma =3.8317$$ and $$\gamma =\frac{3\pi }{2}$$ the diffraction orders $$s=\pm \,3$$ are the dominant patterns and have the maximum share of the intensity.

• For $$\gamma =5.1356$$ and $$\gamma =5.5201$$ the diffraction orders $$s=\pm \,4$$ have the maximum share of the intensity.

• For $$\gamma =2\pi$$ the diffraction orders $$s=\pm \,5$$ are the most visible.

This means that by increasing the value of γ the energy of the incident light is transferred to higher diffraction orders. As is apparent, at larger radii all individual diffraction patterns have the same number of spokes, equal to the number of spokes of the AMLPG, but at smaller radii the fine structure of the patterns strongly depends on the value of s.
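Patterns of this kind can also be reproduced without evaluating Eq. 17 term by term, by sampling the transmission function of Eq. 1 and propagating the field numerically with an FFT-based Fresnel kernel. The sketch below is illustrative only: the grid size, window, and propagation distance are chosen for numerical convenience and are not the experimental values:

```python
import numpy as np

# Direct numerical diffraction of a plane wave from the AMLPG of Eq. 1
# using Fresnel (paraxial) propagation via the FFT transfer function.
lam = 532e-9               # wavelength [m]
d = 0.11e-3                # period of the linear part [m]
l, gamma = 10, np.pi / 2   # spokes number and phase amplitude
z = 0.1                    # propagation distance [m] (illustrative)

N, W = 1024, 10e-3         # samples per side, physical window [m]
x = (np.arange(N) - N // 2) * (W / N)
X, Y = np.meshgrid(x, x)
rho, theta = np.hypot(X, Y), np.arctan2(Y, X)

# Transmission function of the AMLPG (Eq. 1)
T = np.exp(1j * gamma * np.cos(2.0 * np.pi * rho * np.cos(theta) / d
                               + np.cos(l * theta)))

# Fresnel transfer function in the spatial-frequency domain
fx = np.fft.fftfreq(N, W / N)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * lam * z * (FX ** 2 + FY ** 2))

U = np.fft.ifft2(np.fft.fft2(T) * H)
I = np.abs(U) ** 2
# The diffraction orders line up along x, separated by lam*z/d
# (about 0.48 mm for these values); |H| = 1, so power is conserved.
```

Since the grating is a pure phase object and |H| = 1, the summed intensity after propagation equals that of the incident plane wave, which is a convenient consistency check on the sampling.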
Also, it is seen that for larger values of s the spoke patterns become needle-like, so under propagation the path of each needle-like pattern sweeps out a light sheet. This feature might find application in light-sheet microscopy. Now we estimate the diffraction coefficients of an AMLPG by calculating the percentage of the incident power flowing into the different diffraction orders. This is done by calculating the ratios of the mean intensities over the diffracted patterns to the mean intensity of the incident beam. The mean value of the intensity over each of the individual patterns, separated by the dashed lines in the first row of Fig. 2, was calculated. Table 1 shows the resulting intensity sharing among the different orders for some selected values of γ. Using Eq. 17, we can directly calculate the diffraction coefficients of an AMLPG as the ratio of the power of each diffraction order Ps to the power of the incident beam Pi: $${P}_{s,i}=\frac{{P}_{s}}{{P}_{i}}=\frac{{\int }_{{A}_{s}}\,{I}_{s}({r}_{s},{\phi }_{s},z)\,dA}{{\int }_{{A}_{i}}\,{I}_{i}(x,y,z)\,dA}=\frac{{\int }_{{A}_{s}}\,{u}_{s}({r}_{s},{\phi }_{s},z){u}_{s}^{\ast }({r}_{s},{\phi }_{s},z)\,dA}{{I}_{i}(x,y,z){A}_{i}}=\frac{{\bar{I}}_{s}}{{\bar{I}}_{i}},$$ (23) where Ii and Is are the intensity values over the incident beam and the diffracted beam of order s, respectively. Their mean values over the corresponding areas are given by $${\bar{I}}_{i}={I}_{i}$$ and $${\bar{I}}_{s}=\frac{{\int }_{{A}_{s}}\,{u}_{s}({r}_{s},{\phi }_{s},z){u}_{s}^{\ast }({r}_{s},{\phi }_{s},z)\,dA}{{A}_{s}}$$, respectively. The same results as in Table 1 were derived using Eq. 23. The following results in Table 1 are worth noting:

• For $$\gamma =\frac{\pi }{2}$$ and $$\gamma =2.4048$$, the diffraction orders with $$s=\pm \,1$$ have the maximum intensity shares of 32% and 27%, respectively.
• For $$\gamma =\pi$$ the diffraction orders with $$s=\pm \,2$$ have the maximum intensity share of 23.5%.

• For $$\gamma =3.8317$$ and $$\gamma =\frac{3\pi }{2}$$, the diffraction orders with $$s=\pm \,3$$ have the maximum intensity shares of 17.6% and 16.4%, respectively.

• For $$\gamma =5.1356$$ and $$\gamma =5.5201$$, the diffraction orders with $$s=\pm \,4$$ have the maximum intensity shares of 15.7% and 15.6%, respectively.

• For $$\gamma =2\pi$$, the diffraction orders with $$s=\pm \,5$$ have the maximum intensity share of 13.8%.

• For $$\gamma =2.4048$$ and $$\gamma =5.5201$$ the intensity share of the DC term, with $$s=0$$, is zero. These values of γ are the first and second zeros of J0.

• For $$\gamma =3.8317$$ and $$\gamma =5.1356$$, the intensity shares of the $$s=\pm \,1$$ and $$s=\pm \,2$$ terms, respectively, are zero. The value $$\gamma =3.8317$$ is the first zero of J1 and $$\gamma =5.1356$$ is the first zero of J2.

### Comparison of the diffraction patterns of an AMLPG and a radial phase grating

Here we compare the diffraction pattern of a plane wave diffracted directly from a radial phase grating31 with the diffraction pattern of an AMLPG.
The diffracted light field in the diffraction of a plane wave from a radial phase grating having a sinusoidal transmission function can be written as31 $$\begin{array}{rcl}\psi (r,\phi ,z) & = & {{\rm{e}}}^{ikz}\{{J}_{0}(\gamma )+\mathop{\sum }\limits_{q=1}^{+\infty }\,{(i)}^{-(\frac{l}{2}-1)q-1}{J}_{q}(\gamma )r(\frac{\pi }{\sqrt{\lambda z}})\\ & & \times \,{e}^{\frac{-i\alpha {r}^{2}}{2}}[{J}_{\frac{ql+1}{2}}(\frac{\alpha {r}^{2}}{2})+i\,{J}_{\frac{ql-1}{2}}(\frac{\alpha {r}^{2}}{2})]\,\cos (ql\phi )\}.\end{array}$$ (24) Comparing Eqs 17 and 24, we see that if the following relations hold, $$\begin{array}{rcl}{J}_{0}(\gamma ) & = & {(i)}^{s}{J}_{0}(s){J}_{s}(\gamma ),\\ {J}_{q}(\gamma ) & = & {(i)}^{s}{J}_{q}(s){J}_{s}(\gamma ),\end{array}$$ (25) the two equations become the same. It is apparent that this is the case for $$\gamma =s$$. This means that the diffraction pattern of a radial phase grating, known as the "radial carpet beam"31, and the s–th diffraction order of an AMLPG are similar when $$\gamma =s$$ and the spokes numbers of the two gratings are the same. In Fig. 3 the calculated diffraction pattern (or, equally, the whole spectrum) of an AMLPG (left column) and the diffraction pattern of a plane wave diffracted directly from a radial phase grating (right column) are illustrated. As is seen, for $$s=\gamma$$ we have $${U}_{s}{U}_{s}^{\ast }=\psi {\psi }^{\ast }$$ and the s–th diffraction order has the maximum intensity among all the diffraction orders. From Figs 2 and 3 it is also seen that by increasing the value of γ, the higher diffraction orders receive a larger fraction of the energy of the incident beam. Since the intensity at the different diffraction orders can be adjusted through the phase amplitude of the grating, γ, this feature can be used in optical switching.
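The γ-controlled intensity sharing of Table 1 can be traced to the overall $$J_s(\gamma)$$ factor carried by order s in Eq. 17: the inner $$J_q(s)$$ factors only redistribute energy within an order, so the power fraction of order s reduces to $$J_s(\gamma)^2$$. This reduction is our reading of Eqs 17 and 23 rather than a formula stated explicitly in the text, but it reproduces the tabulated values:

```python
import numpy as np

tau = np.arange(4096) * (2.0 * np.pi / 4096)

def J(n, x):
    """Integer-order Bessel function J_n(x) via its integral representation."""
    return np.mean(np.cos(n * tau - x * np.sin(tau)))

def share(s, gamma):
    """Power fraction of diffraction order s (our reading of Eq. 17)."""
    return J(s, gamma) ** 2

print(share(1, np.pi / 2))   # ~0.32  -> the 32% of Table 1 at s = +/-1
print(share(2, np.pi))       # ~0.235 -> cf. 23.5% at s = +/-2
print(share(3, 3.8317))      # ~0.18  -> cf. 17.6% at s = +/-3
print(share(0, 2.4048))      # ~0     -> DC term vanishes at J0's first zero

# Energy conservation: by the identity sum_s J_s(gamma)^2 = 1,
# the shares over all orders sum to unity.
total = sum(share(s, np.pi / 2) for s in range(-20, 21))
```

The last line also explains why pushing energy into high orders requires a large γ: for small γ, $$J_s(\gamma)$$ is negligible for large |s|.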
### Replication of the spectrum of an AMLPG by an SLM structure

Here we investigate the replication of the spectrum of an AMLPG by the SLM structure under diffraction. First we consider the diffraction of a plane wave from an SLM when no secondary structure is embedded in it. Assume that the structure of the SLM is a rectilinear 2D grating with the following transmission: $$f(x,y)={f}_{x}(x){f}_{y}(y),$$ (26) where fx and fy denote two one-dimensional (1D) linear gratings placed together at right angles. The Fourier transform of $$f(x,y)$$, apart from a multiplicative factor, is given by36 $$\begin{array}{rcl}F(\nu ,\eta ) & = & {F}_{x}(\nu ) \circledast {F}_{y}(\eta )\\ & = & \delta (\eta )\,\mathop{\sum }\limits_{m=0}^{+\infty }\,{B}_{m}\delta (\nu -\frac{m}{2{{\rm{\Lambda }}}_{x}}) \circledast \delta (\nu )\,\mathop{\sum }\limits_{n=0}^{+\infty }\,{B}_{n}\delta (\eta -\frac{n}{2{{\rm{\Lambda }}}_{y}}),\end{array}$$ (27) where δ is the delta function, Bm and Bn are the Fourier coefficients, $${F}_{x}(\nu )$$ and $${F}_{y}(\eta )$$ are the 1D Fourier transforms of fx and fy, respectively, and $$\circledast$$ is the convolution symbol. Now suppose that the structure of an AMLPG with transmission function $$T(\rho ,\theta )$$ is imposed on the SLM and a plane wave is diffracted by it. In this case the light field immediately after the SLM is given by the product of $$f(x,y)$$ and $$T(\rho ,\theta )$$, and the resulting diffraction pattern, or equally the corresponding spatial spectrum, is obtained as $${U}_{total}(r,\phi )=F(\nu ,\eta ) \circledast U(r,\phi ).$$ (28) The diffracted light field distribution, or equally the spectrum of the AMLPG, is thus replicated by the impulses of the SLM. The spectrum replicated by the $$(m,n)$$ impulse of the SLM can be written as $${U}_{m,n}(r,\phi )={F}_{m,n}(\nu ,\eta ) \circledast U(r,\phi ).$$ (29) This means that the spectrum of an AMLPG is replicated by each of the diffraction impulses of the SLM.
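A toy computation illustrates Eqs 28 and 29: multiplying any displayed pattern by a separable pixel grid convolves its spectrum with the grid's comb of impulses, producing replicas of the kind seen in Fig. 4. Here a Gaussian stands in for $$T(\rho ,\theta )$$, and the grid period and sizes are arbitrary illustrative choices:

```python
import numpy as np

# A pattern displayed on a pixelated SLM is multiplied by the SLM's
# separable 2D grid f(x,y) = fx(x) fy(y); its spectrum is therefore
# convolved with the grid's comb of impulses and appears replicated.
N, period = 256, 8                       # samples per side; grid period
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
pattern = np.exp(-(X ** 2 + Y ** 2) / 200.0)   # stand-in for T(rho, theta)
grid = ((X % period < period // 2) &
        (Y % period < period // 2)).astype(float)

S = np.abs(np.fft.fftshift(np.fft.fft2(pattern * grid))) ** 2
c = N // 2                               # DC position after fftshift
step = N // period                       # spacing of the replicas
# Replicas of the pattern's spectrum sit at multiples of `step`
# samples from the center along both axes.
```

Between the replica positions the spectrum is essentially empty, which is why the individual diffraction orders of the SLM appear well separated in the experiments.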
Using Eqs 18 and 27 in Eq. 29, the distribution of the diffracted light field corresponding to the $$(m,n)$$ impulse of the SLM after a propagation distance of z is given by $${U}_{m,n}({r}_{m,n},{\phi }_{m,n},z)=\mathop{\sum }\limits_{s=-\infty }^{+\infty }\,{U}_{m,n,s}({r}_{m,n,s},{\phi }_{m,n,s},z),$$ (30) where $$\begin{array}{rcl}{U}_{m,n,s}({r}_{m,n,s},{\phi }_{m,n,s},z) & = & {{\rm{e}}}^{ikz}{{\rm{e}}}^{i\alpha {r}^{2}}\{{(i)}^{s}{J}_{s}(\gamma ){J}_{0}(s){e}^{-i\alpha {r}_{m,n,s}^{2}}\\ & & +\,\mathop{\sum }\limits_{q=1}^{+\infty }\,{(i)}^{s-1}{(i)}^{-q(\frac{l}{2}-1)}{J}_{s}(\gamma ){J}_{q}(s){r}_{m,n,s}(\frac{\pi }{\sqrt{\lambda z}}){e}^{\frac{-i\alpha {r}_{m,n,s}^{2}}{2}}\\ & & \times \,[{J}_{\frac{ql+1}{2}}(\frac{\alpha {r}_{m,n,s}^{2}}{2})+i\,{J}_{\frac{ql-1}{2}}(\frac{\alpha {r}_{m,n,s}^{2}}{2})]\,\cos (ql{\phi }_{m,n,s})\},\end{array}$$ (31) in which $${r}_{m,n,s}=\sqrt{{({r}_{s}\cos {\phi }_{s}+m\frac{\lambda z}{{{\rm{\Lambda }}}_{x}})}^{2}+{({r}_{s}\sin {\phi }_{s}+n\frac{\lambda z}{{{\rm{\Lambda }}}_{y}})}^{2}}$$ and $${\phi }_{m,n,s}={\tan }^{-1}(\frac{{r}_{s}\,\sin \,{\phi }_{s}+n\frac{\lambda z}{{{\rm{\Lambda }}}_{y}}}{{r}_{s}\,\cos \,{\phi }_{s}+m\frac{\lambda z}{{{\rm{\Lambda }}}_{x}}})$$. The patterns of various radial carpet beams over different diffraction orders of the AMLPG with controlled intensity sharing among the generated beams are replicated over the SLM diffraction orders. Each of the generated radial carpet beams is given by $${U}_{m,n,s}(r,\phi )={F}_{m,n}(\nu ,\eta ) \circledast {U}_{s}(r,\phi );s\in Z,$$ (32) where s shows the order of diffraction of the AMLPG imposed on the SLM. ## Discussion We show that by adding an azimuthally periodic term into the argument of a linear phase grating, say $$\cos (l\theta )$$ in Eq. 1, and by adjusting the value of γ, one can control the intensity sharing between different diffracted beams (see Figs 2 and 3). 
The theoretical predictions show that, in order to place the maximum share of the energy in a higher diffraction order, an SLM with a large phase-modulation depth is needed. The proposed method for controlling the intensity sharing among different diffraction orders can also be implemented with other additional phase terms in Eq. 1. For example, by replacing $$\cos (l\theta )$$ with the phase function of a zone plate, $$\cos (\frac{\pi {\rho }^{2}}{s})$$, where s is the zone plate constant, focused beams with different intensities are produced at given propagation distances over the different diffraction patterns. This feature can be used for optical switching. Another example is the phase function of a defected zone plate, $$\cos (l\theta +\frac{\pi {\rho }^{2}}{s})$$. The results of these studies will appear elsewhere.

## Methods

In this section we present the experimental work that verifies the above theoretical results.

### Experimental generation of the various radial carpet beams

We used a conventional SLM extracted from a video projector (LCD projector KM3 MOD. NO. X50) to provide the desired AMLPGs. The maximum amplitude of the phase modulation was limited to $$\gamma =\pi /2\,{\rm{rad}}$$, as shown in Fig. 1. In the experiments, the whole area of the SLM was illuminated with a plane wave, the second harmonic of a diode-pumped Nd:YAG laser with a wavelength of $$\lambda =532\,{\rm{nm}}$$. The active area of the SLM was 11 mm × 15 mm. An AMLPG was imposed on the SLM and the collimated wavefront of the laser beam propagated through it. At different distances from the SLM, the diffracted patterns were recorded with a camera (Nikon D7200). We recorded the diffraction patterns in two ways. In some of the experiments, the diffraction patterns were formed directly on the active area of the camera after removing the camera's imaging lens.
In the other experiments, the desired diffraction patterns were formed on a diffuser and then imaged by the camera's lens onto its active area; in this case the images were magnified. The active image area of the camera was 23.4 mm × 15.6 mm. Figure 4(a) shows an experimentally recorded diffraction pattern of a plane wave from the SLM when a uniform phase map is imposed on it. A diffuser was placed at $$z=77\,{\rm{cm}}$$ and the pattern was imaged by the camera. Since the SLM has a two-dimensional periodic structure, each of the rectangular patterns observed in Fig. 4(a) corresponds to one of the diffraction orders of the SLM's main structure. Figure 4(b) shows the same pattern after a propagation distance of $$z=350\,{\rm{cm}}$$. In Fig. 4(b), the pairs of numbers label the diffraction orders in the horizontal (x) and vertical (y) directions corresponding to the SLM's main structure. Figure 5 shows the diffraction pattern of a plane wave from the SLM when a 1D linear phase grating with a sinusoidal profile in the x direction and a period of 0.11 mm is imposed on it. The transmittance of such a linear phase grating is given by Eq. 1 with the cos(lθ) term omitted. The sets of numbers indicate the orders of diffraction from the SLM structure and the linear phase grating, $$(m,n,s)$$. The second column of Fig. 6 shows the central area of the diffraction pattern of a plane wave from an SLM with $$\gamma =\pi /2$$ when an AMLPG is imposed on it. In this experiment, a diffuser was placed at $$z=350\,{\rm{cm}}$$ and the central area of the diffraction pattern was imaged by the camera. The radial phase structure with $$l=10$$ is imposed on the structure of a linear phase grating with a period of $$d=0.11\,{\rm{mm}}$$ in the x direction (see Eq. 1). For better illustration of the results, four typical diffraction orders are enlarged in the first and third columns of Fig. 6.
Each of the illustrated diffraction patterns is labeled by the corresponding diffraction order $$(m,n,s)$$. As is apparent, the intensity profiles of two individual diffraction patterns are the same when their order numbers s are the same. For two given individual diffraction patterns with orders $$(m,n,s)$$ and $$(m,n,-\,s)$$, the patterns are rotated by an angle $$\frac{\pi }{l}$$ with respect to each other. Figure 7(a,b) shows the experimental diffraction patterns corresponding to the $$(0,0)$$ diffraction order of the SLM when two AMLPGs with $$l=10$$ and $$l=15$$ were imposed on it, respectively. Here again, a diffuser was placed at $$z=350\,{\rm{cm}}$$ and the central diffraction order of the SLM, with order number (0,0), was imaged by the camera. In Fig. 7(c,d) the corresponding theoretically produced patterns are illustrated. As is seen, when the value of l is odd, the intensity patterns $${I}_{(m,n,s)}$$ and $${I}_{(m,n,-s)}$$ are mirror images of each other with respect to the $$s=0$$ order. In Fig. 8 experimentally recorded diffraction patterns are shown for four AMLPGs having spokes numbers of 5, 10, 15, and 20. Here, each of the individual patterns was recorded directly on the active area of the camera, with its imaging lens removed, at a distance of $$z=555\,{\rm{cm}}$$.

## Conclusion

In this work, we introduced a new kind of phase grating with controlled intensity sharing among different diffraction orders. We constructed an AMLPG by adding an azimuthally periodic dependence to the argument of the transmission function of a linear phase grating having a sinusoidal profile. The generation of diverse radial carpet beams over the different diffraction orders of an AMLPG, with a controllable predominant diffraction order, was investigated.
A detailed theoretical analysis was reported, and its experimental verification was presented by generating diverse radial carpet beams with a controlled shift of intensity under illumination of an AMLPG with a spatially coherent light beam. We characterized the diverse radial carpet beams produced over the different diffraction orders of the host linear phase grating. It was shown that all the diffraction patterns are different; only pairs of positive and negative orders with the same order number are similar, up to a relative rotation between them. The introduced diffraction grating, with its controlled intensity sharing among different diffraction orders, might find wide application in many areas of optics, such as optical switching. Also, the radial carpet beams produced over different diffraction orders might find applications in light-sheet microscopy.

## References

1. Talbot, H. F. LXXVI. Facts relating to optical science. No. IV. Philos. Mag. 9(56), 401–407 (1836).
2. Case, W. B., Tomandl, M., Deachapunya, S. & Arndt, M. Realization of optical carpets in the Talbot and Talbot-Lau configurations. Opt. Express 17, 20966–20974 (2009).
3. Patorski, K. & Kujawinska, M. Handbook of the Moiré Fringe Technique (Elsevier Science, 1993).
4. Yokozeki, S. & Suzuki, T. Shearing interferometer using the grating as the beam splitter. Appl. Opt. 10(7), 1575–1580 (1971).
5. Rasouli, S., Sakha, F. & Yeganeh, M. Infinite-mode double-grating interferometer for investigating thermal-lens-acting fluid dynamics. Meas. Sci. Technol. 29, 085201 (2018).
6. Rasouli, S. & Ghorbani, M. Nonlinear refractive index measuring using a double-grating interferometer in pump–probe configuration and Fourier transform analysis. J. Opt. 14(3), 035203 (2012).
7. Alkaisi, M. M., Blaikie, R. J., McNab, S. J., Cheung, R. & Cumming, D. R. S. Sub-diffraction-limited patterning using evanescent near-field optical lithography. Appl. Phys. Lett. 75, 3560–3562 (1999).
8.
Naqavi, A., Peter Herzig, H. & Rossi, M. High-contrast self-imaging with ordered optical elements. J. Opt. Soc. Am. B 33, 2374–2382 (2016).
9. Walker, C. A. Handbook of Moiré Measurement (CRC Press, 2003).
10. Rasouli, S. & Shahmohammadi, M. A portable and long-range displacement and vibration sensor that chases moving moiré fringes using the three-point intensity detection method. OSA Continuum, to be published (2018).
11. Amidror, I. The Theory of the Moiré Phenomenon, vols I and II (Springer, 2007).
12. Saveljev, V., Kim, S.-K., Lee, H., Kim, H.-W. & Lee, B. Maximum and minimum amplitudes of the moiré patterns in one- and two-dimensional binary gratings in relation to the opening ratio. Opt. Express 24(3), 2905–2918 (2016).
13. Saveljev, V., Kim, S. K. & Kim, J. Moiré effect in displays: a tutorial. Opt. Eng. 57(3), 030803 (2018).
14. Janicijevic, L. & Topuzoski, S. Fresnel and Fraunhofer diffraction of a Gaussian laser beam by fork-shaped gratings. J. Opt. Soc. Am. A 25, 2659 (2008).
15. Topuzoski, S. & Janicijevic, L. Fraunhofer diffraction of a Laguerre–Gaussian laser beam by fork-shaped grating. J. Mod. Opt. 58(2), 138–145 (2011).
16. Kotlyar, V. V. et al. Generation of phase singularity through diffracting a plane or Gaussian beam by a spiral phase plate. J. Opt. Soc. Am. A 22(5), 849–861 (2005).
17. Li, Y., Kim, J. & Escuti, M. J. Orbital angular momentum generation and mode transformation with high efficiency using forked polarization gratings. Appl. Opt. 51, 8236 (2012).
18. Topuzoski, S. & Janicijevic, L. Diffraction characteristics of optical elements designed as phase layers with cosine-profiled periodicity in the azimuthal direction. J. Opt. Soc. Am. A 28, 2465–2472 (2011).
19. Davis, J. A., Carcole, E. & Cottrell, D. M. Intensity and phase measurements of nondiffracting beams generated with a magneto-optic spatial light modulator. Appl. Opt. 35, 593 (1996).
20. Pang, H. et al.
Non-iterative phase-only Fourier hologram generation with high image quality. Opt. Express 25, 14323 (2017).
21. Kuang, Z., Perrie, W., Edwardson, S. P., Fearon, E. & Dearden, G. Ultrafast laser parallel microdrilling using multiple annular beams generated by a spatial light modulator. J. Phys. D: Appl. Phys. 47, 115501 (2014).
22. Heckenberg, N. R., McDuff, R., Smith, C. P. & White, A. G. Generation of optical-phase singularities by computer-generated holograms. Opt. Lett. 17, 221–223 (1992).
23. Coullet, P., Gill, L. & Rocca, F. Optical vortices. Opt. Commun. 73, 403–408 (1989).
24. Terhalle, B. et al. Generation of extreme ultraviolet vortex beams using computer generated holograms. Opt. Lett. 36(21), 4143–4145 (2011).
25. Carpentier, A. V., Michinel, H., Salgueiro, J. R. & Olivieri, D. Making optical vortices with computer-generated holograms. Am. J. Phys. 76(10), 916–921 (2008).
26. Tricoles, G. Computer generated holograms: an historical review. Appl. Opt. 26, 4351–4360 (1987).
27. Rasouli, S., Khazaei, A. M. & Hebri, D. Talbot carpet at the transverse plane produced in the diffraction of plane wave from amplitude radial gratings. J. Opt. Soc. Am. A 35, 55 (2018).
28. Rasouli, S., Hebri, D. & Khazaei, A. M. Investigation of various behaviors of near- and far-field diffractions from multiplicatively separable structures in the x and y directions, and a detailed study of the near-field diffraction patterns of 2D multiplicatively separable periodic structures using the contrast variation method. J. Opt. 19, 095601 (2017).
29. Rasouli, S. & Hebri, D. Contrast enhanced quarter-Talbot images. J. Opt. Soc. Am. A 34, 2145–2156 (2017).
30. Hebri, D., Rasouli, S. & Yeganeh, M. Intensity-based measuring of the topological charge alteration by the diffraction of vortex beams from amplitude sinusoidal radial gratings. J. Opt. Soc. Am. B 35, 724–730 (2018).
31. Rasouli, S., Khazaei, A. M. & Hebri, D.
Radial carpet beams: A class of nondiffracting, accelerating, and self-healing beams. Phys. Rev. A 97, 033844 (2018).
32. Hebri, D. & Rasouli, S. Combined half-integer Bessel-like beams: A set of solutions of the wave equation. Phys. Rev. A 98, 003800 (2018).
33. Yeganeh, M. & Rasouli, S. Investigation of the moiré patterns of defected radial and circular gratings using the reciprocal vectors approach. J. Opt. Soc. Am. A 33, 416–425 (2016).
34. Arfken, G. B. Mathematical Methods for Physicists, 3rd edn (Academic Press, 1985).
35. Jeffrey, A. & Zwillinger, D. (eds) Table of Integrals, Series, and Products (Academic Press, 2007).
36. Reynolds, G. O., DeVelis, J. B., Parrent, G. B. & Thompson, B. J. The New Physical Optics Notebook: Tutorials in Fourier Optics (American Institute of Physics, 1989).

## Acknowledgements

The authors would like to thank Dr. Bahman Farnudi from IASBS for the linguistic editing of the paper. This work was supported in part by the IASBS Research Council under Grants No. G2018IASBS12632 and No. G2019IASBS12632.

## Author information

S.R. designed the research. A.M.K. and S.R. performed the experiments, simulations, and theoretical work. S.R. performed the analysis and interpretation of the results. S.R. drafted the manuscript. Correspondence to Saifollah Rasouli.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.
### Nagoya Differential Equations Seminar

Seminar organizers: 杉本充 (Mitsuru Sugimoto), 菱田俊明 (Toshiaki Hishida), 津川光太郎 (Kotaro Tsugawa), 加藤淳 (Jun Kato), 寺澤祐高 (Yutaka Terasawa)

Academic year 2018

October 22 (Mon)

We prove existence of weak solutions for a diffuse interface model for the flow of two viscous incompressible Newtonian fluids with different densities in a bounded domain in two and three space dimensions. In contrast to previous works, we study a model with a singular non-local free energy, which controls the fractional Sobolev norm of the volume fraction. We show existence of weak solutions for large times with the aid of an implicit time discretization. This talk is based on joint work with Helmut Abels (Regensburg).

October 29 (Mon)

November 5 (Mon)

[Intensive lecture course] November 12–16

- Review of the spectral theory of self-adjoint operators
- Kato's theory of smooth perturbations
- Uniform Sobolev estimates for Schrödinger operators
- Application to two-body scattering theory: existence and asymptotic completeness of wave operators

November 19 (Mon)

November 26 (Mon)

Past seminars

April 16 (Mon)

April 23 (Mon)

Levinson's theorem asserts that a certain quantity arising from the scattering of a quantum system equals the number of bound states of that system. Since Levinson (1949) proved it for Schrödinger operators with spherically symmetric potentials, it has been investigated for a variety of models. While most of the standard proofs rest on complex analysis, Kellendonk–Richard (2007) proposed an entirely different approach. Using the K-theory of C*-algebras, a tool from topology, they revealed that Levinson's theorem is in fact an Atiyah–Singer index theorem. Consequently, when the number of bound states is finite, the proof of Levinson's theorem reduces to a problem about certain C*-algebras. What happens, then, when there are infinitely many bound states? In this talk, taking certain Schrödinger operators on the half-line as a model, we consider Levinson's theorem in the case of infinitely many bound states. For this model, by considering the C*-algebra generated by (almost) periodic pseudodifferential operators, we show that a meaningful identity is obtained even in the infinite case. This talk is based on joint work with S. Richard (Nagoya University).

May 7 (Mon)

We study the Cauchy problem of the linear damped wave equation and give sharp $L^p$-$L^q$ estimates of the solution. This is an improvement of the so-called Matsumura estimates. Moreover, as its application, we consider the nonlinear problem with slowly decaying initial data, and determine the critical exponent. In particular, we prove that small data global existence holds in the critical case if the initial data does not belong to $L^1$. This talk is based on joint work with Masahiro Ikeda (RIKEN), Mamoru Okamoto (Shinshu University), and Takahisa Inui (Osaka University).
May 14 (Mon)

May 21 (Mon)

May 28 (Mon)

We study the periodic traveling wave solutions of the derivative nonlinear Schrödinger equation (DNLS). It is known that (DNLS) has two types of solitons on the whole line; one has exponential decay and the other has algebraic decay. The latter corresponds to the soliton for the massless case. In the new global results recently obtained by Fukaya, Hayashi, and Inui, the properties of the two-parameter family of solitons are essentially used in the proof, and the soliton for the massless case in particular plays an important role. To investigate further properties of the solitons, we construct exact periodic traveling wave solutions which yield the solitons on the whole line, including the massless case, in the long-period limit. Moreover, we study the regularity of the convergence of these exact solutions in the long-period limit.

June 4 (Mon)

In this talk I will focus on the asymptotic behavior of nonsmooth radial solutions of semilinear Schrödinger equations with a barely supercritical nonlinearity (i.e., a nonlinearity that grows faster than the critical power but not faster than a logarithm). It is known that we have scattering of smooth radial solutions of defocusing loglog energy-supercritical Schrödinger equations. I will recall the techniques used to prove this result. Then I will explain how we can use Jensen-type inequalities to prove scattering of nonsmooth radial solutions of defocusing loglog energy-supercritical Schrödinger equations.

June 11 (Mon)

June 18 (Mon)

In this talk, I will discuss a critical exponent for semilinear wave equations with time-dependent damping. When the damping is "effective," the critical exponent is the Fujita exponent, which is known to be the one for semilinear heat equations.
Recently, by showing a sub-critical blow-up result, I have introduced a new conjecture that the critical exponent is the Strauss exponent, which is known to be the one for semilinear wave equations, as far as the damping is "scattering." I will also discuss other nonlinearities and an intermediate situation, namely the scaling-invariant case. All the results in this talk are joint work with Ning-An Lai (Lishui University, China).

July 2 (Mon)

[Intensive lecture course] July 9–13

1. Introduction (motivation)
2. Definitions
3. Examples of equations covered by the theory
4. Comparison principle (uniqueness)
5. Comparison theorems
6. Comparison principle revisited
7. Maximum principle
8. Harnack inequality

[Workshop] September 18 (Tue)–19 (Wed): "Workshop on the Navier–Stokes flow"

Program: PDF file (Website)

October 1 (Mon)

In this talk we show boundedness of spectral multipliers for Schrödinger operators on an arbitrary open set. Furthermore, we present its application to the theory of Besov spaces and bilinear estimates. This talk is based on joint work with T. Iwabuchi (Tohoku Univ.) and T. Matsuyama (Chuo Univ.).

October 15 (Mon) [held jointly with the Nagoya Probability Seminar]

In this talk we give a simple introduction to Gubinelli–Imkeller–Perkowski's paracontrolled calculus. (This is basically a survey talk, but at the end we may present our own results a little.) This theory solves many formerly ill-defined but physically important stochastic PDEs and is now competing with Hairer's regularity structure theory. Fortunately, paracontrolled calculus is based on existing theories and is therefore not too big. It uses Besov space theory, in particular Bony's paradifferential calculus. To make our presentation clear to non-experts, we give up generality and focus on the most important example, namely the 3D dynamic $\Phi^4$-model (also known as the 3D stochastic quantization equation). It is a singular SPDE on $(0, \infty) \times T^3$ and looks like this: $$\partial_t u = \triangle_x u - u^3 + \xi \quad (\text{with } u_0 \text{ given}).$$ Here, $\xi$ is a space-time white noise and $T^3$ is the 3-dimensional torus.
### Author Topic: (Question) How to get second reference when using Perturbation Theory?

#### Mr Rebooted

##### (Question) How to get second reference when using Perturbation Theory?

« on: November 07, 2020, 04:23:16 AM »

I've implemented perturbation theory in my fractal explorer, UniFract II, although there are a few problems. The first is shown in the image attached below. You can compare it with the second image: top is perturbation disabled, bottom is perturbation enabled.

To fix this, how would I make another reference point? The first reference is the center of the viewport (a.k.a. the offset).

#### Mr Rebooted

##### Re: How to get second reference when using Perturbation Theory?

« Reply #1 on: November 07, 2020, 04:27:09 AM »

Second image:

#### pauldelbrot

##### Re: How to get second reference when using Perturbation Theory?

« Reply #2 on: November 07, 2020, 05:58:20 AM »

The reference is escaping. Nanoscope's solution to this is to find blocks of pixels that have identical iteration counts down to the fractional part, which also catches most glitches. The largest connected component of the set of such pixels is found, then a pixel belonging to the block is found as near to its center of mass as possible, and then that pixel is calculated using normal arbitrary precision. That serves as the new reference (and also immediately does that one pixel). All other glitched/ref-escaped pixels are then recalculated using this reference. The test for such pixels is then repeated, and the whole procedure continues until they've all been corrected.
So-called "noisy glitches" are dealt with by detecting the loss of precision and bailing early, which results in the noisy glitch (and a perimeter around it) being turned into a "solid" glitch, which the above deals with.

#### Mr Rebooted

##### Re: How to get second reference when using Perturbation Theory?

« Reply #3 on: November 07, 2020, 06:08:02 AM »

So what you're saying is that if the reference escapes, I need to set it back to what the reference's starting value was, right? If not, how would I implement this in code in the fragment shader?

#### Adam Majewski

##### Re: How to get second reference when using Perturbation Theory?

« Reply #4 on: November 07, 2020, 11:27:55 AM »

> "Noisy glitches" are dealt with by detecting the loss of precision and bailing early, which results in the noisy glitch (and a perimeter around it) being turned into a "solid" glitch, which the above deals with.

Is it OK to say:

noisy glitches = isolated pixels

solid glitch = the largest connected component of the set of such pixels

https://en.wikibooks.org/wiki/Fractals/Image_noise#glitches

#### gerrit

##### Re: How to get second reference when using Perturbation Theory?

« Reply #5 on: November 07, 2020, 07:58:53 PM »

As far as I know, nobody knows the best method to select the next reference point. Knighty's SMB, which I think is still a bit faster than KF though it's more of a testbed than a usable renderer, puts the glitches in distinct sets (G1, ..., Gn) with the same iteration number where the glitch was detected. The next references are then one random pixel from each of G1, ..., Gn, each used only to recalculate the pixels in its own set G. Secondary glitches simply generate another set G; you put them in some queue or stack and keep going at it until no G sets are left. When dealing with glitched pixels you can use the same series expansion as a starting point.
I've experimented a lot with using non-escaping reference points (centers of minibrots or Misiurewicz points), so that glitches never occur because the reference does not escape, but I never found a reliable way to do that, though it can be faster (fewer glitches). Search for the SMB code, which is apparently easy to understand; I can almost understand it.

Other methods of great ingenuity have been proposed, especially one by Claude which allows you to not just start over after glitching but seamlessly continue with a new non-escaping reference, but it does not always work AFAIK.

It's been a while since I worked on this; if I garbled anything, someone will hopefully correct me.

#### claude

##### Re: How to get second reference when using Perturbation Theory?

« Reply #6 on: November 08, 2020, 11:56:31 AM »

> glitches never occur because the reference does not escape

There are (at least) two types of glitches:

1. The reference escapes early; this type can be avoided by picking a non-escaping reference.
2. Too-different dynamics (detected easily by Pauldelbrot's heuristic, detected accurately by gerrit's backwards error analysis; I think knighty had another method too).

Type 1 can be improved by picking a random glitched pixel as the new reference and retrying.

Type 2 can sometimes be fixed by using the pixel with minimum |z| at the glitch iteration (possibly using the derivative for one step of Newton's method to move it closer to a miniset), but often picking a random pixel works just as well. The advantage of minimum |z| comes for the Mandelbrot set, where you don't need to restart iterations from the beginning, because minisets are periodic through 0 and the period is the glitch iteration (I use this in my mandelbrot-perturbator thing).

KF uses an algorithm I don't really understand for finding the "glitch center" based on pixel regions, but it also has options for random choice and minimum |z| (without the fancy stuff from mandelbrot-perturbator).
#### Mr Rebooted

##### Re: How to get second reference when using Perturbation Theory?

« Reply #7 on: November 09, 2020, 10:38:05 PM »

> Type 1 can be improved by picking a random glitched pixel as the new reference and retrying

In code, how would I find a random glitched pixel? (Sorry, I'm a beginner to perturbation theory...)

#### Mr Rebooted

##### Re: How to get second reference when using Perturbation Theory?

« Reply #8 on: November 09, 2020, 11:43:50 PM »

Checking for the glitch just made things look bad. How do I even check for the glitch without it acting up like this? I check it like this:

Code: [Select]

    if (dot(z + dz, z + dz) / dot(z, z) < 1.0e-3) {
        glitchedPixel = true;
    } else {
        glitchedPixel = false;
    }

#### claude

##### Re: How to get second reference when using Perturbation Theory?

« Reply #9 on: November 10, 2020, 01:25:29 PM »

> how would I find a random glitched pixel?
There are at least two ways I can think of; the first is simpler, but the second can be parallelized more easily.

1. Loop over the image and count the glitched pixels. If there are any, pick a random number in [0..count). Loop over the image again, counting glitched pixels, stopping when you reach the number you picked. Output the coordinates at that point.

2. Loop over the image. For each glitched pixel, pick a random fractional number in [0..1). Keep track of the coordinates of the point with the smallest number, and output it at the end. Make sure you detect the case of no glitched pixels at all.

#### Mr Rebooted

##### Re: How to get second reference when using Perturbation Theory?

« Reply #10 on: November 10, 2020, 01:56:12 PM »

Just by simplifying that, you've made my life a whole lot easier. Thanks!

#### Mr Rebooted

##### Re: How to get second reference when using Perturbation Theory?

« Reply #11 on: November 10, 2020, 02:03:44 PM »

Problem solved!!
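Claude's second method (give every glitched pixel a random key and keep the smallest) can be sketched on the host side; this is an illustrative sketch in Python, not UniFract II or KF code, and the grid representation is an assumption:

```python
import random

def pick_random_glitched(glitched):
    """Pick a uniformly random glitched pixel from a 2D grid of booleans.

    Implements the second method above: assign each glitched pixel a random
    fractional key in [0..1) and keep the coordinates with the smallest key.
    Because only a running minimum is kept, tiles of the image can be scanned
    in parallel and their per-tile minima merged afterwards.
    Returns (row, col), or None if no pixel is glitched.
    """
    best_key, best_coord = None, None
    for r, row in enumerate(glitched):
        for c, is_glitched in enumerate(row):
            if is_glitched:
                key = random.random()
                if best_key is None or key < best_key:
                    best_key, best_coord = key, (r, c)
    return best_coord
```

Each glitched pixel is equally likely to hold the overall minimum key, so the choice is uniform.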
# There's gotta be an easier way

## Homework Statement

Determine the points on the surface $xy^2z^3 = 2$ that are closest to the origin.

## The Attempt at a Solution

Is there an easier way to do this than to plug it into the distance formula and take the derivative set to 0?

HallsofIvy, Homework Helper:

Use the square of the distance formula. Also, and this may be what you are looking for, instead of using $xy^2z^3 = 2$ to replace one variable with the other two, use "Lagrange multipliers". If we write $f(x,y,z) = x^2 + y^2 + z^2$, the square of the distance to the origin, and $g(x,y,z) = xy^2z^3 = 2$, then max or min values of $f$, for points that satisfy $g(x,y,z) = 2$, must have $\nabla f$ parallel to $\nabla g$; one must be a multiple of the other. Setting $\nabla f = \lambda \nabla g$ and comparing the components, together with $g(x,y,z) = 2$, gives 4 equations to solve for $x$, $y$, $z$, and $\lambda$.

Tip: since you are not interested in the value of $\lambda$, and $\lambda$ is simply multiplied by the functions of $x$, $y$, and $z$, often the best first thing to do is to divide one equation by another to eliminate $\lambda$.

Last edited by a moderator.

matt grime, Homework Helper:

In case you need to look that up, Halls meant Lagrange multipliers, not Laplace.

HallsofIvy, Homework Helper:

Thanks, matt, I have edited that.

OK, I got a little stuck. Here's my work (writing $k$ for the Lagrange multiplier):

$f(x,y,z) = x^2 + y^2 + z^2$

$g(x,y,z) = xy^2z^3 = 2$

$\nabla f = 2x\,\mathbf{i} + 2y\,\mathbf{j} + 2z\,\mathbf{k}$

$k \nabla g = k(y^2z^3\,\mathbf{i} + 2xyz^3\,\mathbf{j} + 3xy^2z^2\,\mathbf{k})$

$k = 2x/(y^2z^3)$

$k = 1/(xz^3)$

$k = 2/(3xy^2z)$

and I'm lost.

matt grime, Homework Helper:

$y = \sqrt{2x^2}$

$x = \sqrt{y^2/2}$

$z = \sqrt{3y^2/2}$
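matt grime's relations can be checked numerically. The Python sketch below is not part of the thread: it substitutes $y^2 = 2x^2$ and $z^2 = 3y^2/2 = 3x^2$ into the constraint, which forces $6\sqrt{3}\,x^6 = 2$, i.e. $x = 3^{-1/4}$ for the positive root, and then verifies that $\nabla f$ is a scalar multiple of $\nabla g$ at that point:

```python
import math

# From y^2 = 2x^2 and z^2 = 3x^2, the constraint x*y^2*z^3 = 2 becomes
# x * (2x^2) * (sqrt(3)x)^3 = 6*sqrt(3)*x^6 = 2, so x = 3**(-1/4).
x = 3 ** (-0.25)
y = math.sqrt(2) * x
z = math.sqrt(3) * x

# gradients of f = x^2 + y^2 + z^2 and g = x*y^2*z^3
grad_f = (2 * x, 2 * y, 2 * z)
grad_g = (y**2 * z**3, 2 * x * y * z**3, 3 * x * y**2 * z**2)

# if grad f = lambda * grad g, the three component ratios must all agree
constraint = x * y**2 * z**3
ratios = [f_i / g_i for f_i, g_i in zip(grad_f, grad_g)]
```

The common ratio is the multiplier $\lambda$ itself; the signs of $y$ and of the pair $(x, z)$ can be flipped as long as $xz^3$ stays positive, giving the other closest points.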
# Construct Triangle by Angle Bisector, Altitude, and Side

### Problem

Construct $\Delta ABC,$ given $h_a,$ $l_a,$ and $a.$

### Analysis

It is sufficient to determine the circumcircle of $\Delta ABC.$

### Construction

Step 1: First determine the direction of the angle bisector relative to the line $BC.$ To this end, construct right $\Delta XYZ,$ with $XY=h_a$ and $YZ=l_a.$ $YZ$ defines the required direction. Draw $YW\parallel BC,$ with $W$ on the perpendicular bisector of $BC.$

Step 2: Select an arbitrary point $P'$ on the perpendicular bisector of $BC$ and find point $A'$ on $YW$ such that $A'P'\parallel YZ.$ Let $O'$ be the intersection of the perpendicular bisectors of $BC$ and $A'P'.$ Consider circle $(O')$ with center $O'$ through $A'$ and $P'.$ If it also passes through $B$ and $C,$ the problem is solved: $A=A'.$

Step 3: Let $C'$ be the intersection of $(O')$ and $CW.$

Step 4: Draw $CO\parallel C'O',$ with $O$ on the perpendicular bisector of $BC.$

Step 5: Circle $(O)$ with center $O$ through $B$ and $C$ cuts $YW$ in $A,$ the missing vertex of the sought triangle.

### Proof

Note that the circles $(O')$ obtained due to various selections of $P'$ are homothetic at $W,$ so that all lines $C'O'$ are parallel. Since $P'$ is always on the perpendicular bisector of $BC$ and thus is the lowest point of $(O'),$ for the circle $(O)$ through $B$ and $C,$ $P$ is the midpoint of the arc $BC,$ making $AP$ the bisector of $\angle BAC.$ Naturally, the part of $AP$ between the parallels $BC$ and $YW$ equals $YZ=l_a.$

### Acknowledgment

The construction is due to Prof. Dr. René Sperb.
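The construction can be cross-checked numerically. The Python sketch below is my own illustration, not Prof. Sperb's ruler-and-compass procedure: it places $B = (-a/2, 0)$ and $C = (a/2, 0)$, keeps $A$ on the line $y = h_a$ (so the altitude is automatic), and solves for the apex position whose bisector from $A$ has length $l_a$, assuming the bisector length grows with the horizontal offset of $A$ and taking the solution with $x \ge 0$ (the mirror image also works):

```python
import math

def bisector_length(x, a, h_a):
    """Length of the angle bisector from A = (x, h_a) to side BC."""
    A = (x, h_a)
    B, C = (-a / 2, 0.0), (a / 2, 0.0)
    AB = math.dist(A, B)
    AC = math.dist(A, C)
    # angle-bisector theorem: the foot D divides BC with BD/DC = AB/AC
    dx = (B[0] * AC + C[0] * AB) / (AB + AC)
    return math.dist(A, (dx, 0.0))

def solve_apex_x(a, h_a, l_a, hi=1e6):
    """Bisection for x >= 0 with bisector_length(x) = l_a (needs l_a >= h_a)."""
    lo = 0.0  # at x = 0 the triangle is isosceles and the bisector equals h_a
    for _ in range(200):
        mid = (lo + hi) / 2
        if bisector_length(mid, a, h_a) < l_a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Feeding in the $(a, h_a, l_a)$ of a known triangle recovers its apex, which is a quick sanity check that the three data determine the triangle (up to reflection).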
# How to scan diagrams with Engauge

“The Engauge Digitizer tool accepts image files (like PNG, JPEG and TIFF) containing graphs, and recovers the data points from those graphs.” Source

### Step by step guide:

1. Go to File -> Import to import a file, or run engauge <Name-Of-The-File> from the command line.
2. From the Background Toolbar (top left), change the option from Filtered image to Original image.
3. Depending on how many curves you wish to digitize from the graph, go to Settings -> Curve List…. From there you can add, remove, and rename the separate curves. The names of the curves should correspond to the names of the actual curves drawn.
4. The way the curve points are connected to one another varies and can be smooth, which is the default, or straight. To change that, go to Settings -> Curve Properties... -> Connect as and choose Function - Straight.
5. Select the Axis Point Tool from the toolbar. You need to set 3 axis points.
   1. The first one will be the start of the graph (bottom left corner), which usually corresponds to (0,0), but not always.
   2. The second point should be on the X-axis line, but not too close to the first point, because the error introduced by pointing by hand can affect the results. It should be placed far enough away that this error matters less.
   3. The third and last point will be on the Y-axis. Again, it should follow the same guidelines as the second point.
6. To start digitizing a curve, select the Curve Point Tool from the toolbar, select the curve under Currently selected curve, zoom in, and click along the curve from start to end. If you have to do this for more than one curve, change the curve selection and repeat. The settings of step 4 also have to be repeated for each curve if needed.

After the completion, save the file by going to File -> Save…, and if you wish to export it as a CSV file, go to File -> Export…. Choose a name that provides information about the graph.
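The three axis points in step 5 are what let the tool convert clicked pixel positions into graph coordinates: together they determine an affine map. The Python sketch below shows the underlying math only; it is a hypothetical illustration, not Engauge's actual code, and `make_pixel_to_data` is a name I made up:

```python
def make_pixel_to_data(px, data):
    """px, data: lists of three (x, y) pairs, pixel and graph coordinates
    of the three axis points. Returns a function mapping pixel -> data."""
    (p0, p1, p2), (d0, d1, d2) = px, data
    # 2x2 matrix whose columns are the two pixel-space axis directions
    a, b = p1[0] - p0[0], p2[0] - p0[0]
    c, d = p1[1] - p0[1], p2[1] - p0[1]
    det = a * d - b * c
    assert det != 0, "axis points must not be collinear"

    def to_data(p):
        # coordinates (s, t) of p - p0 in the pixel basis, via the 2x2 inverse
        vx, vy = p[0] - p0[0], p[1] - p0[1]
        s = (d * vx - b * vy) / det
        t = (-c * vx + a * vy) / det
        # same combination of the data-space axis directions
        return (d0[0] + s * (d1[0] - d0[0]) + t * (d2[0] - d0[0]),
                d0[1] + s * (d1[1] - d0[1]) + t * (d2[1] - d0[1]))

    return to_data
```

This also makes the advice in step 5 concrete: the farther apart the axis points, the larger the determinant relative to a fixed click error, so the less a one-pixel slip perturbs every converted point.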
Home > Informal Classroom Notes > “Most Likely” is an All or Nothing Proposition “Most Likely” is an All or Nothing Proposition The principle of maximum likelihood estimation is generally not explained well; readers are made to believe that it should be obvious to them that choosing the “most likely outcome” is the most sensible thing to do.  It isn’t obvious, and it need not be the most sensible thing to do. First, recall the statement I made in an earlier paper: The author believes firmly that asking for an estimate of a parameter is, a priori, a meaningless question. It has been given meaning by force of habit. An estimate only becomes useful once it is used to make a decision, serving as a proxy for the unknown true parameter value. Decisions include: the action taken by a pilot in response to estimates from the flight computer; an automated control action in response to feedback; and, what someone decides they hear over a mobile phone (with the pertinent question being whether the estimate produced by the phone of the transmitted message is intelligible). Without knowing the decision to be made, whether an estimator is good or bad is unanswerable. One could hope for an estimator that works well for a large class of decisions, and the author sees this as the context of estimation theory. Consider the following problem.  Assume two coins are tossed, but somehow the outcome of the first coin influences the outcome of the second coin.  Specifically, the possible outcomes (H = heads, T = tails) and their probabilities are: HH $0.35$; HT $0.05$; TH $0.3$; TT $0.3$.  Given these probabilities, what is our best guess as to the outcome?  We have been conditioned to respond by saying that the most likely outcome is the one with the highest probability, namely, HH.  What is our best guess as to the outcome of the first coin only?  Well, there is $0.35 + 0.05 = 0.4$ chance it will be H and $0.3 + 0.3 = 0.6$ chance it will be T, so the most likely outcome is T.  
How can it be that the most likely outcome of the first coin is T but the most likely outcome of both coins is HH? The (only) way to understand this sensibly is to think in terms of how the estimate will be used.  What “most likely” really means is that it is the best strategy to use when placing an all-or-nothing bet.  If I must bet on the outcome of the two coins, and I win \$1 if I guess correctly and win nothing otherwise, my best strategy is to bet on HH.  If I must bet on the outcome of the first coin, the best strategy is to bet on T.  This is not a contradiction because betting on the first coin being T is the same as betting on the two coins being either TH or TT.  I can now win in two cases, not just one; it is a different gamble. The above is not an idle example.  In communications, the receiver must estimate what symbols were sent.  A typical mathematical formulation of the problem is estimating the state of a hidden Markov chain.  One can choose to estimate the most likely sequence of states or the most likely state at a particular instance.  The above example explains the difference and helps determine which is the more appropriate estimate to use. Finally, it is noted that an all-or-nothing bet is not necessarily the most appropriate way of measuring the performance of an estimator.  For instance, partial credit might be given for being close to the answer, so if I guess two coins correctly I win \$2, if I guess one coin correctly I win \$1, otherwise I win nothing.  This can be interpreted as “regularising” the maximum likelihood estimate.  Nevertheless, at the end of the day, the only way to understand an estimator is in the broader context of the types of decisions that can be made well by using that estimator.
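The two-coin example can be worked in a few lines of code: the joint maximizer and the per-coin maximizer disagree, which is exactly the "different gamble" point made above. A small Python sketch:

```python
# joint distribution of the two coins, as given in the example
probs = {"HH": 0.35, "HT": 0.05, "TH": 0.30, "TT": 0.30}

# best all-or-nothing bet on both coins: the most likely joint outcome
best_joint = max(probs, key=probs.get)

# best all-or-nothing bet on the first coin alone: marginalize, then maximize
marginal = {}
for outcome, p in probs.items():
    first = outcome[0]
    marginal[first] = marginal.get(first, 0.0) + p
best_first = max(marginal, key=marginal.get)
```

Here `best_joint` is HH while `best_first` is T: betting on the first coin being T is a bet on the event {TH, TT}, whose probability 0.6 exceeds that of any single joint outcome.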
Managed IT ServicesBusiness ContinuityEMR TransitionTrainingResidential SolutionsSocial Media Services Address PO Box 817, Morgantown, WV 26507 (304) 685-0394 http://www.itrendtechnology.com # mean average square error Core, West Virginia ISBN0-387-98502-6. There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application.[6] Like variance, mean squared error has the Further, while the corrected sample variance is the best unbiased estimator (minimum mean square error among unbiased estimators) of variance for Gaussian distributions, if the distribution is not Gaussian then even The RMSE is directly interpretable in terms of measurement units, and so is a better measure of goodness of fit than a correlation coefficient. The two should be similar for a reasonable fit. **using the number of points - 2 rather than just the number of points is required to account for the fact that A red vertical line is drawn from the x-axis to the minimum value of the MSE function. Contents 1 Definition and basic properties 1.1 Predictor 1.2 Estimator 1.2.1 Proof of variance and bias relationship 2 Regression 3 Examples 3.1 Mean 3.2 Variance 3.3 Gaussian distribution 4 Interpretation 5 Mean Squared Error Example General steps to calculate the mean squared error from a set of X and Y values: Find the regression line. This would be the line with the best fit. That being said, the MSE could be a function of unknown parameters, in which case any estimator of the MSE based on estimates of these parameters would be a function of Subtract the new Y value from the original to get the error. McGraw-Hill. 
Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds.[1] The mathematical benefits of Vernier Software & Technology Caliper Logo Vernier Software & Technology 13979 SW Millikan Way Beaverton, OR 97005 Phone1-888-837-6437 Fax503-277-2440 [email protected] Resources Next Generation Science Standards Standards Correlations AP Correlations IB Correlations ISBN0-495-38508-5. ^ Steel, R.G.D, and Torrie, J. Related TILs: TIL 1869: How do we calculate linear fits in Logger Pro? What does the Mean Squared Error Tell You? Add up the errors. Values of MSE may be used for comparative purposes. In the applet, construct a frequency distribution with at least 5 nonempty classes and and at least 10 values total. The difference occurs because of randomness or because the estimator doesn't account for information that could produce a more accurate estimate.[1] The MSE is a measure of the quality of an Advice Email Print Embed Copy & paste this HTML in your website to link to this page mean squared error Browse Dictionary by Letter: # A B C D E F If the statistic and the target have the same expectation, , then       In many instances the target is a new observation that was not part of the analysis. The goal of experimental design is to construct experiments in such a way that when the observations are analyzed, the MSE is close to zero relative to the magnitude of at Mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use under a In the applet, set the class width to 0.1 and construct a distribution with at least 30 values of each of the types indicated below. 
Copyright © 2016 Statistics How To Theme by: Theme Horse Powered by: WordPress Back to Top Previous Page | Next Page Previous Page | Next Page Introduction to Statistical Modeling with Statisticshowto.com Apply for $2000 in Scholarship Money As part of our commitment to education, we're giving away$2000 in scholarships to StatisticsHowTo.com visitors. The root mean-square error, RMSE, is the square root of MSE. 3. Because actual rather than absolute values of the forecast errors are used in the formula, positive and negative forecast errors can offset each other; as a result the formula can be If is an unbiased estimator of —that is, if —then the mean squared error is simply the variance of the estimator. That being said, the MSE could be a function of unknown parameters, in which case any estimator of the MSE based on estimates of these parameters would be a function of If we say that the number t is a good measure of center, then presumably we are saying that t represents the entire distribution better, in some way, than other numbers. For example, in models where regressors are highly collinear, the ordinary least squares estimator continues to be unbiased. If the estimator is derived from a sample statistic and is used to estimate some population statistic, then the expectation is with respect to the sampling distribution of the sample statistic. Applications Minimizing MSE is a key criterion in selecting estimators: see minimum mean-square error. The MSE has the units squared of whatever is plotted on the vertical axis. This property, undesirable in many applications, has led researchers to use alternatives such as the mean absolute error, or those based on the median. Further, while the corrected sample variance is the best unbiased estimator (minimum mean square error among unbiased estimators) of variance for Gaussian distributions, if the distribution is not Gaussian then even Also, explicitly compute a formula for the MSE function. 5. 
Sign up for our FREE newsletter today! © 2016 WebFinance Inc. Mean, Variance and Standard Deviation Recall from Section 2 that the mean, variance, and standard deviation of a distribution are given by The mean is a very natural measure of center, The fourth central moment is an upper bound for the square of variance, so that the least value for their ratio is one, therefore, the least value for the excess kurtosis Estimator The MSE of an estimator θ ^ {\displaystyle {\hat {\theta }}} with respect to an unknown parameter θ {\displaystyle \theta } is defined as MSE ⁡ ( θ ^ ) That is, the n units are selected one at a time, and previously selected units are still eligible for selection for all n draws. Find a Critical Value 7. A unimodal distribution that is skewed right. Mathematical Statistics with Applications (7 ed.). Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds.[1] The mathematical benefits of backorder ABC analysis inventory stockout inventory days kitting just in time (J... so that ( n − 1 ) S n − 1 2 σ 2 ∼ χ n − 1 2 {\displaystyle {\frac {(n-1)S_{n-1}^{2}}{\sigma ^{2}}}\sim \chi _{n-1}^{2}} . In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the However, one can use other estimators for σ 2 {\displaystyle \sigma ^{2}} which are proportional to S n − 1 2 {\displaystyle S_{n-1}^{2}} , and an appropriate choice can always give Step 1:Find the regression line. In statistical modelling the MSE, representing the difference between the actual observations and the observation values predicted by the model, is used to determine the extent to which the model fits This definition for a known, computed quantity differs from the above definition for the computed MSE of a predictor in that a different denominator is used. 
Q) If the ratio of the mean and median of a certain data set is 2:3, then find the ratio of its mode and median. ( A ) ratio 5:3 ( B ) ratio is 2:3 ( C ) ratio is 3:5 ( D ) ratio is 3:2
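The printed question repeats "mean and median" in both places; assuming the intended ask is the mode-to-median ratio, the standard empirical relation mode ≈ 3·median − 2·mean settles it (variable names below are ours):

```python
from fractions import Fraction

# Take mean : median = 2 : 3, e.g. mean = 2, median = 3.
mean, median = 2, 3

# Empirical relation for moderately skewed data: mode ≈ 3*median - 2*mean.
mode = 3 * median - 2 * mean  # 9 - 4 = 5

ratio = Fraction(mode, median)
print(ratio)  # 5/3, i.e. option (A)
```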
dimensional-1.0.1.3: Statically checked physical dimensions, using Type Families and Data Kinds.

Numeric.Units.Dimensional

Description

Summary

In this module we provide data types for performing arithmetic with physical quantities and units. Information about the physical dimensions of the quantities/units is embedded in their types and the validity of operations is verified by the type checker at compile time. The boxing and unboxing of numerical values as quantities is done by multiplication and division of units, of which an incomplete set is provided.

We limit ourselves to "Newtonian" physics. We do not attempt to accommodate relativistic physics in which e.g. addition of length and time would be valid.

As far as possible and/or practical the conventions and guidelines of NIST's "Guide for the Use of the International System of Units (SI)" [1] are followed. Occasionally we will reference specific sections from the guide and deviations will be explained.

Disclaimer

Merely an engineer, the author doubtlessly uses a language and notation that makes mathematicians and physicists cringe. He does not mind constructive criticism (or pull requests). The sets of functions and units defined herein are incomplete and reflect only the author's needs to date. Again, patches are welcome.

Usage

Preliminaries

This module requires GHC 7.8 or later. We utilize Data Kinds, TypeNats, Closed Type Families, etc. Clients of the module are generally not required to use these extensions. Clients probably will want to use the NegativeLiterals extension.

Examples

We have defined operators and units that allow us to define and work with physical quantities. A physical quantity is defined by multiplying a number with a unit (the type signature is optional).

v :: Velocity Prelude.Double
v = 90 *~ (kilo meter / hour)

It follows naturally that the numerical value of a quantity is obtained by division by a unit.
numval :: Prelude.Double
numval = v /~ (meter / second)

The notion of a quantity as the product of a numerical value and a unit is supported by 7.1 "Value and numerical value of a quantity" of [1]. While the above syntax is fairly natural it is unfortunate that it must violate a number of the guidelines in [1], in particular 9.3 "Spelling unit names with prefixes", 9.4 "Spelling unit names obtained by multiplication", 9.5 "Spelling unit names obtained by division".

As a more elaborate example of how to use the module we define a function for calculating the escape velocity of a celestial body [2].

escapeVelocity :: (Floating a) => Mass a -> Length a -> Velocity a
escapeVelocity m r = sqrt (two * g * m / r)
  where
    two = 2 *~ one
    g = 6.6720e-11 *~ (newton * meter ^ pos2 / kilo gram ^ pos2)

The following is an example GHC session where the above function is used to calculate the escape velocity of Earth in kilometer per second.

>>> :set +t
>>> let me = 5.9742e24 *~ kilo gram -- Mass of Earth.
me :: Quantity DMass GHC.Float.Double
>>> let re = 6372.792 *~ kilo meter -- Mean radius of Earth.
re :: Quantity DLength GHC.Float.Double
>>> let ve = escapeVelocity me re -- Escape velocity of Earth.
ve :: Velocity GHC.Float.Double
>>> ve /~ (kilo meter / second)
11.184537332296259
it :: GHC.Float.Double

For completeness we should also show an example of the error messages we will get from GHC when performing invalid arithmetic. In the best case GHC will be able to use the type synonyms we have defined in its error messages.

>>> x = 1 *~ meter + 1 *~ second
Couldn't match type 'Numeric.NumType.DK.Integers.Zero
               with 'Numeric.NumType.DK.Integers.Pos1
Expected type: Unit 'Metric DLength a
  Actual type: Unit 'Metric DTime a
In the second argument of '(*~)', namely 'second'
In the second argument of '(+)', namely '1 *~ second'

In other cases the error messages aren't very friendly.
>>> x = 1 *~ meter / (1 *~ second) + 1 *~ kilo gram
Couldn't match type 'Numeric.NumType.DK.Integers.Zero
               with 'Numeric.NumType.DK.Integers.Neg1
Expected type: Quantity DMass a
  Actual type: Dimensional
                 ('Numeric.Units.Dimensional.Variants.DQuantity
                  Numeric.Units.Dimensional.Variants.*
                  'Numeric.Units.Dimensional.Variants.DQuantity)
                 (DLength / DTime) a
In the first argument of '(+)', namely '1 *~ meter / (1 *~ second)'
In the expression: 1 *~ meter / (1 *~ second) + 1 *~ kilo gram
In an equation for 'x': x = 1 *~ meter / (1 *~ second) + 1 *~ kilo gram

It is the author's experience that the usefulness of the compiler error messages is more often than not limited to pinpointing the location of errors.

Notes

Future work

While there is an insane amount of units in use around the world it is reasonable to provide at least all SI units. Units outside of SI will most likely be added on an as-needed basis. There are also plenty of elementary functions to add. The Floating class can be used as reference. Additional physics models could be implemented. See [3] for ideas.

Related work

Henning Thielemann's numeric prelude has a physical units library, however, checking of dimensions is dynamic rather than static. Aaron Denney has created a toy example of statically checked physical dimensions covering only length and time. HaskellWiki has pointers [4] to these. Also see Samuel Hoffstaetter's blog post [5] which uses techniques similar to this library.

Libraries with similar functionality exist for other programming languages and may serve as inspiration. The author has found the Java library JScience [6] and the Fortress programming language [7] particularly noteworthy.

Synopsis

Types

Our primary objective is to define a data type that can be used to represent (while still differentiating between) units and quantities. There are two reasons for consolidating units and quantities in one data type. The first being to allow code reuse as they are largely subject to the same operations.
The second being that it allows reuse of operators (and functions) between the two without resorting to occasionally cumbersome type classes.

The relationship between (the value of) a Quantity, its numerical value and its Unit is described in 7.1 "Value and numerical value of a quantity" of [1]. In short a Quantity is the product of a number and a Unit. We define the *~ operator as a convenient way to declare quantities as such a product.

type Unit m = Dimensional (DUnit m)

A unit of measurement.

A dimensional quantity.

data Metricality

Encodes whether a unit is a metric unit, that is, whether it can be combined with a metric prefix to form a related unit.

Constructors:
Metric: capable of receiving a metric prefix.
NonMetric: incapable of receiving a metric prefix.

(The derived instance listings for Metricality, e.g. Data and Generic, are elided here.)

Physical Dimensions

The phantom type variable d encompasses the physical dimension of a Dimensional. As detailed in [5] there are seven base dimensions, which can be combined in integer powers to form a given physical dimension. We represent physical dimensions as the powers of the seven base dimensions that make up the given dimension. The powers are represented using NumTypes. For convenience we collect all seven base dimensions in a data kind Dimension.

We could have chosen to provide type variables for the seven base dimensions in Dimensional instead of creating a new data kind Dimension. However, that would have made any type signatures involving Dimensional very cumbersome. By encompassing the physical dimension in a single type variable we can "hide" the cumbersome type arithmetic behind convenient type classes as will be seen later.

data Dimension

Represents a physical dimension in the basis of the 7 SI base dimensions, where the respective dimensions are represented by type variables using the following convention.
• l: Length
• m: Mass
• t: Time
• i: Electric current
• th: Thermodynamic temperature
• n: Amount of substance
• j: Luminous intensity

For the equivalent term-level representation, see Dimension'.

Constructors: Dim TypeInt TypeInt TypeInt TypeInt TypeInt TypeInt TypeInt

Instances: (KnownTypeInt l, KnownTypeInt m, KnownTypeInt t, KnownTypeInt i, KnownTypeInt th, KnownTypeInt n, KnownTypeInt j) => HasDimension (Proxy Dimension (Dim l m t i th n j))

Dimension Arithmetic

When performing arithmetic on units and quantities the arithmetic must be applied both to the numerical values of the Dimensionals and to their physical dimensions. The type level arithmetic on physical dimensions is governed by closed type families expressed as type operators.

We could provide the Mul and Div classes with full functional dependencies but that would be of limited utility as there is little use for "backwards" type inference. Efforts are underway to develop a type-checker plugin that does enable these scenarios, e.g. for linear algebra.

type family (a :: Dimension) * (b :: Dimension) where ...   infixl 7

Multiplication of dimensions corresponds to adding the base dimensions' exponents.

Equations:
DOne * d = d
d * DOne = d
(Dim l m t i th n j) * (Dim l' m' t' i' th' n' j') = Dim (l + l') (m + m') (t + t') (i + i') (th + th') (n + n') (j + j')

type family (a :: Dimension) / (d :: Dimension) where ...   infixl 7

Division of dimensions corresponds to subtraction of the base dimensions' exponents.

Equations:
d / DOne = d
d / d = DOne
(Dim l m t i th n j) / (Dim l' m' t' i' th' n' j') = Dim (l - l') (m - m') (t - t') (i - i') (th - th') (n - n') (j - j')

type family (d :: Dimension) ^ (x :: TypeInt) where ...   infixr 8

Powers of dimensions correspond to multiplication of the base dimensions' exponents by the exponent.
We limit ourselves to integer powers of Dimensionals as fractional powers make little physical sense.

Equations:
DOne ^ x = DOne
d ^ Zero = DOne
d ^ Pos1 = d
(Dim l m t i th n j) ^ x = Dim (l * x) (m * x) (t * x) (i * x) (th * x) (n * x) (j * x)

type family Root (d :: Dimension) (x :: TypeInt) where ...

Roots of dimensions correspond to division of the base dimensions' exponents by the order of the root. See sqrt, cbrt, and nroot for the corresponding term-level operations.

Equations:
Root DOne x = DOne
Root d Pos1 = d
Root (Dim l m t i th n j) x = Dim (l / x) (m / x) (t / x) (i / x) (th / x) (n / x) (j / x)

type Recip d = DOne / d

The reciprocal of a dimension is defined as the result of dividing DOne by it, or of negating each of the base dimensions' exponents.

Term Level Representation of Dimensions

To facilitate parsing and pretty-printing functions that may wish to operate on term-level representations of dimension, we provide a means for converting from type-level dimensions to term-level dimensions.

data Dimension'

A physical dimension, encoded as 7 integers, representing a factorization of the dimension into the 7 SI base dimensions. By convention they are stored in the same order as in the Dimension data kind.

Constructors: Dim' !Int !Int !Int !Int !Int !Int !Int

(Instance listings are elided here; notably, Dimension' is a monoid under multiplication of dimensions.)

class HasDimension a where

Dimensional values inhabit this class, which allows access to a term-level representation of their dimension.

Minimal complete definition: dimension

Methods

dimension :: a -> Dimension'

Obtains a term-level representation of a value's dimension.
Instances (method listings elided):

(KnownTypeInt l, KnownTypeInt m, KnownTypeInt t, KnownTypeInt i, KnownTypeInt th, KnownTypeInt n, KnownTypeInt j) => HasDimension (Proxy Dimension (Dim l m t i th n j))

KnownDimension d => HasDimension (Dimensional v d a)

type KnownDimension d = HasDimension (Proxy d)

A KnownDimension is one for which we can construct a term-level representation. Each validly constructed type of kind Dimension has a KnownDimension instance. While KnownDimension is a constraint synonym, the presence of KnownDimension d in a context allows use of dimension :: Proxy d -> Dimension'.

Dimensional Arithmetic

(*~) :: Num a => a -> Unit m d a -> Quantity d a   infixl 7

Forms a Quantity by multiplying a number and a unit.

(/~) :: Fractional a => Quantity d a -> Unit m d a -> a   infixl 7

Divides a Quantity by a Unit of the same physical dimension, obtaining the numerical value of the quantity expressed in that unit.

(^) :: (Fractional a, KnownTypeInt i, KnownVariant v, KnownVariant (Weaken v)) => Dimensional v d1 a -> Proxy i -> Dimensional (Weaken v) (d1 ^ i) a   infixr 8

Raises a Quantity or Unit to an integer power. Because the power chosen impacts the Dimension of the result, it is necessary to supply a type-level representation of the exponent in the form of a Proxy to some TypeInt. Convenience values pos1, pos2, neg1, ... are supplied by the Numeric.NumType.DK.Integers module. The most commonly used ones are also reexported by Numeric.Units.Dimensional.Prelude. The intimidating type signature captures the similarity between these operations and ensures that composite Units are NonMetric.
(^/) :: (KnownTypeInt n, Floating a) => Quantity d a -> Proxy n -> Quantity (Root d n) a   infixr 8

Computes the nth root of a Quantity using **. The Root type family will prevent application of this operator where the result would have a fractional dimension or where n is zero. Because the root chosen impacts the Dimension of the result, it is necessary to supply a type-level representation of the root in the form of a Proxy to some TypeInt. Convenience values pos1, pos2, neg1, ... are supplied by the Numeric.NumType.DK.Integers module. The most commonly used ones are also reexported by Numeric.Units.Dimensional.Prelude. Also available in prefix form, see nroot.

(**) :: Floating a => Dimensionless a -> Dimensionless a -> Dimensionless a   infixr 8

Raises a dimensionless quantity to a floating power using **.

(*) :: (KnownVariant v1, KnownVariant v2, KnownVariant (v1 * v2), Num a) => Dimensional v1 d1 a -> Dimensional v2 d2 a -> Dimensional (v1 * v2) (d1 * d2) a   infixl 7

Multiplies two Quantitys or two Units. The intimidating type signature captures the similarity between these operations and ensures that composite Units are NonMetric.

(/) :: (KnownVariant v1, KnownVariant v2, KnownVariant (v1 * v2), Fractional a) => Dimensional v1 d1 a -> Dimensional v2 d2 a -> Dimensional (v1 * v2) (d1 / d2) a   infixl 7

Divides one Quantity by another or one Unit by another. The intimidating type signature captures the similarity between these operations and ensures that composite Units are NonMetric.

(+) :: Num a => Quantity d a -> Quantity d a -> Quantity d a   infixl 6

Adds two Quantitys.

(-) :: Num a => Quantity d a -> Quantity d a -> Quantity d a   infixl 6

Subtracts one Quantity from another.

negate :: Num a => Quantity d a -> Quantity d a

Negates the value of a Quantity.

abs :: Num a => Quantity d a -> Quantity d a

Takes the absolute value of a Quantity.
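As a hedged usage sketch (the bindings below are ours; we assume the reexports from Numeric.Units.Dimensional.Prelude documented above), the arithmetic operators compose like this:

```haskell
{-# LANGUAGE NoImplicitPrelude #-}
import Numeric.Units.Dimensional.Prelude
import qualified Prelude

-- 100 m in 9.58 s, combined with this module's (/).
speed :: Velocity Prelude.Double
speed = (100 *~ meter) / (9.58 *~ second)

-- Squaring with the Proxy-based (^) changes the dimension...
speedSq = speed ^ pos2

-- ...and sqrt, via the Root type family, recovers it.
speedBack = sqrt speedSq   -- same dimension as speed
```

Note that a dimension error in any of these lines (say, adding speed to speedSq) would be rejected at compile time, as in the error-message examples earlier.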
nroot :: (KnownTypeInt n, Floating a) => Proxy n -> Quantity d a -> Quantity (Root d n) a

Computes the nth root of a Quantity using **. The Root type family will prevent application of this operator where the result would have a fractional dimension or where n is zero. Because the root chosen impacts the Dimension of the result, it is necessary to supply a type-level representation of the root in the form of a Proxy to some TypeInt. Convenience values pos1, pos2, neg1, ... are supplied by the Numeric.NumType.DK.Integers module. The most commonly used ones are also reexported by Numeric.Units.Dimensional.Prelude. Also available in operator form, see ^/.

sqrt :: Floating a => Quantity d a -> Quantity (Root d Pos2) a

Computes the square root of a Quantity using **. The Root type family will prevent application where the supplied quantity does not have a square dimension. sqrt x == nroot pos2 x

cbrt :: Floating a => Quantity d a -> Quantity (Root d Pos3) a

Computes the cube root of a Quantity using **. The Root type family will prevent application where the supplied quantity does not have a cubic dimension. cbrt x == nroot pos3 x

Transcendental Functions

atan2 :: RealFloat a => Quantity d a -> Quantity d a -> Dimensionless a

The standard two argument arctangent function. Since it interprets its two arguments in comparison with one another, the input may have any dimension.

Operations on Collections

Here we define operators and functions to make working with homogeneous lists of dimensionals more convenient. We define two convenience operators for applying units to all elements of a functor (e.g. a list).

(*~~) :: (Functor f, Num a) => f a -> Unit m d a -> f (Quantity d a)   infixl 7

Applies *~ to all values in a functor.

(/~~) :: (Functor f, Fractional a) => f (Quantity d a) -> Unit m d a -> f a   infixl 7

Applies /~ to all values in a functor.
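A minimal sketch of the two collection operators (the names distances and raw are ours):

```haskell
{-# LANGUAGE NoImplicitPrelude #-}
import Numeric.Units.Dimensional.Prelude
import qualified Prelude

distances :: [Length Prelude.Double]
distances = [1.2, 3.4, 5.6] *~~ kilo meter   -- attach a unit to every element

raw :: [Prelude.Double]
raw = distances /~~ meter                    -- back to plain numbers, in metres
```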
sum :: (Num a, Foldable f) => f (Quantity d a) -> Quantity d a

The sum of all elements in a list.

mean :: (Fractional a, Foldable f) => f (Quantity d a) -> Quantity d a

The arithmetic mean of all elements in a list.

dimensionlessLength :: (Num a, Foldable f) => f (Dimensional v d a) -> Dimensionless a

The length of the foldable data structure as a Dimensionless. This can be useful for purposes of e.g. calculating averages.

Arguments:
:: (Fractional a, Integral b)
=> Quantity d a    The initial value.
-> Quantity d a    The final value.
-> b               The number of intermediate values. If less than one, no intermediate values will result.
-> [Quantity d a]  Returns a list of quantities between given bounds.

Dimension Synonyms

Using our Dimension data kind we define some type synonyms for convenience. We start with the base dimensions; others can be found in Numeric.Units.Dimensional.Quantities. DOne is the type-level dimension of dimensionless values.

Quantity Synonyms

Using the above type synonyms we can define type synonyms for quantities of particular physical dimensions. Again we limit ourselves to the base dimensions; others can be found in Numeric.Units.Dimensional.Quantities.

Constants

For convenience we define some constants for small integer values that often show up in formulae. We also throw in pi and tau for good measure.

_0 :: Num a => Quantity d a

The constant for zero is polymorphic, allowing it to express zero Length or Capacitance or Velocity etc, in addition to the Dimensionless value zero.

tau is twice pi. For background on tau see http://tauday.com/tau-manifesto (but also feel free to review http://www.thepimanifesto.com).

Constructing Units

siUnit :: forall d a. (KnownDimension d, Num a) => Unit NonMetric d a

A polymorphic Unit which can be used in place of the coherent SI base unit of any dimension. This allows polymorphic quantity creation and destruction without exposing the Dimensional constructor.
The unit one has dimension DOne and is the base unit of dimensionless values. As detailed in 7.10 "Values of quantities expressed simply as numbers: the unit one, symbol 1" of [1] the unit one generally does not appear in expressions. However, for us it is necessary to use one as we would any other unit to perform the "boxing" of dimensionless values.

mkUnitR :: Floating a => UnitName m -> ExactPi -> Unit m1 d a -> Unit m d a

Forms a new atomic Unit by specifying its UnitName and its definition as a multiple of another Unit. Use this variant when the scale factor of the resulting unit is irrational or Approximate. See mkUnitQ for when it is rational and mkUnitZ for when it is an integer.

Note that supplying zero as a defining quantity is invalid, as the library relies upon units forming a group under multiplication. Supplying negative defining quantities is allowed and handled gracefully, but is discouraged on the grounds that it may be unexpected by other readers.

mkUnitQ :: Fractional a => UnitName m -> Rational -> Unit m1 d a -> Unit m d a

Forms a new atomic Unit by specifying its UnitName and its definition as a multiple of another Unit. Use this variant when the scale factor of the resulting unit is rational. See mkUnitZ for when it is an integer and mkUnitR for the general case. For more information see mkUnitR.

mkUnitZ :: Num a => UnitName m -> Integer -> Unit m1 d a -> Unit m d a

Forms a new atomic Unit by specifying its UnitName and its definition as a multiple of another Unit. Use this variant when the scale factor of the resulting unit is an integer. See mkUnitQ for when it is rational and mkUnitR for the general case. For more information see mkUnitR.

name :: Unit m d a -> UnitName m

Extracts the UnitName of a Unit.

exactValue :: Unit m d a -> ExactPi

Extracts the exact value of a Unit, expressed in terms of the SI coherent derived unit (see siUnit) of the same Dimension.
Note that the actual value may in some cases be approximate, for example if the unit is defined by experiment.

weaken :: Unit m d a -> Unit NonMetric d a

Discards potentially unwanted type level information about a Unit.

strengthen :: Unit m d a -> Maybe (Unit Metric d a)

Attempts to convert a Unit which may or may not be Metric to one which is certainly Metric.

exactify :: Unit m d a -> Unit m d ExactPi

Forms the exact version of a Unit.

Pretty Printing

showIn :: (KnownDimension d, Show a, Fractional a) => Unit m d a -> Quantity d a -> String

Shows the value of a Quantity expressed in a specified Unit of the same Dimension.

On Functor, and Conversion Between Number Representations

We intentionally decline to provide a Functor instance for Dimensional because its use breaks the abstraction of physical dimensions. If you feel your work requires this instance, it is provided as an orphan in Numeric.Units.Dimensional.Functor.

class KnownVariant v where

A physical quantity or unit. We call this data type Dimensional to capture the notion that the units and quantities it represents have physical dimensions. The type variable a is the only non-phantom type variable and represents the numerical value of a quantity or the scale (w.r.t. SI units) of a unit. For SI units the scale will always be 1. For non-SI units the scale is the ratio of the unit to the SI unit with the same physical dimension. Since a is the only non-phantom type we were able to define Dimensional as a newtype, avoiding boxing at runtime.

Minimal complete definition: extractValue, extractName, injectValue, dmap

Associated Types

data Dimensional v :: Dimension -> * -> *

A dimensional value, either a Quantity or a Unit, parameterized by its Dimension and representation.

Methods

dmap :: (a1 -> a2) -> Dimensional v d a1 -> Dimensional v d a2

Maps over the underlying representation of a dimensional value.
The caller is responsible for ensuring that the supplied function respects the dimensional abstraction. This means that the function must preserve numerical values, or linearly scale them while preserving the origin.

(The KnownVariant instance method listings for the DQuantity and DUnit variants are elided here.)

changeRep :: (KnownVariant v, Real a, Fractional b) => Dimensional v d a -> Dimensional v d b

Convenient conversion between numerical types while retaining dimensional information.

changeRepApproximate :: (KnownVariant v, Floating b) => Dimensional v d ExactPi -> Dimensional v d b

Convenient conversion from exactly represented values while retaining dimensional information.
SDL2: Some functions crash the program

Hey guys,

So I've been messing with SDL this weekend, and found some great tutorials on YouTube, but I've been running into a bizarre problem that Google has been no help in solving. Some functions, for no good reason (there is a reason, but it has been obfuscated to oblivion), will crash my program. Here's my code so far:

#include <iostream>
#include "SDL.h"

#define WINDOW_WIDTH 800;
#define WINDOW_HEIGHT 600
#define FPS 60

void DrawChessBoard(SDL_Renderer *renderer)
{
    int row = 0, column = 0, x = 0;
    SDL_Rect rect, screen_size;

    /* Get the Size of drawing surface */
    SDL_RenderGetViewport(renderer, &screen_size);

    for ( ; row < 8; row++) {
        column = row % 2;
        x = column;
        for ( ; column < 4 + (row % 2); column++) {
            SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0xFF);
            rect.w = screen_size.w / 8;
            rect.h = screen_size.h / 8;
            rect.x = x * rect.w;
            rect.y = row * rect.h;
            x = x + 2;
            SDL_RenderFillRect(renderer, &rect);
        }
    }
}

int WinMain() // SDL needs WinMain, for some reason, not main
{
    // Fire up SDL
    SDL_Init(SDL_INIT_EVERYTHING);

    // Create the window
    SDL_Window *window = SDL_CreateWindow("You Geek!", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                          800, 600, SDL_WINDOW_RESIZABLE);
    if (window == NULL) {
        SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Window creation fail : %s\n", SDL_GetError());
    }

    // Declare variables
    SDL_Event *event;
    bool running = true;
    SDL_Renderer *renderer;
    SDL_Surface *surface = SDL_GetWindowSurface(window);
    renderer = SDL_CreateSoftwareRenderer(surface);
    if (!renderer) {
        SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Render creation for surface fail : %s\n", SDL_GetError());
        return 1;
    }

    // Main loop
    while (running) {
        // Draw stuff
        DrawChessBoard(renderer);    // This worked, ironically
        SDL_UpdateWindowSurface(window);

        /* Clear the rendering surface with the specified color */
        SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);
        SDL_RenderClear(renderer);

        // Handle events
        while (SDL_PollEvent(event)) {
            if (event->type == SDL_QUIT) {
                running = false;
                break;
            }
        }
    }

    // Clean up
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}

And some of the functions that cause the epic meltdown are:

SDL_GetTicks
SDL_GetWindowSurface (though that one has stopped after a few hours of tinkering, reason still unknown)

And here's what my debugger has to say:

#0 0x768ba9f2    RaiseException() (C:\Windows\SysWOW64\KernelBase.dll:??)
#1 0x6c81b14c    SDL_LogCritical() (C:\Media\Code\Projects\EXPERI~1\SDLTES~1\bin\Debug\SDL2.dll:??)
#2 0x406d1388    ?? () (??:??)
#3 ??    ?? () (??:??)

So apparently, whatever is happening has even stumped the debugger (lol). I've never seen a bunch of question marks like that, but I'm guessing they're bad lol. Seriously...?

Anyway, here are some other details:

My IDE is CodeBlocks with MinGW. My OS is Windows 10 Anniversary Edition, 64-bit. I'm using SDL 2.0, and I've had to use the 32-bit version of the DLL because that seems to be how MinGW wants it (or so said Google when I was battling a different epic crash problem - using the 32-bit DLL allowed my program to run correctly - could that be what's foobarring every other function I try?).

I've been all over the documentation, their Bugzilla page, and lots and lots of Googlefishing, but no dice. Is SDL 2.0 just unstable or something? lol idk but I'm seriously considering trying Unreal again (though getting that to startup has been a way bigger struggle lol - seems the big C++ frameworks like to pile on the challenge-factor for us Windows guys lol, but anywayz). Obviously, I'm stuck outta luck here, so I'd sure appreciate any info on what this hair-jerker of a bug could be. tyvm.
:)

Share on other sites

Googled with "sdl2 mingw raiseexception" and it says that you should add this line before any calls to SDL:

SDL_SetHint(SDL_HINT_WINDOWS_DISABLE_THREAD_NAMING, "1");

Share on other sites

You're polling for events and writing them into uninitialised memory. You're lucky it ever gets past the first frame. Turn your warning level up to maximum because the compiler should probably warn you about that. Then start passing the address of a real SDL_Event object into SDL_PollEvent. (Docs with example here.)

Share on other sites

Wow, thanks for the quick replies!

In the end, it was Kylotan's answer that solved the problem. What you said makes perfect sense, and so does the example in the docs. idk why I didn't see that, but I should've guessed it when the program returned 0xC00005 (may be off a zero - it's the exit code that roughly translates to "I can't dereference a null pointer, you geek!"). :D

But all joking aside, thanks for catching that! All those question marks in the stack trace, and no useful info from Google, had made it seem infinitely more complicated than it really was. Now I can get back to building stuff. :)

Share on other sites

returned 0xC00005 (may be off a zero - it's the exit code that roughly translates to "I can't dereference a null pointer, you geek!").

A nullptr crash is you getting lucky. Uninitialized variables don't default to zero, though in debug builds the runtime can do that for you. Also, warnings as errors is something I'd strongly recommend.
# Writing bold small caps with mathpazo package

The URW Palladio font (mathpazo package) does not provide bold small caps. To get round this issue, I want to make a macro that uses small caps normally and ordinary uppercase in bold text. I tried this code:

\documentclass{minimal}
\usepackage[sc,osf]{mathpazo}

% Use small caps normally except in a bold font: switch to uppercase instead.
% This macro does not work: the \ifx\f@series\bfdefault test always fails.
\makeatletter
\DeclareRobustCommand{\mytextsc}[1]{%
  \ifx\f@series\bfdefault
    \uppercase{#1}%
  \else
    {\scshape #1}%
  \fi
}
\makeatother

% Another macro, where the same test is ok here!?
\newcommand\normal{\fontseries{\ifx\f@series\bfdefault\then m \fi}\selectfont}

\begin{document}
% this works OK
This is a \mytextsc{small caps} text.
% this fails
\textbf{This is a bold \mytextsc{upper case} text.}
% here the normal macro works
\textbf{This is a bold \mytextsc{upper \normal case} text.}
\end{document}

For some strange reason, the test \ifx\f@series\bfdefault always fails in the \mytextsc macro, although it works well in the \normal macro. Any ideas how to correct the \mytextsc macro?

## migrated from stackoverflow.com Aug 1 '11 at 12:44

Try putting \makeatletter\show\f@series\show\bfdefault\makeatother within the body of the \textbf. What does LaTeX say? – Charles Stewart Apr 10 '10 at 12:34

That doesn't work; the LaTeX file doesn't compile with the \show macro. – user312728 Apr 12 '10 at 19:49

The \show command interrupts the compilation to show you the definition of commands; you must continue it by typing enter. – Ulrike Fischer Aug 1 '11 at 13:01

\bfdefault is a long macro, \f@series is not, so the two are different and the test always gives false. This also happens in your "working" \normal command, which always gives \fontseries{m}.
Expand the macros before the test:

\DeclareRobustCommand{\mytextsc}[1]{%
  \edef\@tempa{\f@series}\edef\@tempb{\bfdefault}%
  \ifx\@tempa\@tempb
    \uppercase{#1}%
  \else
    {\scshape #1}%
  \fi
}

-

I don't know why \bfdefault isn't expanding correctly, but you can define a new macro that does what you want:

\makeatletter
\def\boldseriesname{bx}
\DeclareRobustCommand{\mytextsc}[1]{%
  \ifx\f@series\boldseriesname\uppercase{#1}%
  \else{\scshape #1}%
  \fi}
\makeatother

-

There doesn't appear to be any bold small caps for mathpazo, according to Will Robertson. You should check the LaTeX warning to be sure.

-

Sorry, I edited the post to clarify my issue. It's not the missing font, but the macro. – user312728 Apr 9 '10 at 19:35
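For anyone who wants to compile the fix directly, here is the accepted \edef answer dropped into a complete document (a sketch: article class is used instead of minimal, and \makeatletter is added since \f@series and \@tempa contain @):

```latex
\documentclass{article}
\usepackage[sc,osf]{mathpazo}

\makeatletter
% Small caps normally; plain uppercase when the current series is the bold default.
\DeclareRobustCommand{\mytextsc}[1]{%
  \edef\@tempa{\f@series}\edef\@tempb{\bfdefault}%
  \ifx\@tempa\@tempb
    \uppercase{#1}%
  \else
    {\scshape #1}%
  \fi}
\makeatother

\begin{document}
This is a \mytextsc{small caps} text.

\textbf{This is a bold \mytextsc{upper case} text.}
\end{document}
```

The \edef step matters because \ifx compares macros including their \long prefix; expanding both sides into fresh temporary macros removes that mismatch before the comparison.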
# Thermal Spectral Function and Deconfinement Temperature in Bulk Holographic AdS/QCD with Back Reaction of Bulk Vacuum

## Abstract

Based on the IR-improved bulk holographic AdS/QCD model, which provides a consistent prediction for the mass spectra of resonance scalar, pseudoscalar, vector and axial-vector mesons, we investigate its finite temperature behavior. By analyzing the spectral function of mesons and fitting it with a Breit-Wigner form, we perform an analysis of the critical temperature of mesons. The back-reaction effects of the bulk vacuum are considered, and the thermal mass spectral function of resonance mesons is calculated based on the back-reaction improved action. A reasonable melting temperature of $T_c\simeq 150\pm7$ MeV is found, which is consistent with the recent results from lattice QCD simulations.

###### pacs: 12.38.Aw, 12.38.Lg, 11.15.Tk, 11.10.Wx

## I Introduction

The property of asymptotic freedom of quantum chromodynamics (QCD) (1) and the need to treat non-perturbative QCD led to the QCD string approach, which eventually provided part of the motivation for string theory. The development of string theory in turn motivated the AdS/CFT conjecture (2); (3); (4); (5), which provides an alternative tool to access the non-perturbative region of QCD; this is the so-called holographic QCD or AdS/QCD approach based on AdS/CFT. These models are not perfect, with problems rooted in AdS/CFT itself, as QCD is not a conformal field theory at low energy.

There are different holographic QCD models due to different realizations and objectives. They are mainly divided into two classes, namely top-down models and bottom-up models. The top-down models are constructed directly from string theory; popular examples are the D3/D7, D4/D6 and D4/D8 models (6); (7); (8).
The bottom-up models, such as the hard-wall model (9) and the soft-wall model (10), are instead constructed from the properties of QCD itself, from which the corresponding bulk gravity is determined. In the hard-wall model, a sharp cutoff of the fifth dimension, corresponding to the inverse of the QCD scale, is introduced to realize QCD confinement. It contains chiral symmetry breaking but fails to give the correct Regge behavior for the mass spectra of hadrons. To remedy this problem, the soft-wall model puts a dilaton term into the bulk action to replace the sharp IR cutoff of the hard-wall model. However, the resulting model cannot realize the chiral symmetry breaking phenomenon consistently. Several models have been constructed to incorporate these QCD behaviors (12); (13); (14); (15); (18); (16); (17); (19); (20); (21), and they have made numerical predictions for the mass spectra of light mesons, such as scalar, pseudoscalar, vector and axial-vector mesons. In particular, in the recent paper (15) we constructed an alternative model in which the metric remains conformally invariant and satisfies the Einstein equation, while the bulk mass and the bulk coupling of the quartic scalar interaction depend on the bulk coordinate z, so that the ultraviolet (UV) behavior of the model corresponds to AdS/CFT, while the infrared (IR) behavior is dictated by low energy QCD features compatible with the leading chiral dynamical model of spontaneous chiral symmetry breaking (22); (23). As a consequence, we arrived at a more consistent model with better predictions for the mass spectra of both the ground and resonance states of scalar, pseudoscalar, vector and axial-vector mesons.

The finite temperature effects of holographic QCD have attracted much attention. The finite temperature effects in hard-wall AdS/QCD were studied in (24). In (25); (26); (27); (28), the thermal spectra of glueballs and mesons in the soft-wall AdS/QCD model were investigated.
In (29), a soft-wall model for charmonium was built. The deconfinement temperature of soft-wall AdS/QCD models was calculated in (30). In Refs. (31); (32); (33), the spectra of the scalar glueball and light mesons were analyzed in the soft-wall AdS/QCD model, and the critical temperature at which the meson states dissociate was found to be quite low, far from the deconfinement transition. This indicates that the meson state dissociation would occur in the confined QCD phase, which is inconsistent with real QCD. To remedy this problem, we investigated in (34); (35) the finite temperature effects for the metric IR-improved soft-wall AdS/QCD models (13), where a higher critical temperature of meson dissociation was found. In those models, however, the metric is modified in the IR region, so the Hawking temperature of the black hole is not exactly defined, as the metric does not satisfy the Einstein equation. It is therefore interesting to analyze the critical temperature of the bulk holographic AdS/QCD model built recently in (15), where the model incorporates both chiral symmetry breaking and linear confinement with better predictions for the mass spectra of the meson states.

The paper is organized as follows. In Sec.II, after briefly reviewing the IR-improved bulk holographic AdS/QCD model constructed recently in (15), we extend it to an action at finite temperature. In Sec.III, we analyze the thermal spectral function and carry out calculations of the meson thermal mass spectra; the corresponding melting temperature is obtained. In Sec.IV, the back-reaction effect of the bulk vacuum is considered to yield an improved metric for the background gravity, and the thermal mass spectra are investigated in detail based on the back-reaction improved action; a reasonable melting temperature is obtained. Our conclusions and remarks are presented in the final section.
## II IR-Improved Bulk Holographic AdS/QCD Model with Finite Temperature

In this section, we investigate the finite temperature behavior of the IR-improved bulk holographic AdS/QCD model (15). Here the AdS black hole is chosen as the background to describe temperature in the boundary theory,

$$ds^2=\frac{R^2}{z^2}\left(f(z)\,dt^2-d\vec{x}^2-\frac{dz^2}{f(z)}\right), \qquad (1)$$

with

$$f(z)=1-\frac{z^4}{z_h^4}, \qquad (2)$$

where $z_h$ is the location of the outer horizon of the black hole. We set the AdS radius $R$ to unity in this paper. The Hawking temperature, which corresponds to the temperature of the boundary theory, is defined as

$$T_H=\frac{1}{4\pi}\left|\frac{df}{dz}\right|_{z\to z_h}=\frac{1}{\pi z_h}. \qquad (3)$$

The action at finite temperature is based on the IR-improved bulk holographic AdS/QCD model (15):

$$S=\int d^5x\,\sqrt{g}\,e^{-\Phi(z)}\,\mathrm{Tr}\left[|DX|^2-m_X^2|X|^2-\lambda_X|X|^4-\frac{1}{4g_5^2}\left(F_L^2+F_R^2\right)\right], \qquad (4)$$

with the covariant derivative and field strengths defined in the standard way. The gauge coupling is fixed to be $g_5^2=12\pi^2/N_c$ with the color number $N_c=3$ (9). The complex bulk field $X$ contains the scalar and pseudoscalar mesons, while the combinations of the chiral gauge fields $A_L$ and $A_R$ are identified with the vector and axial-vector mesons. The dilaton field, the bulk scalar mass and the quartic interaction coupling have been shown to take the following IR-modified forms (15):

$$\Phi(z)=\mu_g^2z^2-\frac{\lambda_g^4\,\mu_g^4z^4}{(1+\mu_g^2z^2)^3}, \qquad (5)$$

$$m_X^2(z)=-3-\frac{\lambda_1^2\,\mu_g^2z^2+\lambda_2^4\,\mu_g^4z^4}{1+\mu_g^2z^2}+\tilde m_X^2(z), \qquad (6)$$

$$\lambda_X(z)=\frac{\mu_g^2z^2}{1+\mu_g^2z^2}\,\lambda. \qquad (7)$$

The expectation value of the bulk scalar field takes a z-dependent form for the two-flavor case:

$$\langle X\rangle=\frac{1}{2}v(z)\begin{pmatrix}1&0\\0&1\end{pmatrix}. \qquad (8)$$

The bulk vacuum expectation value (bVEV), with proper IR and UV boundary conditions, has been taken to have the simple form (15)

$$v(z)=\frac{Az+Bz^3}{1+Cz^2}, \qquad (9)$$

with

$$A=m_q\zeta,\qquad B=\frac{\sigma}{\zeta}+m_q\zeta C,\qquad C=\mu_c^2/\zeta, \qquad (10)$$

and the coupling constant is related to the bulk vacuum expectation value via the equation of motion:

$$v_q\equiv\sqrt{\frac{(2\mu_g)^2}{\lambda}}=\frac{B}{C}=\frac{\sigma}{\mu_c^2}+m_q\zeta. \qquad (11)$$

The five parameters involved have been fixed from the low energy parameters of mesons (15); their values are listed in Table 1.
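The second equality in Eq.(3) is a one-line evaluation with the blackening factor of Eq.(2); spelling it out (a small check, not written out in the text):

```latex
\left|\frac{df}{dz}\right|_{z\to z_h}
  = \left|-\frac{4z^{3}}{z_h^{4}}\right|_{z=z_h}
  = \frac{4}{z_h}
\quad\Longrightarrow\quad
T_H=\frac{1}{4\pi}\cdot\frac{4}{z_h}=\frac{1}{\pi z_h}.
```

This makes explicit why higher temperature corresponds to a smaller $z_h$, i.e. a horizon that moves toward the boundary.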
## III Thermal Spectral Function

The bulk scalar field can be decomposed as $X=\left(\tfrac{1}{2}v(z)+S\right)e^{2i\pi}$, where $S$ is the scalar meson field and $\pi$ the pseudo-scalar field. The chiral gauge fields can be combined into the vector field and axial-vector field as

$$V_M^a\equiv\frac{1}{2}\left(A_{L,M}^a+A_{R,M}^a\right)\qquad\text{and}\qquad A_M^a\equiv\frac{1}{2}\left(A_{L,M}^a-A_{R,M}^a\right). \qquad (12)$$

The equations of motion for the meson fields in momentum space, obtained by Fourier transformation, are:

$$\mathrm{V}:\quad V_x''(z)+\left(\frac{a'(z)}{a(z)}+\frac{f'(z)}{f(z)}-\Phi'(z)\right)V_x'(z)+\frac{\omega^2}{f^2(z)}V_x(z)=0, \qquad (13)$$

$$\mathrm{AV}:\quad A_x''(z)+\left(\frac{a'(z)}{a(z)}+\frac{f'(z)}{f(z)}-\Phi'(z)\right)A_x'(z)+\frac{\omega^2}{f^2(z)}A_x(z)+\frac{g_5^2v^2(z)}{z^2f(z)}A_x(z)=0, \qquad (14)$$

$$\mathrm{S}:\quad S''(z)+S'(z)\left(\frac{3a'(z)}{a(z)}+\frac{f'(z)}{f(z)}-\Phi'(z)\right)+S(z)\left(\frac{\omega^2}{f^2(z)}-\frac{a^2(z)\,m_X^2(z)}{f(z)}-\frac{3\lambda_X(z)\,a^2(z)\,v^2(z)}{2f(z)}\right)=0, \qquad (15)$$

$$\mathrm{PS}:\quad \pi''(z)+\pi'(z)\left(\frac{3a'(z)}{a(z)}+\frac{f'(z)}{f(z)}+\frac{2v'(z)}{v(z)}-\Phi'(z)\right)+\frac{\omega^2}{f^2(z)}\pi(z)=0. \qquad (16)$$

Note that as the temperature increases, the horizon of the black hole moves from infinity toward the boundary. The solutions of the equations of motion thus fall into the black hole before they vanish, so one cannot use the method of finding eigenmodes. Instead, we consider the spectral function, which is the imaginary part of the retarded Green's function. In the above equations we have set the three-momentum to zero, $\vec q=0$, which simplifies the retarded Green's function. For the equation of the pseudo-scalar field, we have ignored the mixing between the axial-vector and pseudo-scalar fields for simplicity, since it does not affect the finite temperature behavior discussed in (35).

Let us first check the boundary behavior of the solutions. Near the UV boundary, one can extract the asymptotic solutions of the four equations Eq.(13-16). For convenience, we replace the radial coordinate $z$ by the dimensionless variable $u=z/z_h$.
The two linearly independent solutions are found to be:

$$\mathrm{V}:\quad V_1\to u\,Y_1(u z_h\omega),\qquad V_2\to u\,J_1(u z_h\omega), \qquad (17)$$

$$\mathrm{AV}:\quad A_1,\ A_2\ \text{(the analogous Bessel-function combination)}, \qquad (18)$$

$$\mathrm{S}:\quad S_1\to u^2 J_1\!\big(u z_h\sqrt{2\mu_g^2+\omega^2}\big),\qquad S_2\to u^2 Y_1\!\big(u z_h\sqrt{2\mu_g^2+\omega^2}\big), \qquad (19)$$

$$\mathrm{PS}:\quad \pi_1\to u\,J_1(u\omega z_h),\qquad \pi_2\to u\,Y_1(u\omega z_h). \qquad (20)$$

Here $J_1$ and $Y_1$ are the Bessel functions of the first and second kind, respectively. As discussed in (36), in Minkowski space-time the choice of the in-falling boundary condition at the horizon selects the retarded Green's function:

$$K\to(1-u)^{-i z_h\omega/4}. \qquad (21)$$

The solution of each equation of motion can be expressed as a combination of the two independent asymptotic solutions $K_1$ and $K_2$:

$$K(u)=A(\omega,q)\,K_1(\omega,q,u)+B(\omega,q)\,K_2(\omega,q,u)\;\longrightarrow\;(1-u)^{-i z_h\omega/4}, \qquad (22)$$

where the coefficients $A(\omega,q)$ and $B(\omega,q)$ are fixed by the IR in-falling boundary condition at the horizon. The retarded Green's function can then be obtained from the dual bulk fields. As an illustration, for the scalar field one writes the on-shell action, which reduces to the surface term

$$S=\int\frac{d^4p}{(2\pi)^4}\,\frac{e^{-\Phi(z)}f(z)\,a(z)^3}{2}\,S(p,z)\,\partial_zS(p,z)\,\Big|_{z=0}^{z=z_h}. \qquad (23)$$

Following the prescription in (36), after substituting Eq.(22) into the surface term of the on-shell action, one finds that the spectral function, given by the imaginary part of the two-point retarded Green's function, is proportional to the imaginary part of $B/A$:

$$\rho(\omega,q)=-\frac{1}{\pi}\,\mathrm{Im}\,G(\omega,q)\,\theta(\omega^2-q^2)\;\propto\;\mathrm{Im}\,\frac{B(\omega,q)}{A(\omega,q)}. \qquad (24)$$

The numerical results for the spectral functions of the scalar, pseudo-scalar, vector and axial-vector mesons are shown in Fig.1. In the low temperature region the peaks, which correspond to the poles of the Green's function, represent resonance mesons with masses coinciding with those obtained at zero temperature (15). As the temperature increases, the meson states become unstable: the peaks shift towards smaller values and their widths broaden. Quantitatively, more information can be obtained by fitting the spectral function with a Breit-Wigner form:

$$\frac{a\,\omega^b}{(\omega^2-m^2)^2+\Gamma^2}+P(\omega^2), \qquad (25)$$
where $m$ and $\Gamma$ are the location and width of the peak, respectively, and $P(\omega^2)$ represents a continuum contribution. The melting temperature, or critical temperature, can be defined from the Breit-Wigner form: when the width of the peak becomes larger than its height, no peak can be distinguished any more. The condition reads

$$h=\frac{a\,\omega^b}{(\omega^2-m^2)^2+\Gamma^2}\bigg|_{\omega\to m},\qquad h<\Gamma. \qquad (26)$$

Note that this definition of the critical temperature is somewhat vague and subjective. In this paper we therefore quote a range of critical temperatures based on this condition. The ranges for the scalar, pseudo-scalar, vector and axial-vector mesons are shown in Table 2. The results imply that the mesonic quasiparticle states are dissolved around the melting temperatures obtained above. It should be noted that the bulk coordinate z plays the role of the running energy scale of the boundary theory. As the Hawking temperature increases, the horizon $z_h$ decreases and the allowed range of the bulk coordinate, $0<z<z_h$, shrinks. Such small values of z drive the bVEV, and with it the condensation term it encodes, towards zero. It can be understood that the vanishing bVEV, which corresponds to the chiral condensation, plays an important role in the dissolving of the mesonic bound states. These critical behaviors could thus be a sign of chiral symmetry restoration.

From the Breit-Wigner fit we can determine quantitatively the relation between the meson masses and the temperature. The results are shown in Fig.2. As the temperature increases, the masses of the mesons decrease linearly in the low temperature region. Around the critical temperature, the spectral function becomes so flat that the numerical fit carries a large ambiguity. It is believed that the masses of the scalar and pseudo-scalar mesons increase slightly around the critical temperature, though we cannot resolve this here because of the large ambiguity.
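The criterion (26) can be made fully explicit (a one-line consequence of Eq.(25), not spelled out in the text): at the peak, the Breit-Wigner denominator collapses to $\Gamma^2$, so

```latex
h \;=\; \left.\frac{a\,\omega^{b}}{(\omega^{2}-m^{2})^{2}+\Gamma^{2}}\right|_{\omega=m}
  \;=\; \frac{a\,m^{b}}{\Gamma^{2}},
\qquad
h<\Gamma \;\Longleftrightarrow\; a\,m^{b}<\Gamma^{3},
```

i.e. the resonance is counted as melted once the cube of the width exceeds $a\,m^{b}$ extracted from the fit.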
For the vector and axial-vector mesons, the decrease of the mass in the medium agrees with other analyses (37); (38); (39). A more precise way to study the temperature dependence would be to calculate the quasinormal modes of the mesons; we leave this for future study.

## IV Back-Reaction Effects of Bulk Vacuum

In this section, we investigate the back-reaction effects of the bulk vacuum, which includes the quark mass and condensate. In (40), a fully back-reacted holographic QCD model has been constructed; it was found that the back reaction has only small effects on the meson spectra. It is interesting to check its influence on the mass spectra at finite temperature. Let us begin with the following 5-dimensional action:

$$S=\int d^5x\,\sqrt{\hat g}\left(-\hat R+\mathrm{Tr}\left[|DX|^2+V(X)\right]\right). \qquad (27)$$

For simplicity we do not include the dilaton field in this action. $\hat R$ is the five-dimensional Ricci scalar, and $X$ is the bulk scalar field of Eq.(4) with the bulk vacuum expectation value $v(z)$, which is related to the quark mass and condensate through Eq.(9) and Eq.(10). After taking the trace, the action is rewritten as

$$S=\int d^5x\,\sqrt{\hat g}\left(-\hat R+\frac{1}{2}\partial_Mv\,\partial^Mv+V(v)\right). \qquad (28)$$

To obtain the black hole solution, we consider the deformed AdS black-hole background

$$ds^2=\frac{e^{2A(z)}}{z^2}\left(f(z)\,dt^2-d\vec x^2-\frac{dz^2}{f(z)}\right). \qquad (29)$$

The equations of motion are

$$\frac{1}{2}\hat g_{MN}\left(-\hat R+\frac{1}{2}\partial_Pv\,\partial^Pv+V(v)\right)+\hat R_{MN}-\frac{1}{2}\partial_Mv\,\partial_Nv=0, \qquad (30)$$

$$\frac{\partial V(v)}{\partial v}-\frac{1}{\sqrt{\hat g}}\,\partial_M\left(\sqrt{\hat g}\,\hat g^{MN}\partial_Nv\right)=0. \qquad (31)$$

The independent components of the gravitational field equations are, respectively,

$$A''+A'\left(\frac{f'}{2f}-\frac{2}{z}\right)+A'^2+\frac{2}{z^2}+\frac{v'^2}{12}-\frac{f'}{2zf}-\frac{e^{2A}V(v)}{6z^2f}=0, \qquad (32)$$

$$f''+f'\left(6A'-\frac{6}{z}\right)+f\left(6A''+6A'^2+\frac{1}{2}v'^2+\frac{12}{z^2}-\frac{12A'}{z}\right)-\frac{e^{2A}V(v)}{z^2}=0, \qquad (33)$$

$$A'^2+A'\left(\frac{f'}{4f}-\frac{2}{z}\right)+\frac{1}{z^2}-\frac{e^{2A}V(v)+3zf'}{12z^2f}-\frac{v'^2}{24}=0. \qquad (34)$$

From Eq.(33) and Eq.(34), we can obtain the equation for the warp factor:

$$A''-A'^2+\frac{2}{z}A'+\frac{1}{6}v'^2=0. \qquad (35)$$

This equation cannot be solved analytically with the bVEV given in Eq.(9). We therefore solve it numerically, using the UV boundary condition that $A$ and its derivative vanish at the boundary.
From Eq.(32) and Eq.(33), one can solve for $f(z)$ analytically:

$$f(z)=C_1+C_2\int_0^z e^{-3A(x)}x^3\,dx, \qquad (36)$$

where $C_1$ and $C_2$ are integration constants. Near the boundary $z\to0$, we require the metric to be asymptotically AdS:

$$f(0)=1. \qquad (37)$$

Near the horizon $z=z_h$, we require

$$f(z_h)=0. \qquad (38)$$

The solution for $f(z)$ can then be expressed as

$$f(z)=1-\frac{\int_0^z x^3e^{-3A(x)}\,dx}{\int_0^{z_h} x^3e^{-3A(x)}\,dx}. \qquad (39)$$

Expanding $f$ at the UV boundary, one has

$$f(z\to0)=1-\frac{z^4}{4\int_0^{z_h}e^{-3A(t)}t^3\,dt}+\cdots. \qquad (40)$$

Comparing with the AdS black-hole solution, one sees that the back-reaction correction contributes to the higher order terms in $z$. The numerical results for $A(z)$ and $f(z)$ are presented in Fig.3. It is straightforward to obtain the Hawking temperature:

$$T_H=-\frac{1}{4\pi}\frac{\partial f}{\partial z}\bigg|_{z\to z_h}=\frac{z_h^3\,e^{-3A(z_h)}}{4\pi\int_0^{z_h}e^{-3A(x)}x^3\,dx}. \qquad (41)$$

We plot the temperature versus the horizon in Fig.4. The monotonic behavior indicates that this black hole solution is stable.

With the above analysis, we are now in a position to investigate the finite temperature behavior of the mesons after including the back-reaction effects of the bulk vacuum. The action has the same form as Eq.(4) except for the background metric, which is replaced by the back-reaction improved one, $\hat g$:

$$S=\int d^5x\,\sqrt{\hat g}\,e^{-\Phi(z)}\,\mathrm{Tr}\left[|DX|^2-m_X^2|X|^2-\lambda_X|X|^4-\frac{1}{4g_5^2}\left(F_L^2+F_R^2\right)\right]. \qquad (42)$$

Repeating the calculation of Sec.III, we obtain the thermal spectral functions of the mesons in the back-reaction improved gravity background. The numerical results are shown in Fig.5. In the low temperature region, the locations of the peaks are nearly the same as those without back-reaction effects in Sec.III, in agreement with the conclusion of (40). The warp factor shown in Fig.3 can be well fitted by a simple form. In the low temperature region, the back-reaction correction from the quark mass and condensate has very little effect on the mass spectra, while in the high temperature region the melting temperatures are increased.
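As a quick consistency check (ours, not stated in the text): setting $A(z)=0$ in Eqs.(39) and (41) recovers the pure AdS black hole of Sec.II,

```latex
\int_0^z x^{3}\,dx=\frac{z^{4}}{4}
\;\Rightarrow\;
f(z)=1-\frac{z^{4}}{z_h^{4}},
\qquad
T_H=\frac{z_h^{3}}{4\pi\,(z_h^{4}/4)}=\frac{1}{\pi z_h},
```

in agreement with Eqs.(2) and (3), so the back-reacted solution deforms the AdS black hole smoothly as the bVEV is switched on.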
By fitting the spectral function with the Breit-Wigner form in Eq.(25), we can obtain the critical temperatures including the back-reaction effects of the bulk vacuum. The results are presented in Table 3. It should be pointed out that in the above calculation the dilaton field in the action Eq.(28) is still taken as a background field. The back-reaction effects of the bulk vacuum, which includes the quark mass and quark condensate, have increased the melting temperature to around

$$T_c\simeq150\pm7\ \mathrm{MeV}. \qquad (43)$$

Such a result is consistent with the ones obtained from lattice QCD simulations: the chiral and deconfinement critical temperatures were determined in (41); the chiral transition temperature of two massless flavors was obtained in (42); and for physical masses of three quark flavors the chiral transition temperature was also determined (43).

## V Conclusions and Remarks

We have investigated the finite temperature behavior of the IR-improved bulk holographic AdS/QCD model built recently in (15). The spectral functions of the mesons have been analyzed following the prescription in (36). By fitting the spectral function with a Breit-Wigner form, the critical temperatures of the mesons have been obtained. In the low temperature region, the peaks which correspond to the poles of the Green's function are consistent with the masses calculated in the zero temperature case (15). We would like to point out that the criterion for the critical temperature involves some vagueness: in obtaining the critical temperature, we have to quote a range of melting temperatures based on the condition between the height ($h$) and width ($\Gamma$) of the peak, $h<\Gamma$.

In this paper, we have considered the back-reaction effects of the bulk vacuum and obtained an improved metric for the background gravity. The thermal mass spectral functions of the mesons have been calculated based on the back-reaction improved action, which increases the critical temperature.
A reasonable melting temperature has been found to be $T_c\simeq150\pm7$ MeV, which is consistent with the recent results obtained from lattice QCD simulations.

## Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (NSFC) under Grants No.10975170, No.10905084, No.10821504; and the Project of Knowledge Innovation Program (PKIP) of the Chinese Academy of Science.

### References

1. D. J. Gross and F. Wilczek, Phys. Rev. Lett. 30, 1343 (1973); H. D. Politzer, Phys. Rev. Lett. 30, 1346 (1973). 2. J. M. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998) [hep-th/9711200]. 3. S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Phys. Lett. B 428, 105 (1998) [hep-th/9802109]. 4. E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998) [hep-th/9802150]. 5. J. Polchinski and M. J. Strassler, Phys. Rev. Lett. 88, 031601 (2002) [hep-th/0109174]. 6. M. Kruczenski, D. Mateos, R. C. Myers and D. J. Winters, JHEP 0405, 041 (2004) [hep-th/0311270]. 7. T. Sakai and S. Sugimoto, Prog. Theor. Phys. 113, 843 (2005) [hep-th/0412141]. 8. T. Sakai and S. Sugimoto, Prog. Theor. Phys. 114, 1083 (2005) [hep-th/0507073]. 9. J. Erlich, E. Katz, D. T. Son and M. A. Stephanov, Phys. Rev. Lett. 95, 261602 (2005) [hep-ph/0501128]. 10. A. Karch, E. Katz, D. T. Son and M. A. Stephanov, Phys. Rev. D 74, 015005 (2006) [hep-ph/0602229]. 11. P. Colangelo, F. De Fazio, F. Giannuzzi, F. Jugeau and S. Nicotri, Phys. Rev. D 78, 055009 (2008) [arXiv:0807.1054 [hep-ph]]. 12. T. Gherghetta, J. I. Kapusta and T. M. Kelley, Phys. Rev. D 79, 076003 (2009) [arXiv:0902.1998 [hep-ph]]. 13. Y. Q. Sui, Y. L. Wu, Z. F. Xie and Y. B. Yang, Phys. Rev. D 81, 014024 (2010) [arXiv:0909.3887 [hep-ph]]. 14. Y. Q. Sui, Y. L. Wu and Y. B. Yang, Phys. Rev. D 83, 065030 (2011) [arXiv:1012.3518 [hep-ph]]. 15. L.-X. Cui, Z. Fang and Y.-L. Wu, arXiv:1310.6487 [hep-ph]. 16. A. Vega and I. Schmidt, Phys. Rev. D 82, 115023 (2010) [arXiv:1005.3000 [hep-ph]]. 17. A. Vega and I. Schmidt, Phys. Rev.
D84 (2011) 017701 [e-Print: arXiv:1104.4365 ] 18. D. Li, M. Huang and Q. -S. Yan, arXiv:1206.2824 [hep-th]. 19. S. J. Brodsky and G. F. de Teramond, Phys. Rev. Lett. 96 (2006) 201601 [arXiv:hep-ph/0602252]. 20. S. J. Brodsky and G. F. de Teramond, Phys. Rev. D 77, 056007 (2008) [arXiv:0707.3859 [hep-ph]]. 21. S. J. Brodsky and G. F. de Teramond, arXiv:0909.3899 [hep-ph]; G. F. de Teramond and S. J. Brodsky, arXiv:0909.3900 [hep-ph] and references therein. 22. Y. Nambu, Phys. Rev. Lett. 4 (1960) 380. 23. Y. B. Dai and Y. L. Wu, Eur. Phys. J. C 39 (2005) S1 [arXiv:hep-ph/0304075]. 24. K. Ghoroku, M. Yahiro, Phys. Rev. D73, 125010 (2006). [hep-ph/0512289]. 25. M. Fujita, K. Fukushima, T. Misumi and M. Murata, Phys. Rev. D 80, 035001 (2009) [arXiv:0903.2316 [hep-ph]]. 26. M. Fujita, T. Kikuchi, K. Fukushima, T. Misumi and M. Murata, Phys. Rev. D 81, 065024 (2010) [arXiv:0911.2298 [hep-ph]]. 27. A. S. Miranda, C. A. Ballon Bayona, H. Boschi-Filho and N. R. F. Braga, JHEP 0911, 119 (2009) [arXiv:0909.1790 [hep-th]]. 28. P. Colangelo, F. Giannuzzi and S. Nicotri, Phys. Rev. D 80, 094019 (2009) [arXiv:0909.1534 [hep-ph]]. 29. H. R. Grigoryan, P. M. Hohler and M. A. Stephanov, Phys. Rev. D 82, 026005 (2010) [arXiv:1003.1138 [hep-ph]]. 30. C. P. Herzog, Phys. Rev. Lett. 98, 091601 (2007) [hep-th/0608151]. 31. A. S. Miranda, C. A. Ballon Bayona, H. Boschi-Filho and N. R. F. Braga, JHEP 0911, 119 (2009) [arXiv:0909.1790 [hep-th]]. 32. P. Colangelo, F. De Fazio, F. Jugeau, S. Nicotri, Phys. Lett. B652, 73-78 (2007). [hep-ph/0703316]. 33. P. Colangelo, F. Giannuzzi, S. Nicotri, Phys. Rev. D80, 094019 (2009). [arXiv:0909.1534 [hep-ph]]. 34. L. -X. Cui, S. Takeuchi and Y. -L. Wu, JHEP 1204, 144 (2012) [arXiv:1112.5923 [hep-ph]]. 35. L. -X. Cui and Y. -L. Wu, Mod. Phys. Lett. A, Vol. 28, No. 34, 1350132 (2013) [arXiv:1302.4828 [hep-ph]]. 36. D. T. Son and A. O. Starinets, JHEP 0209, 042 (2002) [hep-th/0205051]. 37. E. Santini, M. D. Cozma, A. Faessler, C. Fuchs, M. I. 
Krivoruchenko and B. Martemyanov, Phys. Rev. C 78, 034910 (2008) [arXiv:0804.3702 [nucl-th]]. 38. M. Post, S. Leupold and U. Mosel, Nucl. Phys. A 741, 81 (2004) [nucl-th/0309085]. 39. A. K. Dutt-Mazumder, R. Hofmann and M. Pospelov, Phys. Rev. C 63, 015204 (2001) [hep-ph/0005100]. 40. J. P. Shock, F. Wu, Y.-L. Wu and Z.-F. Xie, JHEP 0703, 064 (2007) [hep-ph/0611227]. 41. S. Borsanyi et al. [Wuppertal-Budapest Collaboration], JHEP 1009, 073 (2010) [arXiv:1005.3508 [hep-lat]]. 42. A. Bazavov, T. Bhattacharya, M. Cheng, C. DeTar, H. T. Ding, S. Gottlieb, R. Gupta and P. Hegde et al., Phys. Rev. D 85, 054503 (2012) [arXiv:1111.1710 [hep-lat]]. 43. T. Bhattacharya, M. I. Buchoff, N. H. Christ, H.-T. Ding, R. Gupta, C. Jung, F. Karsch, Z. J. Lin, R. D. Mawhinney, G. McGlynn et al., arXiv:1402.5175 [hep-lat].
# If irrational numbers are uncountable, then why did I find this? [closed]

I understand that irrational numbers are uncountable. I've seen the proof and it makes perfect sense. However, I came up with this (most likely false) proof that says that they're countable. Chances are, I made a mistake and the proof doesn't mean anything, but I just want to be sure. Here is the proof:

You can't find a solid chunk (range of numbers) that does not contain a rational number. Which means that irrational numbers are just points on the number line, not lines. Which means you just need to name all the points to count the irrationals.

What is the mistake here?

• How do you propose to name them? You'll run out of names long before you run out of numbers. That's the essence of uncountability. (Note that what I said is quite imprecise, but it should make you think.) – Deepak Dec 31 '18 at 3:39
• There are uncountable sets which do not contain any continuous segment of the number line. Another famous example is the Cantor set: en.wikipedia.org/wiki/Cantor_set – Noble Mushtak Dec 31 '18 at 3:40
• Geometric intuition does not give much help when thinking about cardinality. Both the irrationals and the rationals "look like" sprinkled dust, but somehow the irrationals have "more dust" than the rationals. What you need to do is start thinking precisely about functions and their behavior. How exactly do you propose to build a surjection from the naturals to the irrationals, just by using the fact that the irrationals are totally disconnected? Along the same lines, a seemingly-promising observation is that the rationals are dense, so they should be "equivalent to" the irrationals somehow;
• but that's also wrong. It's totally plausible at first, but all the density of the rationals tells you is that the set of irrationals can be approximated by (in some sense) a countable set, and you can't actually close the gap and get true countability.
– Noah Schweber Dec 31 '18 at 3:47
• It is a common misconception that every point of a totally disconnected space is an isolated point. Had this been true, then your proof could actually be made rigorous. However, counterexamples are abundant. – Shalop Dec 31 '18 at 22:23

In mathematical language, the claim that you can't find a solid chunk of the line (meaning an interval) that does not contain a rational is correct; it is usually stated as: the rationals are dense in the line. I think your intuition is then that there is one irrational between "neighboring" rationals, which is the sense of saying that the irrationals are points on the line instead of lines. This is the problem. Between any two rationals there are uncountably many irrationals and countably many rationals. When you think of dense sets, you need to banish the thought of "neighboring elements" from your mind. They do not exist, because between any two there is another.

The difference between the rationals and the reals as a field is that the reals are complete. If you think of the interval $$[0,1]$$ you can make a binary expansion of a number. The expansion will converge, but without the completeness axiom you don't know that it will converge to a number in your field. A classic example is $$\sqrt 2 -1$$. If you form its binary expansion, the approximations with increasing numbers of bits form a Cauchy sequence. The reals have an axiom that says it converges to a real; the rationals do not.

Now the classic Cantor proof applies to show the reals are not countable.

Let's go through this sentence-by-sentence.

You can't find a solid chunk (range of numbers) that does not contain a rational number.

This is true: The rational numbers are dense in the reals, so every segment $$[a, b]$$ in the real numbers contains some rational number.

Which means that irrational numbers are just points on the number line, not lines.
This is half-true: Because of the first statement, there is no segment $$[a, b]$$ which is a subset of the irrationals, since every such segment contains at least one rational number. However, these irrational numbers are not isolated points: in any segment $$[a, b]$$ which contains an irrational number $$q$$, there will always be another irrational number $$p\neq q$$ which is also in the segment $$[a, b]$$. Thus, it is wrong to think of the irrational numbers as just isolated points on the number line, because they aren't: even though there is no segment consisting entirely of irrational numbers, there are still irrational numbers which are arbitrarily close to each other.

Now, you could say the same thing about the rationals: even though there is no segment consisting entirely of rational numbers, there are still rational numbers which are arbitrarily close to each other. Thus, whether or not a set can be thought of as a bunch of isolated points doesn't really have anything to do with whether the set is countable. This gets us to the third statement:

Which means you just need to name all the points to count the irrationals.

This statement is false, and does not follow from the above two statements, because of the argument presented above. Just because there is no continuous segment which is a subset of the irrationals does not automatically mean you can make a list of all the irrational numbers by just listing all of the points. This sentence is pretty vague and does not present any way to actually create a bijection between the set of natural numbers and the set of irrational numbers, which is what it really means to be a "countable set." Thus, this proof is invalid because of this vague third statement, which does not actually construct a clear way to make a list of all the irrational numbers.

This isn't really an answer, but it might help...
You seem to be thinking of the rationals and irrationals interleaved 1-to-1, so we could "name" the irrationals using the rational to their left (for irrational $$j$$, the rational just less than $$j$$). What happens when we do this for $$\sqrt{2} = 1.414\dots$$? With which rational do we name it? We might try to sneak up on the name through $$1$$, $$\frac{14}{10}$$, $$\frac{141}{100}$$, $$\frac{1414}{1000}$$, $$\dots$$, but this never settles down to a single fraction. (That it does not settle down to a single fraction is what it means when we say $$\sqrt{2}$$ is irrational.) Between any of these rational numbers and $$\sqrt{2}$$, there are infinitely many rationals and infinitely many irrationals, and there is no way to pick the rational with which to "name" $$\sqrt{2}$$.

One way to see where you are misled is the phrase "Between any of these rational numbers and $$\sqrt{2}$$, there are infinitely many rationals and infinitely many irrationals". This says there are infinitely many rationals and infinitely many irrationals in every interval (chunk of the real numbers), but it cannot possibly tell you if there are more (with respect to cardinality) of one than the other -- "infinitely many" is not precise enough to do so.

"You can't find a solid chunk (range of numbers) that does not contain a rational number." - True, this is just the statement that the rationals are dense in the reals (and so are the irrationals).

"Which means that irrational numbers are just points on the number line, not lines." - True

"Which means you just need to name all the points to count the irrationals." - Basically saying "we need to give a name to everything in this bucket". As @Deepak noted, this is the hard part. Note that whatever (finite!) "name" you come up with, it can be mapped to the naturals via e.g. dictionary sorting. This is basically the statement that the computable irrationals are still countable - whether "sqrt of pi" or "8th root of polynomial: ___".
Beyond these lies an uncountable sea of irrationals that cannot be computed/described/etc. by any finite means. How do you plan to name these?

Q: "What is the mistake here?"

A: I believe the mistake is assuming that there is a countable number of segments. In your 'proof', you say each segment (which contained only rational numbers) is terminated at an irrational number. Since there are an uncountable number of segments, there are an uncountable number of irrational numbers (each one being at the end of its 'companion' segment).
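The "finite names map to the naturals via dictionary sorting" remark above can be sketched directly: list every finite string over a fixed alphabet in shortlex order (by length, then alphabetically), and each name gets a unique natural-number index. This enumerates names, not the irrationals themselves; the function name below is my own.

```python
from itertools import count, islice, product

ALPHABET = "ab"  # any finite alphabet works; a tiny one keeps the output short

def names():
    """Yield every finite string over ALPHABET in shortlex order."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# Each finite name gets a unique index 0, 1, 2, ... -- so the set of all
# finite names over a fixed alphabet is countable.
first = list(islice(names(), 6))
print(first)  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Since only countably many finite descriptions exist, any naming scheme built from finite descriptions reaches only countably many reals; the uncountably many remaining irrationals are exactly the ones no finite name can single out, which is the answer's point.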
# Torchlight Infinite Fate Card List

List of Torchlight Infinite Fluorescent Memory Shards.

| Name | Description |
| --- | --- |
| Miner's Surprise | Collect 5 to exchange for random Ember x3 from the Spacetime Wanderer |
| Secret of the Glacial Abyss | Collect 3 to exchange for random Flame x5 from the Spacetime Wanderer |
| Crow's Wail | Collect 4 to exchange for Ominous Ember or Restless Ember x1 from the Spacetime Wanderer |
| Maud's Trick | Collect 4 to exchange for Flame Elementium x1 from the Spacetime Wanderer |
| Burning Furnace | Collect 3 to exchange for Flame Sand x2 from the Spacetime Wanderer |
| Sanguine Moon | Collect 7 to exchange for 3-6 random Ominous Embers or Restless Embers from the Spacetime Wanderer |
| Lost Secrets | Collect 7 to exchange for Flame Elementium x9 from the Spacetime Wanderer |
| A Terrible Price | Collect 3 to exchange for 5-10 random Fossils from the Spacetime Wanderer |
| Windfall | Collect 6 to exchange for random Fossil x3 from the Spacetime Wanderer |
| (name not captured) | Collect 4 to exchange for random Advanced Ember x5 from the Spacetime Wanderer |
| (name not captured) | Collect 1 to exchange for random Map Material x1 from the Spacetime Wanderer |
| Cartographer's Bag | Collect 2 to exchange for a random Map from the Spacetime Wanderer |
| Cartographer's Journal | Collect 3 to exchange for the rarest Map Material x3 from the Spacetime Wanderer |
| Useless Prophecy | Collect 3 to exchange for a random Fluorescent Memory Fragment from the Spacetime Wanderer |
| Forbidden Prophecy | Collect 1 to exchange for random Fluorescent Memory Fragment x3 from the Spacetime Wanderer |
| (name not captured) | Collect 6 to exchange for Elixir of Oblivion x3 from the Spacetime Wanderer |
| Six Gods' Boon | Collect 6 to exchange for a random Legendary item from the Spacetime Wanderer |
| Gambler's Dice | (description not captured) |
| Helio Rock | Collect 12 to exchange for a random Lv.21 Active Skill from the Spacetime Wanderer |
| Conquest's End | Collect 2 to exchange for a random World Boss Exclusive Item from the Spacetime Wanderer |
| Queen's Makeup Kit | Collect 5 to exchange for a random Legendary Necklace from the Spacetime Wanderer |
| Queen's Jewelry Box | Collect 4 to exchange for a random Legendary Ring from the Spacetime Wanderer |
| Fate Binding | Collect 5 to exchange for a random Legendary Belt from the Spacetime Wanderer |
| Six Gods' Blessing | Collect 6 to exchange for a random Legendary Armor from the Spacetime Wanderer |
| Confusing Memories | Collect 3 to exchange for a random high-rarity Fluorescent Memory Fragment from the Spacetime Wanderer |
| Antonius's Help | Collect 12 to exchange for a random Lv.21 Support Skill from the Spacetime Wanderer |
| Aureole Rock | Collect 12 to exchange for a random Lv.21 Precise Aura/Imbue Skill from the Spacetime Wanderer |
| Uros' Legacy | Collect 7 to exchange for a random Legendary Weapon from the Spacetime Wanderer |
| Treasure of the Frozen | Collect 8 to exchange for a random Glacial Abyss Exclusive Legendary item from the Spacetime Wanderer |
| Treasure of the Thunder | Collect 8 to exchange for a Thunder Wastes Exclusive Legendary item from the Spacetime Wanderer |
| Treasure of the Forge | Collect 8 to exchange for a Steel Forge Exclusive Legendary item from the Spacetime Wanderer |
| Treasure of the Lava Sea | Collect 8 to exchange for a Blistering Lava Sea Exclusive Legendary item from the Spacetime Wanderer |
| Treasure of the Voidlands | Collect 8 to exchange for a Voidlands Exclusive Legendary item from the Spacetime Wanderer |
| Lost History | Collect 9 to exchange for Wheel of Time from the Spacetime Wanderer |
| Birth Baptism | Collect 8 to exchange for Origin of Everything from the Spacetime Wanderer |
| Adventure to the Core | Collect 7 to exchange for A Terrible Price from the Spacetime Wanderer |
| Endless Night | Collect 5 to exchange for Sanguine Moon from the Spacetime Wanderer |
| Sightless Guide | Collect 3 to exchange for Lost Secrets from the Spacetime Wanderer |
| Destiny | Collect 9 to exchange for Fixed Destiny from the Spacetime Wanderer |
| Wish of Despair | Collect 8 to exchange for Blood Shield of Black Gorge from the Spacetime Wanderer |
| False Expectations | Collect 8 to exchange for Fury Shadow of Thunderlight from the Spacetime Wanderer |
| Burning Heart | Collect 7 to exchange for Valerie's Night Stroll from the Spacetime Wanderer |
| Origin of Everything | Collect 7 to exchange for Infinity from the Spacetime Wanderer |
| Wheel of Time | Collect 11 to exchange for Exquisite Box from the Spacetime Wanderer |
| Ouroboros | Collect 6 to exchange for Ring of Slaughter from the Spacetime Wanderer |
| Dragonslayer's Notes | Collect 8 to exchange for Fiend Crown from the Spacetime Wanderer |
| Echo of Despair | Collect 8 to exchange for King Lionheart's Yearning from the Spacetime Wanderer |
| Fixed Destiny | Collect 8 to exchange for Lighthunter's Belt from the Spacetime Wanderer |
| Ashen Legacy | Collect 9 to exchange for Sage's Insight from the Spacetime Wanderer |
| Master's Unfinished Piece (One-Handed Weapon) | Collect 8 to exchange for a random Lv.85 Normal One-Handed Weapon from the Spacetime Wanderer |
| Master's Unfinished Piece (Two-Handed Weapon) | Collect 12 to exchange for a random Lv.85 Normal Two-Handed Weapon from the Spacetime Wanderer |
| Master's Unfinished Piece (Armor/Shield) | Collect 9 to exchange for a random Lv.85 Normal Armor or Shield from the Spacetime Wanderer |
| Master's Unfinished Piece (Trinket) | Collect 12 to exchange for a random Lv.85 Normal Trinket from the Spacetime Wanderer |
| Clergy's Thinking | Collect 12 to exchange for a random Lv.21 Special Skill from the Spacetime Wanderer |
| Antonius' Research | Collect 12 to exchange for a random Lv.1 Special Skill from the Spacetime Wanderer |
| Maud's Guess | Collect 5 to exchange for a random Lv.85 Sleepless/Ominous Magic Gear from the Spacetime Wanderer |
| Maud's Resolve | Collect 8 to exchange for a random Lv.85 Sleepless/Ominous Rare Gear from the Spacetime Wanderer |
| Ichi Sandstorm | Collect 10 to exchange for Legendary Soldier Goggles from the Spacetime Wanderer |
# Homework Help: Financial math problems

1. Aug 11, 2010

### dragonfly1

Calculating returns: suppose you bought a 7 percent coupon bond one year ago for $893. The bond sells for $918 today.

a. Assuming a $1,000 face value, what was your total dollar return on this investment over the past year?
b. What was your total nominal rate of return on this investment over the past year?
c. If the inflation rate last year was 4 percent, what was your total real rate of return on this investment?

Can someone help solve this problem?

2. Aug 12, 2010

### Staff: Mentor

Being that you are new to this forum, you might not have had a chance to look at the Rules. In the section on homework help, it says that you must make an effort at solving the problem you post before anyone can give you any help. You should also include relevant formulas.
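For later readers (and without excusing the skipped effort): the textbook relationships here are coupon income plus price change for the dollar return, and the Fisher relation for the real rate. A sketch using the thread's numbers, with the division form of the Fisher equation as one common convention:

```python
def total_dollar_return(price_then, price_now, coupon_rate, face=1000.0):
    """Coupon income plus capital gain over the holding year."""
    coupon = coupon_rate * face
    return coupon + (price_now - price_then)

def nominal_rate(price_then, price_now, coupon_rate, face=1000.0):
    """Total dollar return divided by the purchase price."""
    return total_dollar_return(price_then, price_now, coupon_rate, face) / price_then

def real_rate(nominal, inflation):
    """Fisher relation: (1 + nominal) = (1 + real)(1 + inflation)."""
    return (1 + nominal) / (1 + inflation) - 1

R = nominal_rate(893, 918, 0.07)
print(round(total_dollar_return(893, 918, 0.07), 2))  # 95.0
print(round(R, 4))                                    # 0.1064
print(round(real_rate(R, 0.04), 4))                   # 0.0638
```

The common approximation real ≈ nominal - inflation would give about 6.6 percent; the exact division form above gives slightly less.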
# Forecasting accuracy?

Any thoughts on how far into the future to forecast? I'm working on forecasting our monthly web traffic in R. I'm using Holt-Winters and the dates Jan 2015 - Sep 2017. I want to forecast the next 12 months. I know the further out I get, the less accurate I am. Is there a good rule/ratio to follow? Thanks!

Probably your forecast should include both a point forecast and a variability/confidence band around the point. The variability band generally should widen as you forecast farther into the future. If your forecast incorporates this band, there should be a point beyond which your forecast becomes not useful because the variability becomes too high.

Long-term forecasting of web traffic in full generality is really quite difficult, and I don't know of an efficient method for long-term prediction (several months). In many cases Holt-Winters can't even extract useful yearly cycles, because you need several years of data, and anyway the world, trends, and fashions change from one year to the next. Holt-Winters works very well on weekly cycles but not much further, as far as I have observed. Also, it looks like every website or webpage behaves quite differently; there is a lot of noise and a lot of unpredictability.

Basically, the error will increase as you go forward, a bit like a basic random-walk prediction (forecast $\hat y_{t+h}=y_t$). If you have many time series, you can calculate an average error for the method that you use, using predicted vs. real values (with a burn-in safety margin to ignore the Holt-Winters initialization). However, there is no reason that the errors observed on the past of a single time series will generalize to the future. Maybe you can compare your error curve to the one obtained by the "silly" forecast $\hat y_{t+h}=y_t$ and see how much better you are.
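The random-walk comparison in the last answer is easy to check numerically: under the naive forecast $\hat y_{t+h}=y_t$ applied to a pure random walk, the mean absolute error grows with the horizon $h$ (roughly like $\sqrt{h}$). The simulation below uses synthetic data, not web traffic:

```python
import random

random.seed(0)

# Simulate a long random walk with unit-variance Gaussian steps.
n = 5000
y = [0.0]
for _ in range(n):
    y.append(y[-1] + random.gauss(0, 1))

def naive_mae(series, h):
    """Mean absolute error of the naive forecast y_hat[t+h] = y[t]."""
    errs = [abs(series[t + h] - series[t]) for t in range(len(series) - h)]
    return sum(errs) / len(errs)

# Error grows with horizon, which is the "silly" baseline any real
# method should beat before its long-range forecasts are worth using.
for h in (1, 3, 6, 12):
    print(h, round(naive_mae(y, h), 2))
```

Plotting your model's error curve against this baseline curve shows directly where the horizon stops being useful.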
Given that your coefficients are statistically significant and your model's errors have passed multiple "model specification tests," I would obtain confidence limits by re-sampling the model's errors via Monte Carlo, while also incorporating the psi weights to inflate the forecast variance due to auto-projection. The width of the limits will suggest to you how far ahead you can safely forecast.
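The Monte Carlo idea above can be sketched in a self-contained way: re-draw the model's past one-step errors to simulate many future paths, then read prediction limits off the simulated quantiles. Here the "model" is just a naive level forecast and the residuals are synthetic; a real application would use the fitted model's residuals and psi weights:

```python
import random

random.seed(1)

def bootstrap_limits(last_value, residuals, horizon, n_sims=2000, coverage=0.95):
    """Simulate future paths by re-drawing past residuals; return (lo, hi) per step."""
    paths = []
    for _ in range(n_sims):
        level, path = last_value, []
        for _ in range(horizon):
            level += random.choice(residuals)  # re-sampled model error
            path.append(level)
        paths.append(path)
    lo_i = int((1 - coverage) / 2 * n_sims)
    hi_i = n_sims - 1 - lo_i
    limits = []
    for h in range(horizon):
        step = sorted(p[h] for p in paths)
        limits.append((step[lo_i], step[hi_i]))
    return limits

# Toy residuals standing in for a fitted model's errors.
resid = [random.gauss(0, 5) for _ in range(100)]
lims = bootstrap_limits(100.0, resid, horizon=12)
for h, (lo, hi) in enumerate(lims, 1):
    print(h, round(hi - lo, 1))
```

The interval widths widen with the horizon; the step at which they exceed what your decision can tolerate is a practical "how far ahead" cutoff.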
## Populations

In statistics, a population includes all members of a defined group that we are studying for data-driven decisions.

### Learning Objectives

Give examples of statistical populations and sub-populations

### Key Takeaways

#### Key Points

• It is often impractical to study an entire population, so we often study a sample from that population to infer information about the larger population as a whole.
• Sometimes a government wishes to try to gain information about all the people living within an area with regard to gender, race, income, and religion. This type of information gathering over a whole population is called a census.
• A subset of a population is called a sub-population.

#### Key Terms

• heterogeneous: diverse in kind or nature; composed of diverse parts
• sample: a subset of a population selected for measurement, observation, or questioning to provide statistical information about the population

### Populations

When we hear the word population, we typically think of all the people living in a town, state, or country. This is one type of population. In statistics, the word takes on a slightly different meaning. A statistical population is a set of entities from which statistical inferences are to be drawn, often based on a random sample taken from the population. For example, if we are interested in making generalizations about all crows, then the statistical population is the set of all crows that exist now, ever existed, or will exist in the future. Since in this case and many others it is impossible to observe the entire statistical population, due to time constraints, constraints of geographical accessibility, and constraints on the researcher’s resources, a researcher would instead observe a statistical sample from the population in order to attempt to learn something about the population as a whole.
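The first key point, studying a sample to infer something about the whole population, can be sketched with a small simulation (all numbers synthetic):

```python
import random

random.seed(2)

# A large synthetic population we pretend we cannot fully observe
# (say, heights in centimetres).
population = [random.gauss(170, 10) for _ in range(100_000)]

# A random sample of manageable size stands in for the population.
sample = random.sample(population, 500)

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(round(pop_mean, 1), round(sample_mean, 1))
```

The sample mean lands close to the population mean even though only 0.5% of the population was observed, which is the whole reason sampling works.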
Sometimes a government wishes to try to gain information about all the people living within an area with regard to gender, race, income, and religion. This type of information gathering over a whole population is called a census.

Census: This is the logo for the Bureau of the Census in the United States.

### Sub-Populations

A subset of a population is called a sub-population. If different sub-populations have different properties, so that the overall population is heterogeneous, the properties and responses of the overall population can often be better understood if the population is first separated into distinct sub-populations. For instance, a particular medicine may have different effects on different sub-populations, and these effects may be obscured or dismissed if such special sub-populations are not identified and examined in isolation. Similarly, one can often estimate parameters more accurately if one separates out sub-populations. For example, the distribution of heights among people is better modeled by considering men and women as separate sub-populations.

## Samples

A sample is a set of data collected and/or selected from a population by a defined procedure.

### Learning Objectives

Differentiate between a sample and a population

### Key Takeaways

#### Key Points

• A complete sample is a set of objects from a parent population that includes all such objects that satisfy a set of well-defined selection criteria.
• An unbiased (representative) sample is a set of objects chosen from a complete sample using a selection process that does not depend on the properties of the objects.
• A random sample is defined as a sample where each individual member of the population has a known, non-zero chance of being selected as part of the sample.
#### Key Terms

• population: a group of units (persons, objects, or other items) enumerated in a census or from which a sample is drawn
• unbiased: impartial or without prejudice
• census: an official count of members of a population (not necessarily human), usually residents or citizens in a particular region, often done at regular intervals

### What is a Sample?

In statistics and quantitative research methodology, a data sample is a set of data collected and/or selected from a population by a defined procedure. Typically, the population is very large, making a census or a complete enumeration of all the values in the population impractical or impossible. The sample represents a subset of manageable size. Samples are collected and statistics are calculated from the samples so that one can make inferences or extrapolations from the sample to the population. This process of collecting information from a sample is referred to as sampling.

### Types of Samples

Samples: Online and phone-in polls produce biased samples because the respondents are self-selected. In self-selection bias, those individuals who are highly motivated to respond– typically individuals who have strong opinions– are over-represented, and individuals who are indifferent or apathetic are less likely to respond.

A complete sample is a set of objects from a parent population that includes all such objects that satisfy a set of well-defined selection criteria. For example, a complete sample of Australian men taller than 2 meters would consist of a list of every Australian male taller than 2 meters. It wouldn’t include German males, or tall Australian females, or people shorter than 2 meters. To compile such a complete sample requires a complete list of the parent population, including data on height, gender, and nationality for each member of that parent population.
In the case of human populations, such a complete list is unlikely to exist, but such complete samples are often available in other disciplines, such as complete magnitude-limited samples of astronomical objects.

An unbiased (representative) sample is a set of objects chosen from a complete sample using a selection process that does not depend on the properties of the objects. For example, an unbiased sample of Australian men taller than 2 meters might consist of a randomly sampled subset of 1% of Australian males taller than 2 meters. However, one chosen from the electoral register might not be unbiased since, for example, males aged under 18 will not be on the electoral register. In an astronomical context, an unbiased sample might consist of that fraction of a complete sample for which data are available, provided the data availability is not biased by individual source properties.

The best way to avoid a biased or unrepresentative sample is to select a random sample, also known as a probability sample. A random sample is defined as a sample wherein each individual member of the population has a known, non-zero chance of being selected as part of the sample. Several types of random samples are simple random samples, systematic samples, stratified random samples, and cluster random samples. A sample that is not random is called a non-random sample, or a non-probability sample. Some examples of non-random samples are convenience samples, judgment samples, and quota samples.

## Random Sampling

A random sample, also called a probability sample, is taken when each individual has an equal probability of being chosen for the sample.
### Learning Objectives

Categorize a random sample as a simple random sample, a stratified random sample, a cluster sample, or a systematic sample

### Key Takeaways

#### Key Points

• A simple random sample (SRS) of size $\text{n}$ consists of $\text{n}$ individuals from the population chosen in such a way that every set of $\text{n}$ individuals has an equal chance of being in the selected sample.
• Stratified sampling occurs when a population embraces a number of distinct categories and is divided into sub-populations, or strata. At this stage, a simple random sample would be chosen from each stratum and combined to form the full sample.
• Cluster sampling divides the population into groups, or clusters. Some of these clusters are randomly selected. Then, all the individuals in the chosen cluster are selected to be in the sample.
• Systematic sampling relies on arranging the target population according to some ordering scheme and then selecting elements at regular intervals through that ordered list.

#### Key Terms

• population: a group of units (persons, objects, or other items) enumerated in a census or from which a sample is drawn
• cluster: a significant subset within a population
• stratum: a category composed of people with certain similarities, such as gender, race, religion, or even grade level

### Simple Random Sample (SRS)

There is a variety of ways in which one could choose a sample from a population. A simple random sample (SRS) is one of the most typical ways. Also commonly referred to as a probability sample, a simple random sample of size n consists of n individuals from the population chosen in such a way that every set of n individuals has an equal chance of being in the selected sample. An example of an SRS would be drawing names from a hat. An online poll in which a person is asked to give their opinion about something is not random because only those people with strong opinions, either positive or negative, are likely to respond.
This type of poll doesn’t reflect the opinions of the apathetic.

Online Opinion Polls: Online and phone-in polls also produce biased samples because the respondents are self-selected. In self-selection bias, those individuals who are highly motivated to respond– typically individuals who have strong opinions– are over-represented, and individuals who are indifferent or apathetic are less likely to respond.

Simple random samples are not perfect and should not always be used. They can be vulnerable to sampling error because the randomness of the selection may result in a sample that doesn’t reflect the makeup of the population. For instance, a simple random sample of ten people from a given country will on average produce five men and five women, but any given trial is likely to over-represent one sex and under-represent the other. Systematic and stratified techniques, discussed below, attempt to overcome this problem by using information about the population to choose a more representative sample. In addition, SRS may also be cumbersome and tedious when sampling from an unusually large target population.

In some cases, investigators are interested in research questions specific to subgroups of the population. For example, researchers might be interested in examining whether cognitive ability as a predictor of job performance is equally applicable across racial groups. SRS cannot accommodate the needs of researchers in this situation because it does not provide sub-samples of the population. Stratified sampling, which is discussed below, addresses this weakness of SRS.

### Stratified Random Sample

When a population embraces a number of distinct categories, it can be beneficial to divide the population into sub-populations called strata. These strata must be in some way important to the response the researcher is studying. At this stage, a simple random sample would be chosen from each stratum and combined to form the full sample.
For example, let’s say we want to sample the students of a high school to see what type of music they like to listen to, and we want the sample to be representative of all grade levels. It would make sense to divide the students into their distinct grade levels and then choose an SRS from each grade level. Each sample would be combined to form the full sample.

### Cluster Sample

Cluster sampling divides the population into groups, or clusters. Some of these clusters are randomly selected. Then, all the individuals in the chosen cluster are selected to be in the sample. This process is often used because it can be cheaper and more time-efficient. For example, while surveying households within a city, we might choose to select 100 city blocks and then interview every household within the selected blocks, rather than interview random households spread out over the entire city.

### Systematic Sample

Systematic sampling relies on arranging the target population according to some ordering scheme and then selecting elements at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every $\text{k}$th element from then onward. In this case, $\text{k} = \frac{\text{population size}}{\text{sample size}}$. It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from within the first to the $\text{k}$th element in the list. A simple example would be to select every 10th name from the telephone directory (an ‘every 10th‘ sample, also referred to as ‘sampling with a skip of 10’).

## Random Assignment of Subjects

Random assignment helps eliminate the differences between the experimental group and the control group.

### Learning Objectives

Discover the importance of random assignment of subjects in experiments

### Key Takeaways

#### Key Points

• Researchers randomly assign participants in a study to either the experimental group or the control group. Dividing the participants randomly reduces group differences, thereby reducing the possibility that confounding factors will influence the results.
• By randomly assigning subjects to groups, researchers are able to feel confident that the groups are the same in terms of all variables except the one which they are manipulating.
• A randomly assigned group may statistically differ from the mean of the overall population, but this is rare.
• Random assignment became commonplace in experiments in the late 1800s due to the influence of researcher Charles S. Peirce.

#### Key Terms

• null hypothesis: A hypothesis set up to be refuted in order to support an alternative hypothesis; presumed true until statistical evidence in the form of a hypothesis test indicates otherwise.
• control: a separate group or subject in an experiment against which the results are compared where the primary variable is low or nonexistent

### Importance of Random Assignment

When designing controlled experiments, such as testing the effects of a new drug, statisticians often employ an experimental design, which by definition involves random assignment. Random assignment, or random placement, assigns subjects to treatment and control (no treatment) groups on the basis of chance rather than any selection criteria. The aim is to produce experimental groups with no statistically significant differences prior to the experiment, so that any changes between groups observed after experimental activities have been completed can be attributed to the treatment effect rather than to other, pre-existing differences among individuals between the groups.

Control Group: Take identical growing plants, randomly assign them to two groups, and give fertilizer to one of the groups. If there are differences between the fertilized plant group and the unfertilized “control” group, these differences may be due to the fertilizer.
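The four random-sampling schemes described above, plus random assignment, each reduce to a few lines with Python's `random` module; the function names are my own:

```python
import random

random.seed(3)

def simple_random_sample(pop, n):
    """Every set of n individuals is equally likely to be chosen."""
    return random.sample(pop, n)

def stratified_sample(strata, n_per_stratum):
    """One SRS per stratum, combined into the full sample."""
    return [x for stratum in strata.values()
            for x in random.sample(stratum, n_per_stratum)]

def cluster_sample(clusters, n_clusters):
    """Select whole clusters at random, then take everyone inside them."""
    chosen = random.sample(list(clusters), n_clusters)
    return [x for c in chosen for x in clusters[c]]

def systematic_sample(pop, n):
    """Random start within the first k positions, then every kth element."""
    k = len(pop) // n            # the skip: population size / sample size
    start = random.randrange(k)  # not automatically the first in the list
    return pop[start::k][:n]

def random_assignment(subjects):
    """Shuffle, then split into treatment and control halves."""
    shuffled = subjects[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

students = list(range(100))
print(len(simple_random_sample(students, 10)))
print(len(systematic_sample(students, 10)))
treat, control = random_assignment(students)
print(len(treat), len(control))
```

Note that the skip k for the systematic sample is the population size divided by the sample size, exactly as in the formula above.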
In experimental design, random assignment of participants to treatment and control groups helps to ensure that any differences between or within the groups are not systematic at the outset of the experiment. Random assignment does not guarantee that the groups are “matched” or equivalent; only that any differences are due to chance. Random assignment is the desired assignment method because it provides control for all attributes of the members of the samples (in contrast to matching on only one or more variables) and provides the mathematical basis for estimating the likelihood of group equivalence for characteristics one is interested in, both for pre-treatment checks on equivalence and the evaluation of post-treatment results using inferential statistics.

### Random Assignment Example

Consider an experiment with one treatment group and one control group. Suppose the experimenter has recruited a population of 50 people for the experiment: 25 with blue eyes and 25 with brown eyes. If the experimenter were to assign all of the blue-eyed people to the treatment group and the brown-eyed people to the control group, the results may turn out to be biased. When analyzing the results, one might question whether an observed effect was due to the application of the experimental condition or was in fact due to eye color.

With random assignment, one would randomly assign individuals to either the treatment or control group, and therefore have a better chance at detecting if an observed change were due to chance or due to the experimental treatment itself. If a randomly assigned group is compared to the mean, it may be discovered that they differ statistically, even though they were assigned from the same group.
To express this same idea statistically: if a test of statistical significance is applied to randomly assigned groups to test the difference between sample means against the null hypothesis that they are equal to the same population mean (i.e., population mean of differences = 0), given the probability distribution, the null hypothesis will sometimes be “rejected,” that is, deemed implausible. In other words, the groups would be sufficiently different on the variable tested to conclude statistically that they did not come from the same population, even though they were assigned from the same total group. In the example above, using random assignment may create groups that result in 20 blue-eyed people and 5 brown-eyed people in the same group. This is a rare event under random assignment, but it could happen, and when it does, it might add some doubt to the causal agent in the experimental hypothesis.

### History of Random Assignment

Randomization was emphasized in the theory of statistical inference of Charles S. Peirce in “Illustrations of the Logic of Science” (1877–1878) and “A Theory of Probable Inference” (1883). Peirce applied randomization in the Peirce-Jastrow experiment on weight perception. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. His experiment inspired other researchers in psychology and education, and led to a research tradition of randomized experiments in laboratories and specialized textbooks in the nineteenth century.

## Surveys or Experiments?

Surveys and experiments are both statistical techniques used to gather data, but they are used in different types of studies.
### Learning Objectives

Distinguish between when to use surveys and when to use experiments

### Key Takeaways

#### Key Points

• A survey is a technique that involves questionnaires and interviews of a sample population with the intention of gaining information, such as opinions or facts, about the general population.
• An experiment is an orderly procedure carried out with the goal of verifying, falsifying, or establishing the validity of a hypothesis.
• A survey would be useful if trying to determine whether or not people would be interested in trying out a new drug for headaches on the market. An experiment would test the effectiveness of this new drug.

#### Key Terms

• placebo: an inactive substance or preparation used as a control in an experiment or test to determine the effectiveness of a medicinal drug

### What is a Survey?

Survey methodology involves the study of the sampling of individual units from a population and the associated survey data collection techniques, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Statistical surveys are undertaken with a view towards making statistical inferences about the population being studied, and this depends strongly on the survey questions used. Polls about public opinion, public health surveys, market research surveys, government surveys, and censuses are all examples of quantitative research that use contemporary survey methodology to answer questions about a population. Although censuses do not include a “sample,” they do include other aspects of survey methodology, like questionnaires, interviewers, and nonresponse follow-up techniques. Surveys provide important information for all kinds of public information and research fields, like marketing research, psychology, health, and sociology.
Since survey research is almost always based on a sample of the population, the success of the research is dependent on the representativeness of the sample with respect to a target population of interest to the researcher.

### What is an Experiment?

An experiment is an orderly procedure carried out with the goal of verifying, falsifying, or establishing the validity of a hypothesis. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in their goal and scale, but always rely on repeatable procedure and logical analysis of the results in a method called the scientific method. A child may carry out basic experiments to understand the nature of gravity, while teams of scientists may take years of systematic investigation to advance the understanding of a phenomenon. Experiments can vary from personal and informal (e.g. tasting a range of chocolates to find a favorite) to highly controlled (e.g. tests requiring a complex apparatus overseen by many scientists that hope to discover information about subatomic particles). Uses of experiments vary considerably between the natural and social sciences.

Scientific Method: This flow chart shows the steps of the scientific method.

In statistics, controlled experiments are often used. A controlled experiment generally compares the results obtained from an experimental sample against a control sample, which is practically identical to the experimental sample except for the one aspect whose effect is being tested (the independent variable). A good example of this would be a drug trial, where the effects of the actual drug are tested against a placebo.

### When is One Technique Better Than the Other?

Surveys and experiments are both techniques used in statistics. They have similarities, but an in-depth look into these two techniques will reveal how different they are.
When a businessman wants to market his products, it’s a survey he will need and not an experiment. On the other hand, a scientist who has discovered a new element or drug will need an experiment, and not a survey, to prove its usefulness. A survey involves asking different people about their opinion on a particular product or about a particular issue, whereas an experiment is a comprehensive study about something with the aim of proving it scientifically. They both have their place in different types of studies.
# Help with free body diagram for a pendulum 1. Apr 5, 2015 ### theone 1. The problem statement, all variables and given/known data I want to sum the forces perpendicular to the pendulum and sum the moments about the pendulum's center of gravity. 2. Relevant equations 3. The attempt at a solution $P\sin \theta - mg\cos \theta - N\cos \theta = -m\ddot x\cos \theta + ml\ddot \theta$ $-Pl\sin \theta - Nl\cos \theta + K_t\theta= I\ddot\theta$ I want to know if these are correct, especially the plus/minus signs. 2. Apr 7, 2015 ### haruspex You have a cosine and sine swapped in the first equation. Your second equation looks doubtful in several respects. What point are you taking moments about? Is it ok to take moments about a point that is accelerating? Do you even need to take moments, i.e. does the rod have mass? 3. Apr 8, 2015 ### theone The rod does have mass. The point was the center of gravity of the combined rod/ball object. l is the distance from the pivot to the center of gravity. I needed a moment equation because I wanted to eliminate the $$P\sin\theta - N\cos\theta$$ in the first equation. I don't really know if it's ok to take moments about that point. I also don't understand the $$ml\ddot\theta$$ term in the first equation... 4. Apr 9, 2015 ### theone For the sum of moments about the pendulum's COG, is the moment due to P negative and the moment due to N positive? 5. Apr 9, 2015 ### haruspex With the usual anticlockwise convention, yes. 6. Apr 9, 2015 ### theone For the first equation, which two terms did you identify as wrong? Is it the P and N terms? 7. Apr 9, 2015 ### haruspex In each of the terms involving a trig function, the other factor is related to acceleration or force in either the vertical or horizontal direction. You should expect that all of one direction take cos, and all of the other direction take sin. 8.
Apr 9, 2015 ### theone But for P and mg, which are in opposite directions, I am getting the result that both take sin. 9. Apr 9, 2015 ### haruspex They're both vertical, so should match the same trig function. 10. Apr 9, 2015 ### theone So is the final answer, including the signs, correct: $$P\sin\theta - mg\sin\theta - N\cos\theta = m\ddot x \cos\theta + ml\ddot\theta$$ 11. Apr 9, 2015 ### haruspex Not quite. The left hand side appears to be taking up and left as positive, the right hand side down and right as positive. 12. Apr 9, 2015 ### theone I meant for the $$m\ddot x \cos\theta$$ to be negative. Is that the only mistake? 13. Apr 9, 2015 ### haruspex What about the $ml\ddot \theta$ term? Also, you said the rod has mass, so that should be in the equation, and you need a contribution from the torsion spring. 14. Apr 9, 2015 ### theone For the rod mass, I considered the rod and ball a single object. Is that ok? I don't know what the contribution from the torsion spring is. And like I said earlier, I don't really understand the $ml\ddot \theta$ term. 15. Apr 9, 2015 ### haruspex The $ml\ddot \theta$ term is the acceleration of the mass (in the tangential direction) relative to the acceleration of the block. The torsion spring applies a torque. What is the relationship between force and torque? If you are considering the rod and ball as one object, then you need to figure out the mass centre of the combination and see how that affects the $ml\ddot \theta$ term and the torsion spring term. 16. Apr 9, 2015 ### theone The spring applies ${K_t\theta}$ of torque, so then the force it applies is $\frac{K_t\theta}{l}$, with l being the length of the rod from the pivot to the combined COG? 17. Apr 9, 2015 ### haruspex That's the force applied to the mass, if the rod is massless. But as I wrote, if you consider the mass and rod as a single unit, you need to deal in terms of their common mass centre, and how far that will be from the joint. 18.
Apr 9, 2015 ### theone So is it... $$P\sin\theta - mg\sin\theta - N\cos\theta + \frac{K_t \theta}{l} = m\ddot x \cos\theta - ml\ddot \theta \cos\theta$$ with l being the distance from the base of the pivot to the combined object's COG? The force due to the torsional spring acts perpendicular to the length of the rod, right? Last edited: Apr 9, 2015 19. Apr 9, 2015 ### haruspex Yes, except that you've made the $m\ddot x$ term positive again (you said it was meant to be negative), and you've introduced a cos factor on the $\ddot \theta$ term. 20. Apr 9, 2015 ### theone Thanks. And finally, does the force due to the torsional spring act up and left or down and right? In the equation above, I put it as up and left. Also, if I were to take a force balance on the cart, would I need to include anything other than P and N to account for the torsional spring?
# audioPackFormat (HOA)

The audioPackFormat with typeDefinition 'HOA' is used for scene-based audio (such as Higher Order Ambisonics). Many audioPackFormat definitions with the 'HOA' type are already defined in the Common Definitions, so they do not need to be defined explicitly when generating ADM metadata. It is common for a pack of HOA components/signals to share the same normalization, NFC compensation and/or screen-relation. However, when these parameters are specified within an audioBlockFormat, those values override the ones given in the audioPackFormat.

## Attributes

There are no additional attributes defined for the audioPackFormat with typeDefinition 'HOA'. See Common Attributes for the list of common ones.

## Sub-elements

In addition to the common sub-elements, the following sub-elements are defined for the audioPackFormat with typeDefinition 'HOA'.

| Sub-element | Description | Example | Quantity | Default |
| --- | --- | --- | --- | --- |
| audioChannelFormatIDRef | Reference to an audioChannelFormat | AC_00040001 | 0...* | - |
| audioPackFormatIDRef | Reference to an audioPackFormat | AP_00040002 | 0...* | - |
| absoluteDistance | Absolute distance in metres | 4.5 | 0 or 1 | - |
| normalization | Indicates the normalization scheme of the HOA content (N3D, SN3D, FuMa). | N3D | 0 or 1 | SN3D |
| nfcRefDist | Indicates the reference distance (in metres) of the loudspeaker setup for near-field compensation (NFC). If no nfcRefDist is defined or the value is 0, NFC is not necessary. | 2 | 0 or 1 | 0 |
| screenRef | Indicates whether the content is screen-related (flag is equal to 1) or not (flag is equal to 0). | 0 | 0 or 1 | 0 |

## Example

```xml
<audioPackFormat audioPackFormatID="AP_00040001" audioPackFormatName="3D_order1_SN3D_ACN" typeLabel="0004" typeDefinition="HOA">
  <audioChannelFormatIDRef>AC_00040001</audioChannelFormatIDRef>
  <audioChannelFormatIDRef>AC_00040002</audioChannelFormatIDRef>
  <audioChannelFormatIDRef>AC_00040003</audioChannelFormatIDRef>
  <audioChannelFormatIDRef>AC_00040004</audioChannelFormatIDRef>
</audioPackFormat>
```
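As a sketch of how a consumer of this metadata might read the element above and apply the documented sub-element defaults (normalization SN3D, nfcRefDist 0, screenRef 0), here is a minimal Python example using the standard library's XML parser. The `parse_hoa_pack` helper and the dictionary layout are illustrative assumptions, not part of any ADM library.

```python
import xml.etree.ElementTree as ET

# The HOA audioPackFormat example from this page.
ADM_XML = """
<audioPackFormat audioPackFormatID="AP_00040001"
                 audioPackFormatName="3D_order1_SN3D_ACN"
                 typeLabel="0004" typeDefinition="HOA">
  <audioChannelFormatIDRef>AC_00040001</audioChannelFormatIDRef>
  <audioChannelFormatIDRef>AC_00040002</audioChannelFormatIDRef>
  <audioChannelFormatIDRef>AC_00040003</audioChannelFormatIDRef>
  <audioChannelFormatIDRef>AC_00040004</audioChannelFormatIDRef>
</audioPackFormat>
"""

def parse_hoa_pack(xml_text):
    """Read an HOA audioPackFormat element, filling in the documented defaults
    for sub-elements that are absent (normalization=SN3D, nfcRefDist=0, screenRef=0)."""
    pack = ET.fromstring(xml_text)

    def sub(name, default):
        el = pack.find(name)
        return el.text if el is not None else default

    return {
        "id": pack.get("audioPackFormatID"),
        "channels": [el.text for el in pack.findall("audioChannelFormatIDRef")],
        "normalization": sub("normalization", "SN3D"),
        "nfcRefDist": float(sub("nfcRefDist", "0")),  # 0 means NFC not needed
        "screenRef": int(sub("screenRef", "0")),
    }

info = parse_hoa_pack(ADM_XML)
print(info["id"], info["normalization"], len(info["channels"]))
```

Since the example pack omits all three optional sub-elements, the parsed result falls back to the defaults from the table above.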
# Integrate sin^4(x) 1. Jan 2, 2006 ### teng125 May I know how to integrate $\sin^4 x$? The answer is $\frac{1}{32}(12x - 8\sin 2x + \sin 4x)$. 2. Jan 2, 2006 ### VietDao29 Again, you can use Power-reduction formulas. Then use some Product-to-sum identities; your goal is to convert that sine to the power of 4 into sine or cosine functions to the power of 1. Now let's first split $\sin^4 x$ into $(\sin^2 x)(\sin^2 x)$. Can you go from here? 3. Jan 2, 2006 ### teng125 I tried to substitute using $\cos 2x = 1 - 2\sin^2 x$ but couldn't get it. 4. Jan 2, 2006 ### VietDao29 So you have: $\cos(2x) = \cos^2 x - \sin^2 x = 2\cos^2 x - 1 = 1 - 2\sin^2 x$. From there, rearrange them a bit, and you will have: $$\cos ^ 2 x = \frac{1 + \cos(2x)}{2} \quad \mbox{and} \quad \sin ^ 2 x = \frac{1 - \cos(2x)}{2}$$ These are called Power-reduction formulas. So we will now use $$\sin ^ 2 x = \frac{1 - \cos(2x)}{2}.$$ $$\int \sin ^ 4 x \, dx = \int (\sin ^ 2 x) ^ 2 \, dx = \int \left( \frac{1 - \cos(2x)}{2} \right) ^ 2 dx = \frac{1}{4} \int ( 1 - \cos(2x) ) ^ 2 dx$$ $$= \frac{1}{4} \int ( 1 - 2 \cos(2x) + \cos ^ 2 (2x)) dx.$$ Now again use the Power-reduction formula for $\cos^2(2x)$. Can you go from here? 5. Jan 2, 2006 ### teng125 Yeah, that's where I got stuck, because I don't know how to get the $\sin 4x$. How do I obtain the $\frac{1}{32}\sin 4x$? 6. Jan 2, 2006 ### VietDao29 I did tell you to use the Power-reduction formula for $\cos^2(2x)$; it's the last line of my post above (namely, post #4 of this thread). Since you have: $$\cos ^ 2 x = \frac{1 + \cos(2x)}{2},$$ that means: $$\cos ^ 2 (2x) = \frac{1 + \cos(2 \times (2x))}{2} = \frac{1 + \cos(4x)}{2}.$$ Can you go from here? 7. Jan 2, 2006 ### teng125 Oh, OK, I see it now. Thanks very much!
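The closed form derived in the thread can also be checked numerically. The stdlib-only Python sketch below compares the claimed antiderivative against a Simpson's-rule integration of $\sin^4 x$; the function names `F` and `simpson` are just illustrative choices.

```python
import math

def F(x):
    """The claimed antiderivative: (12x - 8 sin 2x + sin 4x) / 32."""
    return (12 * x - 8 * math.sin(2 * x) + math.sin(4 * x)) / 32

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, b = 0.0, 2.0
numeric = simpson(lambda x: math.sin(x) ** 4, a, b)
exact = F(b) - F(a)
print(abs(numeric - exact))  # a value near zero
```

If the formula were wrong, the two results would disagree for some interval; agreement to within numerical error on an arbitrary interval is strong evidence the antiderivative is correct.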
OBJECTIVES: To quantify the prevalence of parental vaccine hesitancy (VH) in the United States and examine the association of VH with sociodemographics and childhood influenza vaccination coverage.
METHODS: A 6-question VH module was included in the 2018 and 2019 National Immunization Survey-Flu, a telephone survey of households with children age 6 months to 17 years.
RESULTS: The percentage of children having a parent reporting they were “hesitant about childhood shots” was 25.8% in 2018 and 19.5% in 2019. The prevalence of concern about the number of vaccines a child gets at one time impacting the decision to get their child vaccinated was 22.8% in 2018 and 19.1% in 2019; the prevalence of concern about serious, long-term side effects impacting the parent’s decision to get their child vaccinated was 27.3% in 2018 and 21.7% in 2019. Only small differences in VH by sociodemographic variables were found, except for an 11.9 percentage point higher prevalence of “hesitant about childhood shots” and 9.9 percentage point higher prevalence of concerns about serious, long-term side effects among parents of Black compared with white children. In both seasons studied, children of parents reporting they were “hesitant about childhood shots” had 26 percentage points lower influenza vaccination coverage compared with children of parents not reporting hesitancy.
CONCLUSIONS: One in 5 children in the United States have a parent who is vaccine hesitant, and hesitancy is negatively associated with childhood influenza vaccination. Monitoring VH could help inform immunization programs as they develop and target methods to increase vaccine confidence and vaccination coverage.
What’s Known on This Subject: Vaccine hesitancy has contributed to large outbreaks of vaccine-preventable diseases in several countries, including the United States.
This is the first report on prevalence of vaccine hesitancy among parents of children 6 months through 17 years in the United States using a survey module developed by Centers for Disease Control and Prevention and examining the association with childhood influenza vaccination coverage. Vaccine hesitancy (VH) has contributed to large outbreaks of vaccine-preventable diseases in several countries, including the United States.14  Although there is a lack of consensus on the definition of VH, it can be defined as the mental state of holding back in doubt or indecision regarding vaccination.5,6  VH may or may not lead a person to refuse or delay vaccinations for themselves or their children.5,7  Understanding the contributions of VH, among other vaccination barriers, is needed to inform improvements to vaccination programs. Researchers have noted a need for strategies to address VH, including monitoring VH over time by using a standard set of questions at the national level and at a level that allows for analysis of geographic clustering of VH.8,9  In 2018, the National Center for Health Statistics (NCHS) published a set of VH questions developed and tested at the NCHS Questionnaire Design Research Laboratory.10  This set of 6 questions was included in the National Immunization Survey (NIS) family of surveys to evaluate their performance in the field and collect updated information related to parental VH. Our objectives with this study were to quantify national prevalence of parental VH in the United States for children ages 6 months through 17 years, examine sociodemographic variables associated with VH, and examine the association of VH with childhood influenza vaccination coverage. 
Influenza vaccination coverage remains low and lags behind other childhood vaccines11; for the 2018–2019 influenza season, coverage with at least 1 dose was 62.6% among children.12  This article expands on some previous studies of the association of parental VH and child influenza vaccination coverage.13–17 During April to June of 2018 and 2019, the NCHS VH module of 6 questions was included in the NIS.20  The NIS is a family of surveys using a national, state-stratified, list-assisted random-digit-dialed cellular telephone sample of households with children in the United States. Households with children aged 19 to 35 months during each calendar quarter of data collection are eligible for the NIS-Child, and households with children aged 13 to 17 years on the date of interview are eligible for the NIS-Teen. During October through June for each influenza season, households with children aged 6 to 18 months or 13 to 17 years not eligible for NIS-Child or NIS-Teen are eligible for a short child influenza module.12  The 1-minute VH module was included on the child influenza module, NIS-Child, and NIS-Teen. Data from these 3 surveys are routinely combined and referred to as the NIS-Flu.12  The response rates ranged across the survey components from 22.8% to 24.4% (2018) and 23.1% to 24.6% (2019). The study sample sizes were n = 36 184 (2018) and n = 39 617 (2019). Because the VH questions were designed to measure VH about all childhood vaccinations, they were placed at a point in the survey immediately after influenza vaccination questions and the following introductory text was added: “The next set of questions are about all recommended childhood vaccines, not just flu vaccination.” Influenza vaccination status was assessed with the questions: “Since July 1, 2017 [or 2018] has [child] had a flu vaccination? There are two types of flu vaccinations.
One is a shot and the other is a spray, mist or drop in the nose.” Sociodemographic characteristics were based on respondent report; the variables included in this study were child’s age and race and ethnicity, household income or poverty level, number of children in the household, mother’s education, urban–rural residence, and the relationship to the child of the person completing the survey. Respondents to the NIS are those who are knowledgeable about the child’s vaccinations. Respondents were predominantly the child’s mother (59.3% in 2018, and 61.6% in 2019), whereas approximately one-third were the child’s father and <10% were another family member. For the purpose of succinctness in this study, we refer to the respondent as the parent. Proportions of responses to the VH questions were calculated overall and stratified by sociodemographic variables. Adjusted prevalence by each sociodemographic variable was also estimated by using predicted marginals from multivariable logistic regression models including main effects. Adjusted prevalence estimates were similar to unadjusted estimates and are reported in Table 1. For ease of analysis and interpretation, the 4 response categories of the “overall how hesitant are you” question were collapsed into 2 categories, combining “not at all hesitant” and “not that hesitant” responses together and “somewhat” and “very hesitant” responses together. For all 6 questions, a small percentage of respondents said they do not know or did not answer the question; we did not exclude the children of these parents from the analyses because we did not consider these responses to be missing at random given the nature of the hesitancy questions (see Fig 1 footnote for coding details). The recoding did not have an impact on overall results from the small percentages missing (Fig 1). For all sociodemographic variables in the NIS-Flu, except for income, missing responses are routinely imputed during data file processing. 
There are no missing values for child vaccination status because a completion for the NIS-Flu is defined as completing the survey at least through the vaccination status question.
TABLE 1 Sociodemographic Variables Associated With Parental Prevalence of VH, United States, 2019, NIS-Flu
Values are adjusted percentages (95% CI); see footnote a for adjustment details. The six columns correspond to the six VH survey questions:
(1) Overall, how hesitant about childhood shots would you consider yourself to be? (% reporting somewhat or very hesitant)
(2) Is child administered vaccines following a standard schedule, or some other schedule, such as the Sears schedule? (% reporting some other schedule)
(3) Did concerns about the number of vaccines child gets at one time impact your decision to get child vaccinated? (% reporting yes)
(4) Did concern about serious, long-term side effects impact your decision to get child vaccinated? (% reporting yes)
(5) Do you personally know anyone who has had a serious, long-term side effect from a vaccine? (% reporting yes)
(6) Is child’s doctor or health provider your most trusted source of information about childhood vaccines? (% reporting no)
Child’s age
6–23 mo (referent): 20.3 (18.1–22.4) | 6.7 (5.3–8.1) | 22.4 (20.2–24.5) | 21.9 (19.7–24.2) | 12.3 (10.5–14.1) | 14.5 (12.4–16.5)
2–4 y: 21.1 (19.0–23.2) | 7.0 (5.8–8.2) | 22.7 (20.5–24.8) | 21.8 (19.7–23.8) | 13.8 (11.9–15.7) | 14.9 (12.9–16.8)
5–12 y: 19.9 (18.8–21.0) | 6.4 (5.8–7.1) | 18.4 (17.4–19.5)* | 22.1 (20.9–23.2) | 13.0 (12.1–14.0) | 14.4 (13.4–15.3)
13–17 y: 17.7 (16.4–19.1)* | 5.1 (4.3–6.0) | 17.3 (16.0–18.7)* | 21.0 (19.6–22.4) | 14.3 (13.1–15.5) | 14.3 (13.1–15.5)
Child’s race and ethnicity (see footnote b)
Non-Hispanic white only (referent): 17.5 (16.4–18.5) | 5.9 (5.3–6.6) | 18.0 (17.1–19.0) | 19.9 (18.9–20.9) | 13.6 (12.7–14.5) | 14.5 (13.6–15.4)
Non-Hispanic Black only: 29.4 (26.6–32.3)* | 5.5 (4.2–6.7) | 22.1 (19.7–24.4)* | 29.8 (27.1–32.5)* | 12.4 (10.3–14.6) | 15.5 (13.4–17.6)
Hispanic: 18.1 (16.3–19.9) | 7.1 (5.8–8.3) | 19.2 (17.2–21.2) | 19.7 (17.8–21.6) | 13.5 (11.6–15.4) | 13.1 (11.4–14.8)
Non-Hispanic, other or multiple races: 18.9 (16.9–20.9) | 5.6 (4.5–6.7) | 19.8 (17.6–22.1) | 24.3 (22.0–26.5)* | 13.9 (12.1–15.8) | 15.4 (13.2–17.5)
Household income or poverty level (see footnote c)
Over poverty level, >$75 000/y (referent): 15.9 (14.6–17.2) | 5.6 (4.7–6.4) | 17.6 (16.2–18.9) | 19.0 (17.7–20.3) | 13.1 (12.0–14.3) | 14.1 (13.1–15.2)
Over poverty level, ≤$75 000/y: 21.2 (19.7–22.7)* | 5.9 (5.1–6.6) | 20.2 (18.7–21.7)* | 23.1 (21.6–24.7)* | 13.8 (12.5–15.2) | 15.0 (13.6–16.4)
At or below poverty level: 20.8 (18.5–23.1)* | 6.3 (5.1–7.5) | 18.7 (16.6–20.8) | 23.3 (20.9–25.6)* | 12.2 (10.4–14.1) | 12.6 (10.8–14.4)
Income not reported: 25.2 (22.7–27.6)* | 8.5 (6.9–10.0)* | 22.4 (20.0–24.8)* | 25.9 (23.5–28.3)* | 15.3 (13.2–17.4) | 16.3 (14.2–18.4)
No. children living in the household
1 child (referent): 18.4 (17.0–19.8) | 5.4 (4.6–6.2) | 19.1 (17.7–20.6) | 22.1 (20.6–23.6) | 11.8 (10.6–12.9) | 13.6 (12.4–14.8)
2 children: 19.1 (17.8–20.4) | 5.7 (5.0–6.4) | 18.4 (17.2–19.6) | 20.9 (19.7–22.1) | 12.8 (11.8–13.9) | 13.5 (12.5–14.6)
3 or more children: 20.8 (19.3–22.3)* | 7.4 (6.4–8.3)* | 19.8 (18.4–21.3) | 22.3 (20.9–23.8) | 15.8 (14.4–17.3)* | 16.4 (15.0–17.7)*
Mother’s education level
Less than high school (referent): 21.0 (18.0–24.0) | 6.5 (4.9–8.1) | 16.6 (14.2–18.9) | 17.6 (15.1–20.1) | 9.4 (7.1–11.6) | 10.8 (8.8–12.9)
High school or equivalent: 22.2 (20.2–24.2) | 6.0 (4.9–7.0) | 18.6 (16.8–20.5) | 23.7 (21.8–25.7)* | 12.6 (10.9–14.2)* | 11.9 (10.4–13.3)
Some college: 21.0 (19.3–22.6) | 6.6 (5.6–7.5) | 20.1 (18.5–21.8)* | 24.3 (22.5–26.0)* | 16.2 (14.6–17.8)* | 16.0 (14.6–17.5)*
≥ college degree: 16.9 (15.8–18.1)* | 5.9 (5.1–6.6) | 19.3 (18.0–20.6) | 20.5 (19.3–21.7) | 13.3 (12.3–14.4)* | 15.4 (14.2–16.6)*
Who completed the survey?
Child’s mother (referent): 19.8 (18.8–20.8) | 5.9 (5.3–6.4) | 19.4 (18.4–20.4) | 21.6 (20.6–22.6) | 14.3 (13.4–15.2) | 13.2 (12.3–14.0)
Child’s father: 20.0 (18.5–21.6) | 6.5 (5.6–7.4) | 19.2 (17.7–20.6) | 22.9 (21.4–24.4) | 11.6 (10.3–12.9)* | 16.3 (14.9–17.7)
Other: 15.6 (13.3–17.8)* | 6.8 (4.9–8.7) | 16.2 (13.9–18.5)* | 18.5 (16.0–20.9)* | 14.2 (11.7–16.8) | 17.1 (14.5–19.8)*
Urban–rural residence (see footnote d)
Urban (MSA, central city) (referent): 19.1 (17.6–20.6) | 5.9 (5.0–6.7) | 19.6 (18.0–21.3) | 20.5 (19.0–22.1) | 11.8 (10.5–13.1) | 14.2 (12.8–15.6)
Suburban (MSA, noncentral city): 19.2 (18.1–20.2) | 6.0 (5.4–6.6) | 18.7 (17.7–19.6) | 22.2 (21.1–23.2) | 14.1 (13.1–15.0)* | 14.2 (13.3–15.1)
Rural (non-MSA): 21.6 (19.6–23.5)* | 7.3 (6.0–8.6) | 19.9 (18.1–21.7) | 22.0 (20.1–23.9) | 13.8 (12.2–15.4) | 16.0 (14.4–17.7)
MSA, metropolitan statistical area.
a Adjusted estimates are based on multivariable logistic regression models with the hesitancy question as the dependent variable and the following variables as independent variables: child’s age, child’s race and ethnicity, household income, number of children in the household, mother’s education, respondent relationship to the child, and MSA of residence.
b Race of child was reported by parent or guardian respondent. Children of Hispanic ethnicity may be of any race. Children identified as multiple races had >1 race category selected.
c Income and poverty level was defined on the basis of total family income in the past calendar year, and the US Census poverty thresholds for that year specified for the applicable family size and number of children <18 y. Poverty thresholds are available at http://www.census.gov/hhes/www/poverty/data/threshld/index.html.
d MSA was based on parent or guardian respondent-reported city, state, county, and zip code of residence by using the MSA definition file (https://www.census.gov/programs-surveys/metro-micro.html).
* Statistically significant at P < .05 in the APD compared with the referent group.
FIGURE 1 Prevalence of VH in the United States among parents of children age 6 months to 17 years, United States, 2018 and 2019, NIS-Flu. The recoding of do not know and refused responses to the 6 questions and their combined prevalence follows, ordered according to presentation in this figure: (1) grouped with nonhesitant (0.7% and 0.7%, for 2018 and 2019, respectively), (2) grouped with “standard schedule” (6.7% and 5.1%), (3) grouped with no concern (1.1% and 0.9%), (4) grouped with no concern (1.0% and 0.8%), (5) grouped with yes (1.1% and 1.0%), (6) grouped with no (1.0% and 1.0%).
The association of child influenza vaccination coverage with hesitancy variables was tested by using multivariable logistic regression models. One model for each hesitancy variable was run, with the dependent variable being influenza vaccination status and independent variables being the one hesitancy variable and all of the sociodemographic variables. Adjusted prevalence and adjusted prevalence differences (APDs) were calculated from all models with significance tests based on the APDs. As a partial examination of the interrelationship of the 6 hesitancy variables, we stratified responses by the self-reported overall hesitancy question and calculated the prevalence of the other VH questions. All analyses were weighted to population totals and to adjust for households having multiple telephone lines, unit nonresponse, and noncoverage of noncellular-telephone households. A 2-sided significance level of 0.05 was adopted for all statistical tests; comparisons in the text described as different, higher, or lower were statistically significant. Analyses were conducted using SAS, version 9.4 (SAS Institute, Inc, Cary, NC) and SUDAAN (version 11.0.3) to account for the complex survey design. The percentage of children aged 6 months through 17 years in the United States having a parent who said they were hesitant about childhood shots was 25.8% (7.5% very hesitant and 18.3% somewhat hesitant) in 2018 and 19.5% (5.6% very hesitant and 13.8% somewhat hesitant) in 2019 (Fig 1).
In both survey years, 6% of children had a parent reporting using a nonstandard vaccine schedule. The prevalence of concern about the number of vaccines a child gets at one time impacting the parent’s decision to get their child vaccinated was 22.8% in 2018 and 19.1% in 2019, whereas the prevalence of concern about serious, long-term side effects impacting the parent’s decision to get their child vaccinated was 27.3% in 2018 and 21.7% in 2019. The prevalence of personally knowing anyone who has had a serious long-term side effect from a vaccine was 14.9% in 2018 and 13.5% in 2019. Finally, the prevalence of not considering the child’s doctor or health provider as the most trusted source of information about childhood vaccines was 17.3% in 2018 and 14.4% in 2019 (Fig 1). Responses to the question “overall how hesitant about childhood shots would you consider yourself to be” were strongly associated with responses to the other 5 VH questions (Fig 2). Among children with a parent reporting being somewhat or very hesitant about childhood shots, 63.2% had a parent reporting concerns about serious, long-term side effects impacting their decision to get the child vaccinated, whereas this percentage was 11.7% among those with a parent not at all or not that hesitant. Likewise, the prevalence of the other VH constructs was low but not absent among those self-reporting as not at all or not that hesitant about childhood shots (Fig 2). FIGURE 2 Association of self-reported hesitancy with other VH questions, United States, 2019, NIS-Flu. Child’s age was associated with parental self-report of being hesitant about childhood shots; 17.7% of parents of children aged 13 to 17 years compared with 20.3% of parents of children aged 6 to 23 months reported being hesitant (Table 1).
The child’s age was also associated with parental concern about the number of vaccines a child gets at one time affecting their decision to vaccinate, with lower prevalence of concern among parents of children aged 5 to 17 years compared with children aged 6 to 23 months (Table 1). Parents of non-Hispanic Black children, compared with parents of non-Hispanic white children, had higher prevalence of self-reported hesitancy about childhood shots (29.4% vs 17.5%), concerns about the number of shots (22.1% vs 18.0%), and concerns about side effects (29.8% vs 19.9%); these are the largest differences in Table 1. Parents of non-Hispanic other or multiple race children also had a higher prevalence of concerns about side effects compared with parents of non-Hispanic white children (24.3% vs 19.9%). Parents in the highest income group had lower prevalence of self-reported hesitancy about childhood shots than all other income groups (15.9% vs 20.8%–25.2%), and likewise, they had lower prevalence of concern about side effects than all other income groups (19.0% vs 23.1%–25.9%; Table 1). Parents who did not report income on the survey had a higher prevalence of not following the standard schedule compared with the highest income group (8.5% vs 5.6%; Table 1). Compared with there being only 1 child in the household, having ≥3 children living in the household was associated with higher prevalence of hesitancy about childhood shots (20.8% vs 18.4%), using a nonstandard schedule (7.4% vs 5.4%), reporting knowing someone with side effects from vaccines (15.8% vs 11.8%), and the doctor not being the most trusted source for vaccine information (16.4% vs 13.6%; Table 1). Higher mother’s education (college degree) was associated with lower prevalence of self-reported hesitancy compared with mothers with less than a high school degree (16.9% vs 21.0%). 
However, higher education was generally associated with higher prevalence of concerns about the number of vaccines and side effects as well as reporting personally knowing someone with vaccine side effects and not having the child’s doctor as the most trusted source of information about vaccines (Table 1). Respondents who were not the child’s mother or father had a lower prevalence of self-reported hesitancy. There was a slightly higher prevalence of self-reported hesitancy in rural areas (21.6%) compared with urban areas (19.1%) but no differences on any other VH variable except for a higher prevalence of suburban parents reporting knowing someone affected by side effects compared with urban parents (14.1% vs 11.8%; Table 1). All 6 VH variables were strongly associated with child influenza vaccination coverage. Adjusted influenza vaccination coverage was 25.8 percentage points lower in the 2017–2018 season and 25.6 percentage points lower in the 2018–2019 season among children of parents who self-reported being somewhat or very hesitant about childhood shots compared with children of parents who were not at all or not that hesitant (Table 2). The APDs in vaccination coverage were of similar magnitude for the other 5 VH variables (Table 2). The APDs in vaccination coverage ranged from 18.1 to 25.8 percentage points across the 2 influenza seasons and the 6 VH variables, thus indicating lower influenza vaccination coverage of children of parents who report elements of VH defined by the 6 survey questions. 
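The predicted-marginals calculation behind adjusted prevalence differences like these can be illustrated schematically. The sketch below uses a made-up covariate and made-up coefficients, not the authors' survey-weighted SUDAAN models; it only shows the mechanics of fixing the hesitancy variable at each level for every child, averaging the predicted probabilities, and differencing the two marginals.

```python
import math
import random

def predict_prob(intercept, b_hesitant, b_covariate, hesitant, covariate):
    """Logistic model for P(child vaccinated); coefficients are illustrative only."""
    z = intercept + b_hesitant * hesitant + b_covariate * covariate
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
# Made-up 0/1 covariate for each child (standing in for the sociodemographic adjusters).
covariates = [1 if random.random() < 0.5 else 0 for _ in range(10_000)]

# Made-up "fitted" coefficients, chosen only for demonstration.
b0, b_hes, b_cov = 0.55, -1.10, 0.20

# Predicted marginal: set hesitancy to a fixed level for *every* child,
# keep the observed covariates, and average the predicted probabilities.
marginal = {
    level: sum(predict_prob(b0, b_hes, b_cov, level, c) for c in covariates) / len(covariates)
    for level in (0, 1)
}

apd = marginal[1] - marginal[0]  # adjusted prevalence difference
print(f"not hesitant: {marginal[0]:.3f}  hesitant: {marginal[1]:.3f}  APD: {apd:.3f}")
```

Because both marginals are averaged over the same covariate distribution, the difference isolates the hesitancy coefficient's contribution, which is the point of reporting APDs rather than raw group differences.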
TABLE 2 Parental Hesitancy Response Prevalence by Their Child's Influenza Vaccination Status Subgroup, and Child Influenza Vaccination Coverage by Parental Hesitancy Response, Children Age 6 Months to 17 Years, United States, 2017–2018 and 2018–2019 Influenza Seasons, NIS-Flu

| Parental VH question and response | Vaccinated subgroup, 2017–2018, % (95% CI) | Unvaccinated subgroup, 2017–2018, % (95% CI) | Vaccinated subgroup, 2018–2019, % (95% CI) | Unvaccinated subgroup, 2018–2019, % (95% CI) | Coverage (≥1 dose), adjusted,a 2017–2018, % (95% CI) | Coverage (≥1 dose), adjusted, 2018–2019, % (95% CI) |
| --- | --- | --- | --- | --- | --- | --- |
| *Overall, how hesitant about childhood shots would you consider yourself to be?* | | | | | | |
| Somewhat or very hesitant | 17.1 (16.1–18.0) | 37.5 (36.0–39.1) | 12.7 (11.9–13.6) | 30.4 (28.9–32.0) | 38.2 (36.3–40.1)* | 41.4 (39.2–43.6)* |
| Not at all or not that hesitant | 82.9 (82.0–83.9) | 62.5 (60.9–64.0) | 87.3 (86.4–88.1) | 69.6 (68.0–71.1) | 63.9 (62.8–65.1)* | 67.0 (65.9–68.0)* |
| APD | | | | | −25.8 | −25.6 |
| *Is child administered vaccines following a standard schedule, or some other schedule, such as the Sears schedule?* | | | | | | |
| Standard schedule | 96.2 (95.7–96.6) | 90.4 (89.5–91.2) | 96.1 (95.6–96.5) | 90.3 (89.3–91.2) | 58.9 (57.8–59.9)* | 63.5 (62.5–64.4)* |
| Some other schedule | 3.8 (3.4–4.3) | 9.6 (8.8–10.5) | 3.9 (3.5–4.4) | 9.7 (8.8–10.7) | 34.1 (30.6–37.7)* | 39.7 (36.0–43.4)* |
| APD | | | | | 24.7 | 23.7 |
| *Did concerns about the number of vaccines child gets at one time impact your decision to get child vaccinated?* | | | | | | |
| Yes | 16.6 (15.7–17.5) | 31.2 (29.7–32.6) | 14.6 (13.8–15.5) | 26.3 (24.9–27.7) | 41.4 (39.4–43.4)* | 47.3 (45.2–49.5)* |
| No | 83.4 (82.5–84.3) | 68.8 (67.4–70.3) | 85.4 (84.5–86.2) | 73.7 (72.3–75.1) | 62.0 (60.9–63.1)* | 65.4 (64.4–66.5)* |
| APD | | | | | −20.6 | −18.1 |
| *Did concern about serious, long-term side effects impact your decision to get child vaccinated?* | | | | | | |
| Yes | 19.0 (18.1–20.0) | 38.3 (36.8–39.8) | 14.8 (14.0–15.7) | 32.9 (31.4–34.4) | 40.6 (38.7–42.4)* | 43.2 (41.2–45.2)* |
| No | 81.0 (80.0–81.9) | 61.7 (60.2–63.2) | 85.2 (84.3–86.0) | 67.1 (65.6–68.6) | 63.6 (62.4–64.7)* | 67.2 (66.2–68.3)* |
| APD | | | | | −23.0 | −24.1 |
| *Do you personally know anyone who has had a serious, long-term side effect from a vaccine?* | | | | | | |
| Yes | 9.5 (8.8–10.3) | 22.1 (20.8–23.5) | 8.8 (8.1–9.5) | 21.2 (19.8–22.5) | 37.7 (35.3–40.2)* | 41.4 (38.8–44.1)* |
| No | 90.5 (89.7–91.2) | 77.9 (76.5–79.2) | 91.2 (90.5–91.9) | 78.8 (77.5–80.2) | 60.7 (59.6–61.8)* | 65.2 (64.2–66.2)* |
| APD | | | | | −23.0 | −23.8 |
| *Is child's doctor or health provider your most trusted source of information about childhood vaccines?* | | | | | | |
| Yes | 87.4 (86.5–88.2) | 76.6 (75.1–77.9) | 89.3 (88.6–90.0) | 79.5 (78.1–80.8) | 60.5 (59.4–61.6)* | 64.7 (63.7–65.7)* |
| No | 12.6 (11.8–13.5) | 23.4 (22.1–24.9) | 10.7 (10.0–11.4) | 20.5 (19.2–21.9) | 41.9 (39.5–44.2)* | 45.8 (43.3–48.3)* |
| APD | | | | | 18.6 | 18.9 |

a Adjusted estimates are based on multivariable logistic regression models, 1 model for each hesitancy question included, with vaccination status as the dependent variable and the hesitancy question and the following variables as independent variables to test for the independent effects of the hesitancy question on vaccination status: child's age, child's race and ethnicity, household income, number of children in the household, mother's education, respondent relationship to the child, and metropolitan statistical area of residence (categories of the sociodemographic variables are the same as those in Table 1).

\* Vaccination coverage estimates are significantly different, P < .05, when comparing the 2 groups defined by the hesitancy question responses.

VH varied widely between states according to the 2019 estimates for the "overall how hesitant about childhood shots" question, with prevalence of parents reporting being somewhat or very hesitant ranging from 12.9% (95% confidence interval [CI]: 9.2–17.9) in Vermont to 25.4% (95% CI: 20.7–30.9) in Mississippi. These state-level VH estimates were inversely correlated with the state-level final influenza vaccination coverage estimates published on FluVaxView for the 2018–2019 influenza season (Fig 3).12

FIGURE 3 State-level parental VH and state-level influenza vaccination coverage, children age 6 months to 17 years, United States, 2018–2019 influenza season, NIS-Flu.

Approximately one-fifth of children in the United States had a parent reporting they were hesitant about childhood shots in 2019.
Similar prevalence of VH has been found in some other studies but was higher than in a recent US study.21–23 The proportion varied between 2018 and 2019, supporting the need to continuously monitor VH, as pointed out in the literature.8,9 The associations shown in this study between the VH variables and child influenza vaccination coverage may suggest a role for reduction in VH in increasing influenza vaccination coverage.24 However, even among children of parents who reported being vaccine hesitant, 34% to 47% were vaccinated against influenza. The causal relationship between VH and other barriers to vaccination and decision-making is complex. Provider recommendation has been found to be associated with higher child influenza vaccination coverage.25 Resources are available to help providers talk with vaccine-hesitant parents.26–28

We also found an association of state-level estimates of parental VH with state-level child influenza vaccination coverage. As shown in the literature, there are geographical pockets of VH.4 The association of state-level parental hesitancy variables with vaccination coverage could be an avenue for future study, taking into account possible confounding and state-level vaccination program variables. Although we could not reliably estimate levels of local-area VH in this study, the fact that the state-level estimates of VH show variability and association with vaccination coverage suggests that it could be worth exploring variability across smaller geographic areas and identifying differences in vaccine confidence at a more granular level. Although many of the sociodemographic variables showed some group differences for the 6 VH questions, most differences were small yet statistically significant because of our large sample size.
The exceptions to the small differences were the substantially higher prevalence of hesitancy about childhood shots (11.9 percentage points) and prevalence of self-reported concerns about serious, long-term side effects (9.9 percentage points) among parents of Black compared with white children. Racial disparities in influenza vaccination coverage have long persisted in the United States, among both adults and children.29,30  Early studies of reasons for nonvaccination showed racial differences in belief in misinformation, such as the influenza vaccination causing influenza.16 Examining the interrelationship of the 6 VH module variables emphasized the complexity of the VH construct. For the most part, parents who self-identified as being “hesitant about childhood shots” selected responses indicating aspects of VH for the other 5 questions. Yet there were parents who did not self-identify as being “hesitant about childhood shots” but did have concerns about vaccines, used alternate vaccine schedules, and did not consider their child’s doctor as the most trusted source of information. This is consistent with the findings from the cognitive evaluation of these questions, in which parents’ interpretation of the term “hesitant” related to their overall perception of the benefits and/or risks of childhood vaccination.10  Thus, although a parent may indicate they have specific concerns in regard to vaccines (ie, the number of vaccines their child receives at once), they might still weigh the benefits of vaccination as greater than the risks and thus not identify as vaccine hesitant. This study is subject to several limitations. 
Influenza vaccination was parent-reported, so there may be reporting bias; authors of some studies have shown that parents over-report child influenza vaccination coverage.30,31 There may be an upward bias in parents reporting they personally know someone who has had a serious, long-term side effect from a vaccine, because serious side effects are rare according to vaccine safety data.32 The survey weighting adjustments may not eliminate all bias from using incomplete sample frames that excluded households with no telephones or only landline telephones. The response rate for the survey was low; hesitancy prevalence and vaccination coverage may differ between respondents and nonrespondents, and survey weighting may not adequately control for these differences. The VH module questions were not influenza-specific but referred to all vaccines; parents may be hesitant about some vaccines more than others.33 Researchers of future studies with the NIS data can examine the association of VH with receipt of childhood vaccines other than influenza. In this study, we did not examine the interaction of VH with other barriers to vaccination such as cost, access to care, or lack of convenience.

One in 5 children in the United States has a vaccine-hesitant parent, and VH has a strong negative association with childhood influenza vaccination coverage. Consistently monitoring changes in VH, including socioeconomic differences in VH, could inform immunization programs in targeting interventions, provide resources to facilitate provider-patient vaccine conversations, and ultimately increase confidence in vaccinations and improve vaccination coverage to protect children from disease.

The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Dr Santibanez made substantial contributions to the conception and design, analyses, and interpretation of the data and drafting of the article; Drs Nguyen, Srivastav, and Bhatt made substantial contributions to the interpretation of the data; Drs Scanlon, Greby, Singleton, and Ms Fisher made substantial contributions to the acquisition and interpretation of the data; and all authors revised the article critically for intellectual content, approved the final version to be published, and agree to be accountable for all aspects of the work.

FUNDING: No external funding.

Abbreviations: APD, adjusted prevalence difference; CI, confidence interval; NCHS, National Center for Health Statistics; NIS, National Immunization Survey; VH, vaccine hesitancy.

## References

1. Coombes R. Europe steps up action against vaccine hesitancy as measles outbreaks continue. BMJ. 2017;359:j4803
2. Quinn SC, Jamison AM, Freimuth VS. Measles outbreaks and public attitudes towards vaccine exemptions: some cautions and strategies for addressing vaccine hesitancy. Hum Vaccin Immunother. 2020;16(5):1050–1054
3. McDonald R, Ruppert PS, Souto M, et al. Notes from the field: measles outbreaks from imported cases in orthodox Jewish communities - New York and New Jersey, 2018–2019. MMWR Morb Mortal Wkly Rep. 2019;68(19):444–445
4. Patel M, Lee, Clemmons NS, et al. National update on measles cases and outbreaks - United States, January 1–October 1, 2019. MMWR Morb Mortal Wkly Rep. 2019;68(40):893–896
5. Bedford H, Attwell K, Danchin M, Marshall H, Corben P, J. Vaccine hesitancy, refusal and access barriers: the need for clarity in terminology. Vaccine. 2018;36(44):6556–6558
6. Merriam-Webster Dictionary. Available at: https://www.merriam-webster.com/dictionary/hesitant. Accessed February 10, 2020
7. Enkel SL, Attwell K, Snelling TL, Christian HE. 'Hesitant compliers': qualitative analysis of concerned fully-vaccinating parents. Vaccine. 2018;36(44):6459–6463
8. Salmon DA, Dudley MZ, Glanz JM, Omer SB. Vaccine hesitancy: causes, consequences, and a call to action. Vaccine. 2015;33(suppl 4):D66–D71
9. WHO. Improving vaccination demand and addressing hesitancy. 2019. Available at: https://www.who.int/immunization/programmes_systems/vaccine_hesitancy/en/. Accessed February 7, 2020
10. Scanlon P, Jamoom E. The Cognitive Evaluation of Survey Items Related to Vaccine Hesitance and Confidence for Inclusion on a Series of Short Question Sets. Hyattsville, MD: NCHS; 2019
11. Hill HA, Singleton JA, Yankey D, Elam-Evans LD, Pingali SC, Kang Y. Vaccination coverage by age 24 months among children born in 2015 and 2016 - National Immunization Survey-Child, United States, 2016–2018. MMWR Morb Mortal Wkly Rep. 2019;68(41):913–918
12. Centers for Disease Control and Prevention. FluVaxView, influenza vaccination coverage. 2019. Available at: https://www.cdc.gov/flu/fluvaxview/index.htm. Accessed April 30, 2020
13. Goss MD, Temte JL, Barlow S, et al. An assessment of parental knowledge, attitudes, and beliefs regarding influenza vaccination. Vaccine. 2020;38(6):1565–1571
14. Lama Y, Hancock GR, Freimuth VS, Jamison AM, Quinn SC. Using classification and regression tree analysis to explore parental influenza vaccine decisions. Vaccine. 2020;38(5):1032–1039
15. Paterson P, Chantler T, Larson HJ. Reasons for non-vaccination: parental vaccine hesitancy and the childhood influenza vaccination school pilot programme in England. Vaccine. 2018;36(36):5397–5401
16. Santibanez TA, Kennedy ED. Reasons given for not receiving an influenza vaccination, 2011-12 influenza season, United States. Vaccine. 2016;34(24):2671–2678
17. Quinn SC, Jamison AM, An J, Hancock GR, Freimuth VS. Measuring vaccine hesitancy, confidence, trust and flu vaccine uptake: results of a national survey of White and African American adults. Vaccine. 2019;37(9):1168–1173
18. Opel DJ, Mangione-Smith R, Taylor JA, et al. Development of a survey to identify vaccine-hesitant parents: the parent attitudes about childhood vaccines survey. Hum Vaccin. 2011;7(4):419–425
19. Miller K, Willson S, Chep V, JL. Cognitive Interviewing Methodology. Hoboken, NJ: Wiley and Sons; 2014
20. Centers for Disease Control and Prevention. About the National Immunization Surveys (NIS). 2019. Available at: https://www.cdc.gov/vaccines/imz-managers/nis/about.html. Accessed April 30, 2020
21. A, van Esso D, Del Torso S, et al. Vaccine confidence among parents: large scale study in eighteen European countries. Vaccine. 2020;38(6):1505–1512
22. Opel DJ, Taylor JA, Zhou C, Catz S, Myaing M, Mangione-Smith R. The relationship between parent attitudes about childhood vaccines survey scores and future child immunization status: a validation study. JAMA Pediatr. 2013;167(11):1065–1071
23. Kempe A, Saville AW, Albertin C, et al. Parental hesitancy about routine childhood and influenza vaccinations: a national survey. Pediatrics. 2020;146(1):e20193852
24. Hofstetter AM, Simon TD, Lepere K, et al. Parental vaccine hesitancy and declination of influenza vaccination among hospitalized children. Hosp Pediatr. 2018;8(10):628–635
25. Kahn KE, Santibanez TA, Zhai Y, Bridges CB. Association between provider recommendation and influenza vaccination status among children. Vaccine. 2018;36(24):3486–3497
26. Shen SC, Dubey V. Addressing vaccine hesitancy: clinical guidance for primary care physicians working with parents. Can Fam Physician. 2019;65(3):175–181
27. Centers for Disease Control and Prevention. Provider resources for vaccine conversations with parents. 2020. Available at: https://www.cdc.gov/vaccines/partners/childhood/professionals.html. Accessed February 7, 2020
28. Centers for Disease Control and Prevention. Vaccinate with confidence. 2020. Available at: https://www.cdc.gov/vaccines/partners/vaccinate-with-confidence.html. Accessed February 7, 2020
29. Lu PJ, O'Halloran A, Bryan L, et al. Trends in racial/ethnic disparities in influenza vaccination coverage among adults during the 2007–08 through 2011–12 seasons. Am J Infect Control. 2014;42(7):763–769
30. Santibanez TA, Grohskopf LA, Zhai Y, Kahn KE. Complete influenza vaccination trends for children six to twenty-three months. Pediatrics. 2016;137(3):e20153280
31. Brown C, Clayton-Boswell H, Chaves SS, et al.; New Vaccine Surveillance Network (NVSN). Validity of parental report of influenza vaccination in young children seeking medical care. Vaccine. 2011;29(51):9488–9492
32. Centers for Disease Control and Prevention. Safety information by vaccine. 2020. Available at: https://www.cdc.gov/vaccinesafety/vaccines/index.html. Accessed February 10, 2020
33. Siddiqui M, Salmon DA, Omer SB. Epidemiology of vaccine hesitancy in the United States. Hum Vaccin Immunother. 2013;9(12):2643–2648

## Competing Interests

POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.

FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
# MATLAB: Signal Processing – Cross-Correlation to Obtain Lag Times

I have written code that produces a graph giving the cross-correlation coefficient (`xcorr` with the `'coeff'` option) of two stochastic electromyogram signals, sEMGL and sEMGbR:

```matlab
x = sEMGL;
y = sEMGbR;
[c, lags] = xcorr(x, y, 'coeff');
figure(8);
plot(lags, c)    % lags on the x-axis, coefficients on the y-axis
```

My question is how to write code in the Editor so that in the Command Window I can type `time lag =` to obtain this scalar value in milliseconds. A secondary question is how to find the derivative of each signal in order to calculate the rate of change in each signal: to evaluate whether the time lag between the signals varies in relation to the rate of change in each signal.

Answer: find the lag at which the absolute cross-correlation peaks and scale it by the sample period:

```matlab
dt = 1e-3;                              % sample period (1 kHz)
x = randn(100, 1);
y = [zeros(10, 1); x(1:end-10)];        % y lags x by 10 samples
[c, lags] = xcorr(y, x, 40, 'coeff');   % maximum lag of 40 samples
[~, index] = max(abs(c));
fprintf('Time lag = %2.3f seconds\n', lags(index)*dt)
```
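For readers without MATLAB, the same lag estimate can be sketched in plain Python. The `xcorr` helper below is a simplified, unnormalized stand-in for MATLAB's `xcorr`, so the peak location (and hence the lag) is the same even though the values are not `'coeff'`-scaled:

```python
import random

# Pure-Python sketch of the cross-correlation lag estimate.
# x is a random signal; y is x delayed by 10 samples, as in the MATLAB answer.
random.seed(0)
dt = 1e-3                          # sample period in seconds (1 kHz, assumed)
n = 100
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0] * 10 + x[:-10]           # y lags x by 10 samples

def xcorr(a, b, maxlag):
    """Unnormalized cross-correlation of a against b for lags -maxlag..maxlag."""
    out = []
    for lag in range(-maxlag, maxlag + 1):
        s = sum(a[i + lag] * b[i] for i in range(len(b)) if 0 <= i + lag < len(a))
        out.append((lag, s))
    return out

# The lag with the largest |correlation| estimates the delay of y behind x.
lag, _ = max(xcorr(y, x, 40), key=lambda t: abs(t[1]))
print(f"Time lag = {lag * dt:.3f} seconds")
```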
# Thread: I need to HTML-ize this

1. ## I need to HTML-ize this

I have an image that I'm having trouble coming up with alt text for. Any ideas would be appreciated. http://picpaste.com/image009-JLRWeRmY.png

2. You might need to give us a few more clues here. For a start, what is the image?

3. A Mathematical Formula Showing [...] Else you could spell out the whole equation in words for the ALT, e.g. e, equals, mc squared. In either case this is where a full supportive text description and probably a title attribute might help.

4. That's impressive. Am I alone in being unable to see an image here? Am I missing something obvious? (Wouldn't be the first time!)

5. Attachment 57674 ^^ that image

6. Sorry about that! It was showing up fine on the "preview post" page. Let's try this:

7. lawlz, can we do LaTex online?? :) Since alt's should really be no longer than 100 characters, only add an alt if you can get a mathematician to concisely explain that formula. It looks possible. alt="A mathematical formula showing..." But if this is like on Wikipedia where people are expected to have the ability to actually read or even copy-paste it, you'll need to see if you can make it real text somehow. Frankly, even with Ruby annotation (which wouldn't be correct usage), you're not going to get it. Check out Wikipedia's solution which uses sup and sub tags, with an image overlaid (view the page with images blocked to see the formulas). This might be the best you can do, and you may need a mathematician to write it for you or make sure you write it correctly. You can use numerical character entities to type the Greek letters. Someone without those fonts is just screwed. In any case if you do Gilder-Levin with real formulas underneath, you'll have a CSS background and no alt text, and alt won't be needed either, so, cool. Code: alt="equation to calculate the WTF?! of a moment"> 9.
Ewww, lol. So using the Wiki method, this equation would be: F^Eod = sum_i Inb^Eo_i A^d_i/sum_d A^d_i Looks like one of my old ssh passwords.

10. Well like I said, use <sup> and <sub> tags. Those are valid tags and should be used for precisely this reason. E=MC<sup>2</sup> H<sub>2</sub>O

11. Originally Posted by Stomme poes: Well like I said, use <sup> and <sub> tags. Those are valid tags and should be used for precisely this reason. E=MC<sup>2</sup> H<sub>2</sub>O
That doesn't help for symbols like the Sigma with the "i" directly centered under it. I'm looking at the MathML stuff now. It sure is complicated for a math-free person like myself.

12. Would this be of any help to you? http://www.math.union.edu/~dpvc/jsmath/

13. I ended up getting one of the engineers here to write out a text explanation of what the equation does (equations actually, since there were two), and that ended up being acceptable to the accessibility people. But I'm definitely exploring all the links provided here for next time.

14. It's a shame that you've come to this conclusion, as quite recently MathJax started making the rounds and almost overnight became the favoured way of handling mathematical equations on the Internet. http://www.mathjax.org

15. That doesn't help for symbols like the Sigma with the "i" directly centered under it.
There's usually always a "straight" way to write these things too, which is why I mentioned a mathematician might need to be involved, as I believe they know how to "plain" write these equations. I could be wrong though.

16. Originally Posted by ULTiMATE: It's a shame that you've come to this conclusion, as quite recently MathJax started making the rounds and almost overnight became the favoured way of handling mathematical equations on the Internet. http://www.mathjax.org
Except that it would get shot down on the spot as needing Javascript to function. We have an internal standard that everything on a page must have a non-JS version.

17. Buuuttt...
you can have the "straight" version I mentioned before, and let those with Javascript on get the MathJax effect. Enhancement.

18. You can always drop something in Microsoft Word and then you switch the view to HTML. It can be of help when the big things get complicated and the easy things just don't seem to want to work.

19. I gotta say, Word writes some of the worst HTML I've ever seen... well, except Outlook, that also writes horrid HTML. And Photoshop, also writes horrid HTML.
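As a footnote to the thread: applying the <sup>/<sub> approach from posts 9 and 10 to the equation in post 9 might look like the sketch below. This is illustrative markup only, not a vetted accessible rendering, and it cannot place the summation index directly under the sigma, which is exactly the limitation raised in post 11:

```html
<!-- Hypothetical plain-HTML rendering of:
     F^Eod = sum_i Inb^Eo_i A^d_i / sum_d A^d_i -->
<p>
  F<sup>Eod</sup> =
  &Sigma;<sub>i</sub> Inb<sup>Eo</sup><sub>i</sub> A<sup>d</sup><sub>i</sub>
  / &Sigma;<sub>d</sub> A<sup>d</sup><sub>i</sub>
</p>
```

The `&Sigma;` entity covers the Greek letter without font tricks, which is the "numerical character entities" suggestion from post 7.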
# Sequences and Series

Nth Term Test: if $\lim\limits_{n\rightarrow \infty}a_{n} \neq 0$, then $\sum\limits_{n=1}^{\infty}a_{n}$ diverges; if the limit is 0, the test is inconclusive.

P Series Test: $\sum\limits_{n=1}^{\infty}\frac{1}{n^p}$ converges if $p > 1$ and diverges if $p \le 1$.

Infinite Geometric Series: $\sum\limits_{n=0}^{\infty}ar^{n}$ converges to $\frac{a}{1-r}$ if $|r| < 1$ and diverges if $|r| \ge 1$.

Direct Comparison Test: if $0 \le a_{n} \le b_{n}$ and $\sum b_{n}$ converges, then $\sum a_{n}$ converges; if $\sum a_{n}$ diverges, then $\sum b_{n}$ diverges.

Limit Comparison Test: if $a_{n}, b_{n} > 0$ and $\lim\limits_{n\rightarrow \infty}\frac{a_{n}}{b_{n}} = c$ with $0 < c < \infty$, then $\sum a_{n}$ and $\sum b_{n}$ either both converge or both diverge.

Integral Test: if $f$ is positive, continuous, and decreasing with $f(n) = a_{n}$, then $\sum a_{n}$ and $\int_{1}^{\infty} f(x)\,dx$ either both converge or both diverge.

The Ratio Test: let $L = \lim\limits_{n\rightarrow \infty}\left|\frac{a_{n+1}}{a_{n}}\right|$; the series converges absolutely if $L < 1$, diverges if $L > 1$, and the test is inconclusive if $L = 1$.
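A quick numeric sketch of the ratio test, using the assumed example a_n = 1/n! (not from the original list): the ratio a_{n+1}/a_n = 1/(n+1) tends to 0 < 1, so the series converges, and its partial sums approach e − 1:

```python
import math

# Ratio test illustration with a_n = 1/n! (assumed example).
def a(n):
    return 1 / math.factorial(n)

ratio = a(51) / a(50)                    # equals 1/51, already far below 1
partial = sum(a(n) for n in range(1, 30))  # partial sum approaches e - 1
print(ratio, partial)
```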
• anonymous: A fraction reduces to 36 if its numerator is (6x)5. What is its denominator? (Mathematics)
# Tons of it!

ISSN: 0043-8022 Publication date: 1 September 2003

## Abstract

#### Citation

(2003), "Tons of it!", Work Study, Vol. 52 No. 5. https://doi.org/10.1108/ws.2003.07952eab.010

### Publisher

Emerald Group Publishing Limited

Alumina do Norte do Brasil (Alunorte) has commissioned its third alumina production line, with a capacity of 825,000 tons per year. With this third line, Alunorte has a production capacity of 2,375,000 tons of alumina per year, positioning it among the five largest alumina refineries in the world. The production increase will be allocated to overseas markets. The investment in this project was approximately US$300 million. Therefore, capex per ton of additional capacity was US$364, which is very competitive in comparison to the cost of other brownfield projects around the globe.
1 answer

##### Question 1. Solve the following initial value problem over the interval from x = 0 to 0.5 with a step size h = 0.5, where y(0) = 1 and dy/dx = …: (a) using Heun's method with 2 corrector steps (calculate g for the corrector steps); (b) using the midpoint method.

## Answers

#### Similar Solved Questions

5 answers

##### Convert the integral to polar coordinates and evaluate the resulting integral. (Sketch the region first to determine the limits of integration in polar coordinates.)

1 answer

##### Problem 11-6 NPV. Your division is considering two projects with the following cash flows (in millions): Project A … $14 $20; Project B -$15 $5. a. What are the projects' NPVs assuming the WACC is 5%? Round your answer to two decimal places. Do not round your intermediate calculations. Enter your …
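The Heun question above can be sketched numerically. Since the right-hand side dy/dx is garbled in the post, f(x, y) = −2y below is an assumed stand-in; y(0) = 1, h = 0.5, and two corrector iterations match the question as stated:

```python
# Heun's method with 2 corrector iterations for one step of an IVP.
# The ODE is garbled in the original, so dy/dx = -2*y is an assumed example.
def f(x, y):
    return -2.0 * y          # assumed right-hand side (not from the post)

x0, y0, h = 0.0, 1.0, 0.5
x1 = x0 + h

# Predictor (Euler step):
y_pred = y0 + h * f(x0, y0)

# Two corrector iterations of the trapezoidal update:
y_corr = y_pred
for _ in range(2):
    y_corr = y0 + (h / 2.0) * (f(x0, y0) + f(x1, y_corr))

print(y_pred, y_corr)   # predictor 0.0, corrected estimate 0.25
```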
5 answers

##### Find an equation of the vertical line through (6, …) in the form ax + by = c, where a, b, and c are integers with no factor common to all three, and a ≥ 0. (Simplify your answer.)

5 answers

##### Use spherical coordinates. Find the centroid of the solid that is bounded by the xz-plane and the hemispheres y = √(16 − x² − z²) and y = √(36 − x² − z²). (x̄, ȳ, z̄) = …

1 answer

##### C Visual Studios: Create a GUI application that prompts users to enter a ten (10) digit phone number, with hyphens and parentheses included. This input should be validated as a phone number adhering to the (XXX)-XXX-XXXX format. Should the user input an incorrect format, an error message should be shown.

5 answers

##### Provide the product for the following reaction: Na₂Cr₂O₇ / H₂SO₄. (The reactant and product structures in the original image are not recoverable.)

1 answer

##### Calculate the shear force on a beam if the stress is 343.00 psi at a depth of d from the top surface of the beam shown. Assume d = 4 in, b = 9 in, and h = 11 in.

1 answer

##### Claim: The standard deviation of pulse rates of adult males is more than 10 bpm. For a random sample of 149 adult males, the pulse rates have a standard deviation of 10.5 bpm. Find the value of the test statistic. (Round to two decimal places as needed.)
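For the pulse-rate claim just above, the usual test statistic for a claim about a standard deviation is the chi-square statistic. A minimal sketch using the question's values (n = 149, s = 10.5 bpm, claimed σ = 10 bpm):

```python
# Chi-square test statistic for a claim about a population standard deviation:
#   chi2 = (n - 1) * s**2 / sigma0**2
n = 149        # sample size
s = 10.5       # sample standard deviation, bpm
sigma0 = 10.0  # claimed population standard deviation, bpm
chi2 = (n - 1) * s**2 / sigma0**2
print(round(chi2, 2))   # 163.17
```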
1 answer

##### Aluminum produced in 1.00 h by the electrolysis of molten AlCl₃ if the current is 10.0 A. (The attached worked solution is garbled; its legible steps give 10 A × 3600 s = 36,000 C and a final mass of 3.36 g.)

5 answers

##### Q.2 The given vectors are solutions of a system X′ = AX. Determine whether the vectors form a fundamental set on the interval (−∞, ∞). (The vectors X₁, X₂, X₃ are garbled in the original.)

5 answers

##### Part I: Kinetic Friction Again. In this section you will measure the coefficient of kinetic friction a second way and compare it to the measurement in Part I. Using the Motion Detector, you can measure the acceleration of the block as it slides to a stop. This acceleration can be determined from the velocity-time graph. While sliding, the only force acting on the block in the horizontal direction is that of friction. From the mass of the block and its acceleration you can find the frictional force and, finally, the coefficient of kinetic friction.

1 answer

##### Practice Question 49. The following amounts relate to Amato Company for the current year: beginning inventory, $20,000; ending inventory, $28,000; purchases, $166,000; purchase returns, $4,800; and freight-out, $6,000. The amount of cost of goods sold for the period is: $153,200 $169,200 $159,200 15 1…
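The Amato Company item above is a direct formula application. A minimal sketch (freight-out is a selling expense, so it stays out of cost of goods sold):

```python
# Cost of goods sold = beginning inventory + net purchases - ending inventory.
# Freight-out is a selling expense and is excluded from COGS.
beginning = 20_000
purchases = 166_000
purchase_returns = 4_800
ending = 28_000
cogs = beginning + (purchases - purchase_returns) - ending
print(cogs)   # 153200, matching the $153,200 choice in the question
```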
1 answer

##### Table isn't complete. Required information: Problem 8-4A Preparing a bank reconciliation and recording adjustments LO P3. [The following information applies to the questions displayed below.] The following information is available to reconcile Branch Company's book balance of cash with its bank …

5 answers

##### Please explain why D) [is spontaneous regardless of reactant concentration] is wrong and give me the correct answer. Consider the following metabolic reaction: Succinyl-CoA + Acetoacetate → Acetoacetyl-CoA + Succinate, ΔG°′ = −1.2 kJ/mol. This reaction __ A) can be made nonspontaneous if [succinyl-CoA] and [acetoacetate] are increased; B) is spontaneous under standard conditions; C) is nonspontaneous under all conditions; D) is spontaneous regardless of reactant concentrations; E) Trick Question! The favorability …

1 answer

##### Been struggling with these! Please help me. 7) A mass of 5 kg is at rest on a frictionless incline which is inclined at 32 degrees. A force of 6 Newtons is applied parallel to the incline, up the incline, and another of 14 Newtons is applied perpendicular to the incline, away from the incline. What i…

5 answers

##### Write as a mixed number and simplify. $\frac{69}{9}$

1 answer

#####
HEEAtHaal "derived diffcrent Nrtca Gntky Jlualcn t ralnl} Tex 4 ~nnItuenla Menn uacky lttat In hamlrul Jaxctivation 0f malcmnally LA nuttn chtun Xchnxmurnc penes Anntty Tnu-fuldin4 anlm-le Fin hctiz Kous Lenern chrmature adldditionul informtation Cnnnurt(n FIcuae nutr mlenn bclut ACILMEL edvink Hrletetfamnt Cattn- strcptonycinl AnsMivity IHret Aenlntannlarix Hmla streplonycin resIslant Clcntn athie knclle Chek eetsitivity follows the inheritance Ansier not shown (nn UccutEhc Nntocnn Unsp? HEEAtHaal "derived diffcrent Nrtca Gntky Jlualcn t ralnl} Tex 4 ~nnItuenla Menn uacky lttat In hamlrul Jaxctivation 0f malcmnally LA nuttn chtun Xchnxmurnc penes Anntty Tnu-fuldin4 anlm-le Fin hctiz Kous Lenern chrmature adldditionul informtation Cnnnurt(n FIcuae nutr mlenn bclut ACILMEL ... 5 answers ##### Calculationsi For cach trial, calculate the following: Show complete calculations for one trial: Calculate pressure exerted by the water column mmHg (heights_ inversely proportional density) hu UHvDetermine pressure of dry hydrogen gas collected (gas pressure collected over water)Ptotal Pu Pwsler PcoluninCaleulate moles of hydrogen gas produced (ideal gas law; PV-nRT)Calculate mass of metal reacted (stoichiometrw) Writc balanced equation between the meal and HCI and determine the molar ratio of Calculationsi For cach trial, calculate the following: Show complete calculations for one trial: Calculate pressure exerted by the water column mmHg (heights_ inversely proportional density) hu UHv Determine pressure of dry hydrogen gas collected (gas pressure collected over water) Ptotal Pu Pwsler ... 1 answer ##### Hypothesis Testing 02 (when o is unknow b) A research engineer for a tire manufacturer is investigating the tire li... Hypothesis Testing 02 (when o is unknow b) A research engineer for a tire manufacturer is investigating the tire life for a new rubber compound and has built 10 tires and tested them to end-of-life in a road test. 
The sample mean and standard deviation are: 60,139.7 km and s = 3645.94 km. Write down... 5 answers ##### 4. 3 WM +(x) the 2 W0 follouing 22 Where Where Where where H 70 Jano M 4e indicakd interval 4. 3 WM +(x) the 2 W0 follouing 22 Where Where Where where H 70 Jano M 4e indicakd interval... 5 answers ##### Solve the initial-valuc problem y" + 3y' 4y =Tu(t - 1) with y(0) = 0, Y(0) = 0. Solve the initial-valuc problem y" + 3y' 4y =Tu(t - 1) with y(0) = 0, Y(0) = 0.... 5 answers ##### AluminumCopper 0.0582Brass 0.0526Imatal (kg)0,0185Ija? " altr (kg)0.19240.18290.2036T omatal (C)95.7094.5889.7eTojwna (PC) T; (PC) motal U/kg K) Crtondard - Ucue (/kg K) % Error23823.3923.3925.3026"26"900385380 Aluminum Copper 0.0582 Brass 0.0526 Imatal (kg) 0,0185 Ija? " altr (kg) 0.1924 0.1829 0.2036 T omatal (C) 95.70 94.58 89.7e Tojwna (PC) T; (PC) motal U/kg K) Crtondard - Ucue (/kg K) % Error 238 23.39 23.39 25.30 26" 26" 900 385 380... 1 answer ##### Using the unit circle, how do you find the value of the trigonometric function: sec(-227pi/4) ? Using the unit circle, how do you find the value of the trigonometric function: sec(-227pi/4) ?... 1 answer ##### Many asthma treatments involve inhaled medication. However, asthmatic patients are often young children, and use of... Many asthma treatments involve inhaled medication. However, asthmatic patients are often young children, and use of these inhalers can be somewhat complicated. What can parents and health care providers do to work around this problem?... 
5 answers ##### Unless otherwise dirocted, round all answvers I0 docimal placosQuottlons In a Survoy Of dally internot game tIme usage by college students sample of students provided the following data; 1.5 2,2 0,5 2 5 Find Ihe mean Of Iho data 0+04 541,541.5+2.2+2,5,3=/1.2 4,2 / 7 Find tho modian of the data N 65,212,2 < 5Find Ihe mode of Ihe data 0,0.5 L54S,2,2,2.5,3Find (he standard deviation of Ihe samplo data Entercci data St Mto Calculo tor Ht Stat n Wet 10 Cale tken pss <q1 tvom 44 € i9o F 9943 Unless otherwise dirocted, round all answvers I0 docimal placos Quottlons In a Survoy Of dally internot game tIme usage by college students sample of students provided the following data; 1.5 2,2 0,5 2 5 Find Ihe mean Of Iho data 0+04 541,541.5+2.2+2,5,3=/1.2 4,2 / 7 Find tho modian of the data N 65... 1 answer SECTION DATE GRADE Experiments tions below, classify as a combination, decomposition, single PRF-LAB QUESTIONS For each of the reactions below placement, or double replacement 1. Ca(s) + Cl (8) CaCl(s) 2CuO(s) 2. 2Cu(s) +0,($) — 2HNO3(aq) + Caso (8) 8. Ca(NO)(aq) + H.SO (aq) — 4. NH, (aq... 5 answers ##### X HHFI Which of the following is the equivalent capacitance of the configuration shown above. All the capacitors are identical, and each has capacitance C?Your answer:A) c B) cC) 1C D) 1E) 11 X HHFI Which of the following is the equivalent capacitance of the configuration shown above. All the capacitors are identical, and each has capacitance C? Your answer: A) c B) #c C) 1C D) 1 E) 11... 5 answers ##### Conster the lollowlng sample regression equalion 150 20x, where yts Ihe demand Ior Protuct (In L,OOOs} and *Is the prlce af the product (In$} If the 53, then we expect demand for Product piice 0f the good increases by Conster the lollowlng sample regression equalion 150 20x, where yts Ihe demand Ior Protuct (In L,OOOs} and *Is the prlce af the product (In \$} If the 53, then we expect demand for Product piice 0f the good increases by... 
2 answers ##### I need help answering a questions which of these statements are true?1. The lead ball and the rock have equal amounts of inertia.... 1 answer ##### QUESTION TWO A. Fiifi Auto mobile company manufactures cars and trucks. Each vehicle must be processed... QUESTION TWO A. Fiifi Auto mobile company manufactures cars and trucks. Each vehicle must be processed in the paint shop and body assembly shop. If the paint shop were only painting trucks, then 80 per day could be painted. If the paint shop were only painting cars, then 120 per day could be painted... 5 answers ##### [BS21e Int Kl0 BKKDEEDL [BS21e Int Kl0 B KKDEE DL... 5 answers ##### 7) A 2.80 gram of CaClz is dissolved in a calorimeter containing 100 grams ofH;O (c=4.18 Jg"C) The temperature of the water rosc from 20.50"C to 25.40C. What is the hcat of the reaction CaClz Cal' 2C1? b) What is the enthalpy o/ the reaction? 7) A 2.80 gram of CaClz is dissolved in a calorimeter containing 100 grams ofH;O (c=4.18 Jg"C) The temperature of the water rosc from 20.50"C to 25.40C. What is the hcat of the reaction CaClz Cal' 2C1? b) What is the enthalpy o/ the reaction?... -- 0.056402--
# What are the differences between some basic numerical root-finding methods?

I understand the algorithms and the formulae associated with numerical methods of finding roots of functions in the real domain, such as Newton's Method, the Bisection Method, and the Secant Method. Because their formulae are constructed differently, they will innately differ numerically at certain iterations. However, what are the exact advantages of each algorithm? All I know about these algorithms, other than their formulae, is:

1. Newton's Method converges quadratically.
2. The Secant Method bypasses the need to compute a derivative; however, it converges superlinearly.
3. The Bisection Method converges linearly.

In the real-world situations I have encountered (and I have encountered several), evaluating the function is by far the most expensive thing you can do. Even massive amounts of side calculation to avoid a single function evaluation are well worth the while. So the faster a method converges, the better choice it can be - provided you can meet its requirements.

1. Newton's method is great for speed, but it does require that you know the derivative, and I have yet to encounter a real-world application where this was available. That is not to say that they don't occur. But I have not been so lucky. Another problem with Newton's method is instability. If you hit a place where the function is close to flat, it may send your next iteration out beyond Pluto. And in fact, there is no guarantee of convergence. You can find it getting caught in a loop.

2. Secant Method. Well, if you can't find the tangent line because you don't know the derivative, estimate it with a secant line instead. There is a school of thought that this can be faster than Newton's method despite the slower convergence, because it only requires one new function evaluation for each iteration, while Newton's method requires two. I am not sure if I buy it, but as I said, I have no practical experience with Newton's method (though plenty of academic experience with it).
This method has exactly the same instability problems as Newton's method.

3. Bisection Method. Guaranteed convergence, provided you can straddle the root at the start. Easily understood, easily programmed, easily performed, slow as blazes. Never sends your iteration off into the wild blue yonder. But still slow as blazes. This is your fallback method when all else fails.

4. Brent's Method. No, you did not mention this one. But in practice, some variant of Brent's Method is usually what you want to use. This method combines the Secant and Bisection methods with another method called "inverse quadratic interpolation", which is like the secant method but approximates the function with an inverse quadratic instead of a line. It results in a slight improvement in convergence speed. Occasionally it fails, so the secant method is used as a back-up. Brent's method also monitors convergence, and if it gets too slow or tries to go off into the wild, the method drops in a bisection step instead to get things back on track.

The problem I've found with Brent's method is that one side of your interval converges quickly to a root, but the other side barely moves, because the Secant/Inverse Quadratic steps keep landing their iterates on the same side of the root. Brent's method eventually brings in the other side by slow Bisection. A trick I've employed to some improvement is to double the step size of the Secant/Inverse Quadratic steps in an effort to intentionally overshoot the root, thereby bringing that side in as well. Bringing in the other side of the interval by a large amount usually improves convergence speed significantly, even on the side that was converging already.

• You might try out variants of regula falsi, like the Illinois method: easier to code than Brent's method, following the same idea of preventing stalling, and thus faster than normal regula falsi.
– Lutz Lehmann Oct 5 '15 at 6:55
• This is a fabulous post, and I heavily, heavily appreciate the time and effort you put into typing this. – Alvin Nunez Oct 6 '15 at 0:36
• @Paul Sinclair, do you by chance know of a function where Newton's Method would end up getting caught in a loop, as you mentioned? – Dragonite Sep 29 '17 at 15:40
• @Dragonite - $y = 1 + |x|$ is a simple example. Of course, in this case it has no root to find and is not differentiable at one point, but the function could be modified close to 0 to solve both issues. But if your starting point is outside the modified zone, it will just flip back and forth between $1$ and $-1$. – Paul Sinclair Sep 29 '17 at 23:21
• The issue you are experiencing with Brent's method is actually an improper use of tolerance. After the last interpolating iteration is used, it should be applied again and rounded towards the midpoint by an amount less than the desired error (say half the error). This will cause the result to land on the opposite side of the root and snap the interval size down to the desired error. (Note: this marginal rounding can simply be applied to every result.) (Additional note: what you are suggesting be used instead will actually slow the convergence, whereas the suggested modification doesn't.) – Simply Beautiful Art Aug 28 at 19:08
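The trade-offs described in the answer can be seen in a toy comparison. This is illustrative code, not from the thread; the test function $x^3 - 2x - 5$ is a hypothetical choice with a single simple root near $2.0946$:

```python
def bisection(f, a, b, tol=1e-12):
    """Guaranteed linear convergence, provided f(a) and f(b) straddle a root."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "interval must straddle the root"
    n = 0
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        n += 1
        if fa * fm <= 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2, n

def newton(f, df, x, tol=1e-12, max_iter=50):
    """Quadratic convergence near a simple root, but needs the derivative."""
    for n in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, n
    raise RuntimeError("no convergence")

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Superlinear (order ~1.618); one new function evaluation per iteration."""
    f0, f1 = f(x0), f(x1)
    for n in range(1, max_iter + 1):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2, n
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    raise RuntimeError("no convergence")

f = lambda x: x**3 - 2*x - 5      # hypothetical test function, root near 2.0946
df = lambda x: 3*x**2 - 2

r_b, n_b = bisection(f, 2, 3)
r_n, n_n = newton(f, df, 2.5)
r_s, n_s = secant(f, 2, 3)
print("bisection:", r_b, n_b)
print("newton:   ", r_n, n_n)
print("secant:   ", r_s, n_s)
```

On this function, bisection to a $10^{-12}$ interval takes about 40 halvings, while Newton and secant typically finish in well under ten iterations.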
Preparing data for kinship calculations

4 months ago

Hello,

I have a merged VCF file and want to find related samples. I want to do this with plink2 --make-king-table. I have 3 questions:

A. As pruning is suggested before performing the analysis, would you also suggest filtering for MAF > 1% / > 5% before pruning, to remove rare variation?

B. The pruning result files are called "file.in" and "file.out". Am I right that the "file.in" file contains the remaining SNPs (the ones with the highest allele frequency in an LD block)?

C. Which cutoffs would you take for r-squared and for the window size when pruning? I would go with 0.01 (to be stringent) for r-squared, but I don't have any idea what a good window size could be (10 kb, 100 kb, 1000 kb).

Best, Andreas

PRUNING VCF PLINK2 KING • 198 views

Answer, 4 months ago:

Pruning is not suggested before --make-king-table. From the KING manual:

"Please do not prune or filter any 'good' SNPs that pass QC prior to any KING inference, unless the number of variants is too many to fit the computer memory, e.g., > 100,000,000 as in a WGS study, in which case rare variants can be filtered out. LD pruning is not recommended in KING."
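Once --make-king-table has produced its pairwise table, the kinship coefficients can be bucketed using the standard KING inference ranges ($>2^{-1.5}$ duplicate/MZ, $>2^{-2.5}$ first degree, $>2^{-3.5}$ second degree). Below is a hypothetical sketch, not from the post; the .kin0 column names and sample rows are assumptions for illustration, so check the header your plink2 version actually writes:

```python
import csv
import io

# Hypothetical .kin0-style table; column names are an assumption, verify
# against the header your plink2 version writes.
sample_kin0 = io.StringIO(
    "#FID1\tID1\tFID2\tID2\tNSNP\tHETHET\tIBS0\tKINSHIP\n"
    "fam1\tA\tfam1\tB\t90000\t0.160\t0.002\t0.248\n"
    "fam1\tA\tfam2\tC\t90000\t0.050\t0.040\t0.010\n"
)

def degree(kinship):
    """Standard KING kinship ranges (Manichaikul et al. 2010)."""
    if kinship > 2 ** -1.5:
        return "duplicate/MZ"
    if kinship > 2 ** -2.5:
        return "1st degree"
    if kinship > 2 ** -3.5:
        return "2nd degree"
    return "unrelated/distant"

for row in csv.DictReader(sample_kin0, delimiter="\t"):
    print(row["ID1"], row["ID2"], degree(float(row["KINSHIP"])))
```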
Why is $f(x)=\int_{1}^{x^2} \frac{\ln(xt)}{1+t}dt$ continuously differentiable and what is its derivative?

In doing some old exam questions, I came across the following problem. Let $$f:(1,\infty)\rightarrow \mathbb{R}$$ $$f(x)=\int_{1}^{x^2} \frac{\ln(xt)}{1+t}dt$$

Questions:

a) Reason as to why $f$ is continuously differentiable and find an integral-free representation of the derivative

b) Show that $$f'(x)>0$$ for $$x>1$$

I assume (b) will be trivial once (a) is solved. For (a) my approach was integrating more or less directly, then differentiating. I end up caught in a cycle of integration by parts: $$\int_{1}^{x^2} \frac{\ln(1+t)}{t}dt$$ and $$\int_{1}^{x^2} \frac{\ln(t)}{t+1}dt$$ keep coming back... I don't have anything in my toolbox (that I know of) that lets me crack this. So I assume that either I shouldn't actually integrate and then differentiate, or I'm missing something in my "integration toolbelt". What's going on here? Also, I don't actually know how to "reason as to why $f$ is continuously differentiable". Are my problems connected?

Hint: Use the fact that $$\ln(xt)=\ln x+\ln t.$$ So $$f(x)=\ln x\int_1^{x^{2}} \frac 1 {1+t}\, dt+\int_1^{x^{2}} \frac {\ln t } {1+t}\,dt$$ and the first term is $$(\ln x) (\ln (1+x^{2})-\ln 2).$$ You can now write down $$f'$$ easily. It is also quite easy to show that $$f'(x) >0$$ for $$x >1$$. I will leave that to you.

Let $$y(x)=\int_{1}^{x^2} \frac{\ln(xt)}{t+1}\,dt.$$ Therefore $$y(x) = \int_{1}^{x^2} \frac{\ln x}{t+1}\,dt + \int_{1}^{x^2} \frac{\ln t}{t+1}\,dt.$$ Hence $$y(x) = \int_{1}^{x^2} \frac{\ln t}{t+1}\,dt + \ln x \int_{1}^{x^2} \frac{1}{t+1}\,dt.$$ Now differentiate: $$\frac{dy}{dx}= \frac{2x\ln x^2}{1+x^2} + \ln x\,\frac{2x}{1+x^2} + \frac{1}{x}\int_{1}^{x^2}\frac{1}{1+t}\,dt.$$ Since all terms are positive for $x \gt 1$, we have $f'(x)\gt 0$.
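Collecting the terms from the two answers (the remaining integral evaluates to $\ln(1+x^2)-\ln 2$) gives the integral-free form $f'(x)=\frac{1}{x}\ln\frac{1+x^2}{2}+\frac{6x\ln x}{1+x^2}$, which makes both (a) and (b) easy to check numerically. A small illustrative sketch, not part of the original question:

```python
import math

def f(x, n=2000):
    """f(x) = integral from 1 to x^2 of ln(x*t)/(1+t) dt, composite Simpson."""
    a, b = 1.0, x * x
    h = (b - a) / n                  # n is even
    g = lambda t: math.log(x * t) / (1 + t)
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def fprime(x):
    """Integral-free derivative, collecting the terms from the answers above."""
    return math.log((1 + x * x) / 2) / x + 6 * x * math.log(x) / (1 + x * x)

x, h = 2.0, 1e-5
numeric = (f(x + h) - f(x - h)) / (2 * h)    # central difference
print(numeric, fprime(x))                    # the two agree closely
```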
# Pseudorandom vs quasi-random

I'm implementing an encryption algorithm that requires me to use a random number to mix up the values. However, I'm caught between whether I should use quasi-random or pseudorandom. These are the things I'm thinking about:

1. Quasi-random isn't really a random number generator by definition, but it's a good substitute for a uniform random number generator, especially in the area of sampling across a plane; mathematically speaking, it's more uniform compared to a uniform number generator.

2. Pseudorandom, on the other hand, is by definition random. But if the attacker knows the seed, it's clearly not random either. And because it is less uniform compared to quasi-random, it presents a certain bias towards a certain group of numbers.

So is randomness or the uniform distribution more valued when generating a random number? Should I choose a PRNG or a quasi-random number generator?

• If this is for crypto purposes (i.e. you really don't want the numbers to be predicted), you want a CSPRNG; if not, this is the wrong place to ask :p – SEJPM Sep 23 '17 at 9:42
• What do you mean by "quasi-random"? In what way is it an alternative to "pseudorandom"? The opposite of pseudorandom is non-deterministic random, and the difference has nothing to do with uniformity. I think you have a misconception about randomness, but I can't figure out what it is. What are you trying to implement precisely? What led you to think about quasi-random vs pseudorandom? – Gilles 'SO- stop being evil' Sep 23 '17 at 12:57
• @Gilles: the term 'quasirandom' refers to a random generator that is biased, but in a way that we like (for some reason); for example, consecutive outputs may be more spaced out than would be expected with a uniform generator. This may be of use in statistical analysis or Monte Carlo simulations; it hasn't been used much in cryptography... – poncho Sep 23 '17 at 13:10
• You seem confused with your definitions. Random simply means unpredictable.
It implies nothing whatsoever about the distribution of the output. All dice and Poisson generators are random. There is no mathematical definition of quasi-random. If you have output distributions in mind, it might be worth your while rephrasing the question. – Paul Uszak Sep 23 '17 at 21:45

I'm implementing an encryption algorithm that requires me to use a random number to mix up the values. However I'm caught in between if I should use quasi-random or pseudorandom.

It may sound counterintuitive, but what works best within an encryption algorithm does in fact depend on the details of that algorithm. It may be that, for your encryption algorithm, a uniform random distribution has a nonnegligible probability of creating a weak transform, and a tailored nonuniform ("quasirandom") distribution is better. I assume you have spent a lot of time exhaustively cryptanalyzing your encryption algorithm; what does that analysis say?

And, by the way:

And because it is less uniform compared to quasi-random it presents a certain bias towards a certain group of numbers.

Actually, that's not accurate; the definition of a uniform distribution (which is the name of the distribution that an unbiased random number generator generates) is that it doesn't have a bias. For example, consider the probability that a generator produces two consecutive outputs $7, 7$. If the generator were uniform, the probability of those two outputs would be precisely the same as the probability of producing any other two specified outputs. However, if the generator is quasirandom, the probability may be significantly smaller (even 0), if the quasirandom generator is biased away from repeating outputs.

• thanks for your reply. By the law of large numbers, a uniform RNG should be able to cover a 2D plane; however, from example code comparing quasi-random to uniform random numbers, it seems to me that the uniform random numbers have an issue covering a 2D plane properly due to their high discrepancy.
But I have realized my mistake; I was looking in the wrong places. It seems that the entropy of the RNG is most important for encryption, and quasi-random would get increasingly predictable as the number of encryptions increases. – albusSimba Sep 23 '17 at 15:55
• @albusSimba I guess that feels like you're now thinking in the right direction at least. And yes, the entropy source is very important; once you have about 128 bits of "true" entropy you put it into a CSPRNG and you should be OK for the random number generator (although reseeding the RNG with additional entropy now and then doesn't hurt). – Maarten Bodewes Sep 25 '17 at 17:10
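poncho's repeated-output example can be made concrete with a short sketch (hypothetical code, not from the thread). It compares a seeded PRNG against a golden-ratio additive recurrence, a classic low-discrepancy ("quasi-random") sequence, over ten buckets:

```python
import random

# Uniform PRNG: consecutive outputs repeat a bucket with probability ~1/10.
random.seed(1)
N = 100_000
prng = [random.randrange(10) for _ in range(N)]

# Golden-ratio additive recurrence: a low-discrepancy sequence that is
# biased away from repeats -- with 10 buckets it never repeats at all.
phi = (5 ** 0.5 - 1) / 2
quasi = [int(((i * phi) % 1) * 10) for i in range(N)]

def consecutive_repeats(seq):
    return sum(a == b for a, b in zip(seq, seq[1:]))

print(consecutive_repeats(prng))    # roughly N/10
print(consecutive_repeats(quasi))   # 0 -- the kind of "bias" poncho describes
```

This is exactly the trade-off the comments point at: the quasi-random stream is more evenly spread, but its structure makes it predictable, which is why cryptography wants a CSPRNG instead.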
# Euler's equations

1. Feb 21, 2008

### ehrenfest

1. The problem statement, all variables and given/known data

Disregard the title of this thread. Say you have a fixed coordinate system and a rotating coordinate system. Say that the rotating coordinate system rotates with angular velocity $$\vec{\omega}$$. Is it always true that the components of $$\omega$$ will be the same in both coordinate systems? If not, when is it true?

2. Relevant equations

3. The attempt at a solution

2. Feb 21, 2008

### pam

If I understand what you wrote, omega would be zero in the rotating system.

3. Feb 21, 2008

### ehrenfest

No. ##\omega## is definitely not zero in the fixed frame. Therefore it cannot be zero in the rotating frame. There is no way you can transform a nonzero vector in the fixed frame into a zero vector in the rotating frame. Remember that ##\omega## is the angular velocity vector that describes the rotation of the rotating coordinate system w.r.t. the fixed coordinate system. I am asking about how this vector transforms into the rotating coordinate system.

4. Feb 21, 2008

### D H

Staff Emeritus

No. It is a trivial matter to construct a counter-example. What is true is that if there is no nutation or precession (i.e., rotation about a fixed axis), the angular velocity vector will have a constant direction in both coordinate frames. If the rotation rate is constant as well, the angular velocity vector will be constant in both frames.

Suppose there is nutation or precession (i.e., the angular velocity vector is not constant). What can you say about the derivative of the angular velocity vector as observed in the inertial and rotating frames?

5. Feb 21, 2008

### ehrenfest

It is the same. Firstly, can you actually give me a counterexample? Secondly, is it true that if the fixed frame ever coincides with the rotating frame, then the components of omega are always the same in both systems (I think that follows obviously from the statement above)?

Last edited: Feb 21, 2008

6.
Feb 21, 2008

### D H

Staff Emeritus

##\dot \omega## is the same vector whether observed in the rotating or inertial frame. Note well: This is not the case for most vectors. In general, the derivative of a vector as observed in the inertial frame and the derivative as observed in the rotating frame are different vectors. For example, consider a point fixed in the rotating frame some distance $r$ away from the rotation axis. The time derivative of the location of this fixed point is obviously zero in the rotating frame and equally obviously $\omega r$ in the inertial frame.

While the time derivative of the angular velocity vector, $\dot{\vec{\omega}}$, is the same vector in both frames, that does not mean it has the same coordinates in both frames.

Not if this is homework.

7. Feb 21, 2008

### ehrenfest

Thanks. Is what I said after "secondly" true?

8. Feb 21, 2008

### D H

Staff Emeritus

You still haven't said whether this is homework. I guess saying a qualified "yes" isn't offering too much help. Qualified meaning constant angular velocity. If the angular velocity is not constant, the statement is obviously not true.

9. Feb 21, 2008

### ehrenfest

This is not homework. I think it is true even if the angular velocity is not constant. The derivative of the angular velocity vector measured in both coordinate systems is the same, so if omega has the same components in both coordinate systems at time t, it must always have the same components in both coordinate systems, even if those components are changing.

10. Feb 21, 2008

### D H

Staff Emeritus

Here's a counterexample: Build a second reference frame from another frame by rotating 90 degrees about the x axis. Then $(\hat x_2,\hat y_2,\hat z_2) = (\hat x_1,\hat z_1,-\hat y_1)$. Now set this second frame in rotation about the y' axis. The angular velocity vector is $[0,0,\omega]^T$ in the non-rotating frame, $[0,\omega,0]^T$ in the rotating frame.
Counterexample again: Consider a cylinder with uniform mass distribution but a non-spherical inertia tensor. Define a coordinate system based on the cylinder. This will be our rotating frame. Set the cylinder spinning in space (no external torques) such that the angular velocity has non-zero components along and normal to the cylinder axis. I can always define an inertial frame that is instantaneously co-aligned with the rotating frame at some given point in time. At this point in time, the angular velocity vector tautologically has the same components in the inertial and rotating frames. By construction, the rotating frame is tumbling. Therefore, at some other point in time the angular velocity vector will not have the same components in the inertial and rotating frames.
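D H's first counterexample is easy to verify numerically. A small illustrative sketch, not from the thread:

```python
# Frame 2 is frame 1 rotated 90 degrees about x1, so (x2, y2, z2) = (x1, z1, -y1).
# The same angular-velocity vector [0, 0, w] then has different components in frame 2.
w = [0.0, 0.0, 2.5]                 # components in the non-rotating frame 1

basis2 = [                          # frame-2 basis vectors in frame-1 coordinates
    [1.0, 0.0, 0.0],                # x2 = x1
    [0.0, 0.0, 1.0],                # y2 = z1
    [0.0, -1.0, 0.0],               # z2 = -y1
]

dot = lambda a, b: sum(p * q for p, q in zip(a, b))
w_frame2 = [dot(e, w) for e in basis2]   # project w onto each frame-2 axis
print(w_frame2)                          # [0.0, 2.5, 0.0], as D H stated
```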
# Unsetting messages The following code suggests that Mathematica stores the messages in cache: (*1*)Remove[VariationalD] (*2*)Messages[VariationalD] (*3*)Message[VariationalD::args, OPS] (*4*)Messages[VariationalD] The output of the second line is {} as expected for a symbol that has just been removed. The output of the third line however is not the expected "-- Message text not found --". Instead, Mathematica seems to have pulled that message from some cache. In any case, how can I reset the message cache (without resetting the whole installation)? Update I apologize because my previous example did not really work. In any case, the following is a better example. This code was executed on a freshly launched kernel. before = Messages[General]; Message[General::dummies, OPS]; after = Messages[General]; Complement[after, before] The message General::dummies was not part of the original list of messages attached to General. Following the documentation, the next place to check would be $NewMessage, but Messages[$NewMessage] is empty. So, where did General::dummies come from? - You could do Messages[VariationalID] =. ... BTW I can't reproduce the problem you mention. –  Szabolcs Nov 15 '13 at 22:10 @Szabolcs: Messages[VariationalID] =. did not work. The message might need to be loaded into the cache by loading the package. Once in the cache, it seems to stay there. –  Hector Nov 15 '13 at 22:20 I had a typo which you copied and pasted. –  Szabolcs Nov 15 '13 at 22:23 Mathematica loads many of its built-in messages from the file FileNameJoin[{$InstallationDirectory,"SystemFiles","Kernel","TextResources","English","Messages.m"}] As I understand it is the "message cache" you seek. The search in this file can be controlled through the $NewMessage variable. 
By default its value is Automatic: ClearAttributes[$NewMessage, {Protected, ReadProtected}] Definition@$NewMessage $NewMessage = Automatic By unsetting it you can disable search in the above-mentioned file as well as in other files in that folder: $NewMessage =. Also, you can be interested in this discussion: "How to find a specific error message?" - This is not a bug. There is General::args and VariationalD::args. The former message is a general one that can be issued for any symbol: Message[boo::args, "something"] boo::args: something called with invalid parameters. This can be overridden by setting a message for that particular symbol: In[5]:= boo::args = "custom message" In[6]:= Message[boo::args] During evaluation of In[6]:= boo::args: custom message You can simply remove all messages associated with a symbol using Messages[boo] =. Now try again: Message[boo::args, "something"] boo::args: something called with invalid parameters. This works the same way for VariationalD. Quoting from the Message documentation, Given a message specified as symbol::tag, Message first searches for messages symbol::tag::Subscript[lang, i] for each of the languages in the list $Language. If it finds none of these, it then searches for the actual message symbol::tag. If it does not find this, it then performs the same search procedure for General::tag. If it still finds no message, it applies any value given for the global variable $NewMessage to symbol and "tag". - In the documentation of $NewMessage, I found "A typical value for$NewMessage might be Function[ToExpression[FindList[files,ToString[MessageName[#1,#2]]]]]." This tells me that Mathematica can be set up to look into your packages to find messages. –  Hector Nov 16 '13 at 2:39
# Math Help - Standardizing A Random Variable

1. ## Standardizing A Random Variable

Hello,

To standardize a random variable that is normally distributed, it makes absolute sense to subtract the expected value, $\mu$, from each value that the random variable can assume--it shifts all of the values such that the expected value is centered at the origin. But how does dividing by the standard deviation play a role in the standardization of a random variable? That part is not as intuitive to me as subtracting $\mu$.

2. ## Re: Standardizing A Random Variable

I am not sure if I got your question right. A standard normally distributed random variable has variance $1$. Let us consider a r.v. $Y$ with variance $\sigma^2$ and mean $\mu$. The r.v. $\frac{1}{\sigma}(Y-\mu)$ is now standard normally distributed: subtracting $\mu$ makes the mean $0$ (since expectation is linear), and since $\text{Var}(aY)=a^2\text{Var}(Y)$, in order to get a variance of $1$ we therefore have to divide by the standard deviation.

3. ## Re: Standardizing A Random Variable

Not wanting to create another topic for this, so I'm just going to throw the question out here. What's the actual unit of measurement that corresponds to a variable that's been standardized?
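The transformation discussed above is $z=(Y-\mu)/\sigma$. A quick illustrative sketch (not from the thread) checks it empirically; it also bears on the question in post 3: the units of $Y$, $\mu$, and $\sigma$ are the same and cancel in the division, so a standardized value is a dimensionless count of standard deviations.

```python
import random
import statistics

# Draw from N(mu, sigma^2), apply z = (y - mu) / sigma, and check that the
# standardized sample lands near mean 0 and standard deviation 1.
random.seed(7)
mu, sigma = 50.0, 8.0
ys = [random.gauss(mu, sigma) for _ in range(100_000)]

zs = [(y - mu) / sigma for y in ys]   # centre, then rescale the spread

print(statistics.mean(zs))            # close to 0
print(statistics.stdev(zs))           # close to 1
```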
# Need help evaluating a limit

## Homework Statement

Evaluate the limit ##\lim_{x\to0} \frac{1}{x}(\frac{1}{tanx}-\frac{1}{x}) ## using Taylor's formula. (Hint: ##\frac{1}{1+c}=\frac{1-c^2+c^2}{1+c} ## may be useful)

## The Attempt at a Solution

I began by substituting ##tanx## with ##x+\frac{x^3}{3}+x^3ε(x)##, where ε tends to zero as x approaches 0.

##\frac{1}{x}(\frac{1}{tanx}-\frac{1}{x})=\frac{1}{x}(\frac{x-tanx}{xtanx})=\frac{1}{x}(\frac{x-x-\frac{x^3}{3}-x^3ε(x))}{x(x+\frac{x^3}{3}+x^3ε(x))})=\frac{1}{x}(\frac{-\frac{x^3}{3}-x^3ε(x))}{x(x+\frac{x^3}{3}+x^3ε(x))})=\frac{-\frac{x^3}{3}-x^3ε(x))}{x^2(x+\frac{x^3}{3}+x^3ε(x))}=\frac{-\frac{x^3}{3}-x^3ε(x))}{x^3+\frac{x^4}{3}+x^4ε(x))}=\frac{-\frac{1}{3}-ε(x))}{1+\frac{x}{3}+xε(x))}## →-1/3 as x→0

Last edited:

stevendaryl

Staff Emeritus

## Homework Statement

Evaluate the limit ##\lim_{x\to0} \frac{1}{x}(\frac{1}{tanx}-\frac{1}{x}) ## using Taylor's formula. (Hint: ##\frac{1}{1+c}=\frac{1-c^2+c^2}{1+c} ## may be useful)

## The Attempt at a Solution

substituting ##tanx## with ##x-\frac{x^3}{3}+x^3ε(x)##, where ε tends to zero as x approaches 0.

##\frac{1}{x}(\frac{1}{tanx}-\frac{1}{x})=\frac{1}{x}(\frac{x-tanx}{xtanx})=\frac{1}{x}(\frac{x-x-\frac{x^3}{3}+x^3ε(x))}{x(x-\frac{x^3}{3}+x^3ε(x))})=\frac{1}{x}(\frac{\frac{x^3}{3}+x^3ε(x))}{x(x-\frac{x^3}{3}+x^3ε(x))})##

You're not far from the answer. After cancellations, you have: $\frac{\frac{x^3}{3} + ...}{x^3 + ...}$, where $...$ represents higher-order terms. If you ignore the higher-order terms, what do you get?

It's 1/3. But where to use the hint I am given?

Last edited:

Samy_A

Homework Helper

stevendaryl

Staff Emeritus

It's 1/3. But where to use the hint I am given?

I don't get that, either. You are right.
It's a typo. PeroK Homework Helper Gold Member 2020 Award ## Homework Statement Evaluate the limit ##\lim_{x\to0} \frac{1}{x}(\frac{1}{tanx}-\frac{1}{x}) ## using Taylor's formula. (Hint: ##\frac{1}{1+c}=\frac{1-c^2+c^2}{1+c} ## may be useful) ## The Attempt at a Solution I began by substituting ##tanx## with ##x-\frac{x^3}{3}+x^3ε(x)##, where ε tends to zero as x approaches 0. ##\frac{1}{x}(\frac{1}{tanx}-\frac{1}{x})=\frac{1}{x}(\frac{x-tanx}{xtanx})=\frac{1}{x}(\frac{x-x-\frac{x^3}{3}+x^3ε(x))}{x(x-\frac{x^3}{3}+x^3ε(x))})=\frac{1}{x}(\frac{\frac{x^3}{3}+x^3ε(x))}{x(x-\frac{x^3}{3}+x^3ε(x))})=\frac{\frac{x^3}{3}+x^3ε(x))}{x^2(x-\frac{x^3}{3}+x^3ε(x))}=\frac{\frac{x^3}{3}+x^3ε(x))}{x^3-\frac{x^4}{3}+x^4ε(x))}##?? I think you've got the wrong series for ##tan(x)##. Check your coefficients. Also, if you are going to use the Taylor series, you should use the series for ##1/tan(x)## by applying the binomial expansion to the series for ##tan(x)## or using the series for ##cot(x)##. That said, this one looks tailor-made(!) for L'Hopital, using ##tan = sin/cos##. stevendaryl Staff Emeritus No, I think he started with the wrong expansion for tan(x). It should be $tan(x) = x + \frac{x^3}{3} + ...$ not $x - \frac{x^3}{3}$ Samy_A Homework Helper No, I think he started with the wrong expansion for tan(x). It should be $tan(x) = x + \frac{x^3}{3} + ...$ not $x - \frac{x^3}{3}$ Yes, he has a wrong Taylor series. In the denominator, that should have been ##+\frac{x^3}{3}##. But in the numerator, he expands ##x-\tan x##, so the ##\frac{x^3}{3}## gets a minus sign. When taking the limit, the ##x³## term resulting from the ##\tan## series in the denominator is not important, but the one in the numerator is. The correct limit is ##-\frac13##. 
**PeroK** (Homework Helper, Gold Member, 2020 Award):

My advice would be to do it the easy way using L'Hopital so you know what the answer is, then do it the hard way using Taylor series.

**Samy_A** (Homework Helper):

> My advice would be to do it the easy way using L'Hopital so you know what the answer is, then do it the hard way using Taylor series.

Sure, the hints he got are strange. This is, as you said, tailor-made(!) for L'Hopital.

**OP:**

##\frac{1}{x}\left(\frac{1}{\tan x}-\frac{1}{x}\right)=\frac{1}{x}\cdot\frac{x-\tan x}{x\tan x}=\frac{1}{x}\cdot\frac{-\frac{x^3}{3}-x^3\varepsilon(x)}{x\left(x+\frac{x^3}{3}+x^3\varepsilon(x)\right)}=\dots=\frac{-\frac{x^3}{3}-x^3\varepsilon(x)}{x^3+\frac{x^4}{3}+x^4\varepsilon(x)}=\frac{-\frac{1}{3}-\varepsilon(x)}{1+\frac{x}{3}+x\varepsilon(x)}\to-\frac{1}{3}## as ##x\to0##

Looks like I didn't need the hint. However, I think I'm supposed to derive the Taylor formula I used for the problem. We have covered the expansions for ##\sin x## and ##\cos x## in class. So ##\tan x=\frac{\sin x}{\cos x}=\frac{x-\frac{x^3}{3!}+\frac{x^5}{5!}-\dots}{1-\frac{x^2}{2!}+\frac{x^4}{4!}-\dots}=\,?##

Last edited:

**PeroK** (Homework Helper, Gold Member, 2020 Award):

> Looks like I didn't need the hint. However, I think I'm supposed to derive the Taylor formula I used for the problem. ...

I would go for ##1/\tan x = \cos x/\sin x##. You might as well make things easy for yourself! Use the binomial for ##1/\sin x = (x-\frac{x^3}{3!}+\frac{x^5}{5!}- \dots)^{-1} = (1/x)(1-\frac{x^2}{3!}+\frac{x^4}{5!}- \dots)^{-1}##.

**OP:**

I actually derived the expansion for ##\tan x## from the definition of the Taylor formula:
##f'(x)=\frac{d}{dx}\tan x=1+\tan^2 x##, so ##f'(0)=1##
##f''(x)=2\tan x\,(1+\tan^2 x)##, so ##f''(0)=0##
etc., and got ##\tan x=x+\frac{x^3}{3}+x^3\varepsilon(x)##. I think this is shorter and nicer than deriving it by long division.

Last edited:

**SammyS** (Staff Emeritus, Homework Helper, Gold Member):

> I would go for ##1/ \tan x = \cos x / \sin x##

I thought I had posted something on this a day or two ago. Using the first of PeroK's suggestions, then a common denominator, etc.:

##\displaystyle \ \frac{1}{x}\left(\frac{1}{\tan x}-\frac{1}{x}\right) =\frac{x \cos x - \sin x}{x^2\sin x} \ ##

Now use the Taylor expansions for ##\sin x## and ##\cos x##.
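As a quick numerical cross-check of the thread's conclusion (an editorial sketch, not part of the original thread): evaluating ##\frac{1}{x}\left(\frac{1}{\tan x}-\frac{1}{x}\right)## at progressively smaller ##x## should settle near ##-1/3##.

```python
import math

def f(x):
    """(1/x) * (1/tan(x) - 1/x), the expression whose limit is sought."""
    return (1.0 / math.tan(x) - 1.0 / x) / x

# Evaluate at x = 0.1, 0.01, 0.001, 0.0001; values approach -1/3.
samples = [f(10.0 ** -k) for k in range(1, 5)]
print(samples)
```

Since ##\cot x - \frac{1}{x} = -\frac{x}{3} - \frac{x^3}{45} - \dots##, the error of each sample relative to ##-1/3## shrinks like ##x^2/45##, consistent with what the printout shows.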
I am working on the vibration of continuous systems. I have seen lots of books on vibrations which talk about the natural frequencies and mode shapes of a continuous system. What I am interested in is this: when a particular time-varying load is applied to a continuous system, how do we know which modes of the system are excited, and why?

Take the Fourier transform of the time-varying driving force; this gives the frequency content of the driving force. Multiple modes of vibration can be driven at once, and will superpose with each other, but a driving force whose frequency content is high at frequencies near a particular resonant frequency will mainly drive that resonant frequency's corresponding vibration mode. Note that a perfectly sinusoidal (in time) driving force has an infinite (Dirac delta) frequency content at the frequency of the sinusoid, and zero at all other frequencies. A perfect impulse (i.e. an infinite force applied for an infinitesimal time) has a frequency content that is equal over all frequencies.

The reason this can be done is that superposition applies to vibrating systems, assuming the vibrations are small enough. You can split any time-varying driving force into the sum of multiple sinusoids: consider what modes each sinusoid drives, and superpose all the effects together.

MODAL PARTICIPATION

For this section, I will only consider 1D vibrating systems governed by the wave equation. There exist cases, such as a transversely vibrating cantilever beam, that are governed by wave-like equations, which I won't cover here unless it's of interest. Since the governing equation is the wave equation, the mode shapes will be sinusoidal. In order to determine how much of each mode is being driven, it is useful to write the deflected shape, $w(x,t)$, as the sum of the modes.
$$w(x,t) = \sum_{i} \alpha_i (t) u_i(x)$$ where $\alpha_i (t)$ is the amplitude of the $i^{th}$ mode, which can vary in time, and $u_i(x)$ is the $i^{th}$ mode shape of unit amplitude. This expression is then substituted into the governing wave equation. For example, the governing wave equation for a vibrating tense string is: $$\mu \frac{\partial^2 w}{{\partial t}^2} - T \frac{\partial^2 w}{{\partial x}^2} = f(x,t)$$ where $w(x,t)$ is the transverse displacement, $\mu$ is the mass-per-unit-length of the string, $T$ is the tension in the string, and $f(x,t)$ is the distributed transverse force-per-unit-length acting on the string. (This equation can be adapted to other wave-governed systems by replacing the variables appropriately.)

Substituting the sum-of-modes expression into the wave equation, and using the fact that each mode shape satisfies $-T u_i''(x) = \mu \omega_i^2 u_i(x)$, gives: $$\mu \sum_i{(\ddot \alpha_i + \omega_i^2 \alpha_i) u_i(x)} = f(x,t)$$ where $\omega_i$ is the resonant frequency corresponding to the $i^{th}$ mode. Then, by multiplying by $u_j(x)$ and integrating along the whole domain with respect to $x$, noting that most terms cancel out (the integral of $u_i(x) u_j(x)$ is zero for $i\ne j$ due to the orthogonality of different mode shapes, while $\int_0^L u_j^2(x)\,dx = L/2$ for sinusoidal modes of unit amplitude), we get the following differential equation: $$\ddot \alpha_j + \omega_j^2 \alpha_j = \frac{2}{\mu L} \int_0^L f(x,t) u_j(x) dx$$ where $L$ is the length of the 1D domain (in this case, the length of the string).

What this all means is that, given the distributed force-per-unit-length on the system, solving the above differential equation gives the amplitude of the $j^{th}$ mode as a function of time, and hence quantifies how much that mode is present at any point in time. If the distributed force is sinusoidal, after a while (i.e. at steady state) the modal amplitude will also vary sinusoidally at the same frequency.
For example, if $f(x,t) = f_0(x) \sin(\omega t)$, then $\alpha_j(t) = \alpha_{j,0} \sin(\omega t)$, where $$\alpha_{j,0} = \frac{\frac{2}{\mu L} \int_0^L f_0(x) u_j(x) dx}{\omega_j^2 - \omega^2}$$ Note how the modal amplitude seems to rocket to infinity if the system is sinusoidally driven at a resonant frequency. This is to be expected for undamped resonance: in reality, damping will prevent such unphysical responses, but damping has been omitted from the scope of this answer. It is possible to modify the modal-amplitude differential equation so that the applied force is a point force instead of a distributed force. If a point force $F(t)$ is applied at $x=x_0$, then (substituting $f(x,t) = F(t) \delta(x -x_0)$, where $\delta(x)$ is the Dirac delta function): $$\ddot \alpha_j + \omega_j^2 \alpha_j = \frac{2}{\mu L} F(t) u_j(x_0)$$
• Thank you for the valuable answer; I understood it. I also want to know about modal participation: under the applied load, which modes vibrate, and with what amplitude? If you are familiar with this, please help me understand it. Thank you. Sep 19 '16 at 6:16
• I've updated my answer; hope that helps. Sep 20 '16 at 22:54
• Thank you for your valuable time and effort in explaining. I will try to get back to you if I need some more help; I hope you don't mind! Sep 21 '16 at 9:35
• Certainly, that'd be no problem :) Sep 21 '16 at 9:52
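To make the steady-state formula above concrete, here is a minimal sketch (all parameter values are illustrative assumptions, not from the original answer). It evaluates $\alpha_{j,0}$ for a fixed-fixed string driven by a sinusoidal point force, using $u_j(x) = \sin(j\pi x/L)$ and $\omega_j = j\pi c/L$ with $c = \sqrt{T/\mu}$, and shows that the mode whose resonant frequency lies closest to the driving frequency dominates:

```python
import math

# Illustrative string parameters (assumed):
L = 1.0        # string length, m
T = 100.0      # tension, N
mu = 0.01      # mass per unit length, kg/m
c = math.sqrt(T / mu)   # wave speed, m/s

F0 = 1.0       # point-force amplitude, N
x0 = 0.3       # point of application, m
omega = 900.0  # driving frequency, rad/s (near omega_3 here)

def mode_shape(j, x):
    """Unit-amplitude mode shape of a fixed-fixed string."""
    return math.sin(j * math.pi * x / L)

def resonant_freq(j):
    """omega_j = j * pi * c / L for a fixed-fixed string."""
    return j * math.pi * c / L

def steady_amplitude(j):
    """alpha_{j,0} = (2/(mu L)) * F0 * u_j(x0) / (omega_j^2 - omega^2)."""
    return (2.0 / (mu * L)) * F0 * mode_shape(j, x0) \
        / (resonant_freq(j) ** 2 - omega ** 2)

amps = {j: steady_amplitude(j) for j in range(1, 6)}
dominant = max(amps, key=lambda j: abs(amps[j]))
print(amps, "dominant mode:", dominant)
```

Driving at exactly $\omega = \omega_j$ would divide by zero here, which is the undamped-resonance blow-up noted in the answer; a damped model would cap the amplitude instead.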
# Simple Pendulum Lab Report Conclusion

A simple pendulum is a mass (or bob) on the end of a string assumed to be massless. When shifted from its resting position to an initial angle and released, the pendulum swings back and forth freely with periodic motion. The string lets the bob oscillate about the fixed point where the string is attached, and gravity, effectively constant at about 9.8 m/s² near Earth's surface, provides the force that keeps the bob constantly moving toward its equilibrium position.

The period is the time it takes the pendulum to complete one full back-and-forth cycle. For small angles (θ < ~5°), it can be shown that the period of a simple pendulum is given by

$$T = 2\pi\sqrt{\frac{l}{g}}, \qquad \text{equivalently} \qquad T^2 = \frac{4\pi^2}{g}\,l,$$

so the period depends only on the length of the pendulum, not on the mass of the bob or (for small angles) on the amplitude. A useful check on your understanding: how would you adjust the pendulum of a grandfather clock if it were running too slow?

A typical procedure is to clamp the apparatus to a lab table, use three pendulum bobs of identical size but different mass, and change the length, the mass, and the release angle one variable at a time; you should test at least two of these variables yourself. In one such experiment our hypothesis was confirmed: the period decreased as the length of the string decreased, while the mass had no measurable effect. However, our length-versus-period-squared data produced a line of best fit that was significantly higher than the expected line, which points to systematic error; possible sources of error in this lab include instrumental and observational errors, and these should be assessed in the report.

When writing the report, assess the major findings and conclusions plainly; it is often helpful to formulate the conclusion using set phrases, and to compare your analyzed outcome against a well-written sample report. Related apparatus met in the same unit includes the ballistic pendulum, used with conservation of energy and momentum to find the initial velocity of a projectile (clamp the ramp to a lab table and carefully measure the heights h 1 and h 2 shown in the apparatus diagram), and the torsional pendulum, a disk fixed to one end of a rod which is secured at the other end. Hooke's law plays the analogous role in spring experiments: pulling on a spring stretches the springy bonds between atoms, which can bounce back into place.
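The small-angle period formula can be checked numerically. This sketch (string lengths are illustrative) computes the predicted period for a few lengths and confirms the observation above that shorter strings give shorter periods:

```python
import math

G = 9.81  # local gravitational acceleration, m/s^2 (assumed value)

def pendulum_period(length_m: float) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(l/g)."""
    return 2.0 * math.pi * math.sqrt(length_m / G)

lengths = [0.25, 0.50, 1.00]           # illustrative string lengths, m
periods = [pendulum_period(l) for l in lengths]
print([round(t, 3) for t in periods])  # periods grow like sqrt(length)
```

Note that quadrupling the length only doubles the period, which is why timing errors matter more for short pendulums.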
Simple harmonic motion is repetitive movement back and forth through an equilibrium, or central, position, such that the maximum displacement on one side of this position equals the maximum displacement on the other side. The time interval of each complete vibration is the same; this interval is the period, represented by the letter T, and it is related to the angular frequency by T = 2π/ω, where ω = 2πf. A simple pendulum, defined ideally as a point mass suspended by a weightless string, is the classic example. The thin string and large mass used in the lab reduce frictional effects and air drag, which is why the mass of the string is generally assumed to be negligible.

Galileo researched the characteristics of pendulum motion early in his career, and after investigating their behavior he was able to use pendulums as time-measurement devices in later experiments. The most widespread applications of the simple pendulum are timekeeping and gravimetry: the presence of the variable g in the period equation means that the pendulum frequency is different at different places on Earth. Many amusement park rides are, in effect, large pendulums.

One experiment was carried out specifically to show that the mass of the bob has no effect on the period of the oscillation: both the length and the mass of the bob were varied, and the period depended on the length alone, not on the mass of the load or on the angle of release (for small angles). For accurate timing, measure 10 swings and divide by 10, and start the stopwatch not on "go" but as the bob passes the same point in its cycle.

Several relatives of the simple pendulum appear in the same lab sequence. A compound (physical) pendulum is a rigid body suspended from a fixed peg passing through a hole, such that it is free to swing from side to side; the simple pendulum whose period is the same as that of a given compound pendulum is called the "equivalent simple pendulum." The difference is that in a simple pendulum the centre of mass and the centre of oscillation are at the same distance from the pivot. The centre of rotation and the centre of percussion are conjugate to each other, and the deformation and bending of a swinging implement can be reduced if the centre of percussion is located near the striking edge; Rod Cross's double pendulum swing experiment, "In search of a better bat" (Physics Department, University of Sydney; received 20 May 2004, accepted 29 October 2004), presents experimental results on the large-amplitude motion of a double pendulum, with emphasis on the first half cycle.

A torsional pendulum consists of a disk fixed to one end of a rod (or torsion wire) which is secured at the other end. A torsion wire is essentially inextensible, but is free to twist about its axis; the restoring torque is given by $\tau = k \theta$, where k is the torsional constant. If the disk is displaced slightly from its equilibrium position, it will perform simple harmonic oscillation.

A ballistic pendulum finds the initial velocity of a fired ball by combining conservation of momentum (for the collision) with conservation of energy (for the swing). When the pendulum is at rest, pull the trigger, thereby propelling the ball into the pendulum bob with a definite velocity; the pendulum swings up and is caught at its maximum height by a toothed rack. A related exercise, the "inaccessible pendulum," finds g and the height of the lab ceiling. You will need a pendulum bob, thread, a stopwatch, a tape measure, a suitable method of clamping your pendulum to the ceiling, and scissors.

For the written report: an abstract is an abbreviated version of your final report, and for most science fairs it is limited to a maximum of 250 words (check the rules for your competition). A laboratory report allows you to share your findings and ideas in an official and organized manner: it describes procedures, analyzes data, and reports results. The reference list is a separate section that comes after your conclusion (and before any appendices).
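The ballistic-pendulum calculation above can be sketched numerically (all masses and heights are illustrative assumptions). Momentum conservation across the embedding collision, combined with energy conservation for the swing, gives the ball's launch speed:

```python
import math

G = 9.81  # m/s^2 (assumed)

def launch_speed(m_ball, m_pend, rise_h):
    """Ball speed before impact, from the ballistic-pendulum relations.

    Swing phase (energy):    (m+M) g h = (1/2)(m+M) v_after^2
    Collision (momentum):    m v0 = (m + M) v_after
    """
    v_after = math.sqrt(2.0 * G * rise_h)          # speed just after impact
    return (m_ball + m_pend) / m_ball * v_after    # solve momentum for v0

# Illustrative numbers: 10 g ball, 200 g pendulum bob, 5 cm rise.
v0 = launch_speed(0.010, 0.200, 0.05)
print(round(v0, 2), "m/s")
```

The large mass ratio is the point of the apparatus: a fast, light projectile is converted into a slow, easily measured swing of a heavy bob.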
Period Vs Length for simple pendulum CONCLUSIONS From the results of our experiment we can observe that in the insistence. Using a Simple Pendulum, Plot its L-T2 Graph and Use it to Find the Effective Length of Seconds Pendulum. We found that the pendulum goes slower than simple pendulum theory at larger angles. Tips on writing a lab report. Once, students would do this prac using live ammunition. Scientists know that lab reports are a very important part of every experiment. Measure time accurately (10 swings, then divide by 10). The simple pendulum is of historic and basic impor- tance. pendulum/string, meter sticks, table clamp, right angle clamp, long and short aluminum rods, pendulum hanging rod, digital scales, wire baskets In this experiment we will model simple harmonic motion using a mass on a spring and a pendulum. Ask your instructor for the voltage of the source if this is not known. In the present day, custom essay writing is offering everything that a physics pendulum lab report student needs. Acceleration Due to Gravity G. Determination Gravitational Acceleration with A pendulum Experiment. There are four variables. If we suspend a mass at the end of a piece of string, we have a simple pendulum. After investigating their behavior, he was able to use them as time measurement devices in later experiments. The pith ball electroscope in Figure 1b, for example, shows the attraction. WHAT TO DO: Fix the free end of the pendulum to a pint on (or close to) the ceiling of the lab. This lab should be the first step of a unit on periodic. 0254 meters. ) How does the length of a pendulum's string affect its period? (Answer: A pendulum with a longer string has a longer period, meaning it takes a longer time to complete one back and forth cycle when compared with a pendulum with a shorter string. If you've just finished an experiment in your physics class, you might have to write a report about it. 
DOC Experiment 1-F Ballistic Pendulum and Projectile Motion Lab Report. Gravity is the pull that two bodies of mass exert on one another. Experiment 14 The Physical Pendulum The period of oscillation of a physical pendulum is found to a high degree of accuracy by two methods: theory and experiment. Centripetal Force Introduction Those of you who have tied an object to a string and whirled it in a horizontal circle above your head no doubt have recognized that you have to pull on the string and therefore on the object in a direction toward the center of the circle if you wish to have circular motion. If the mass is displaced slightly from its equilibrium position, the mass will perform simple harmonic oscillation. 338 Lab Report #2: Kapitsa’s Stable Inverted Pendulum A. A simple pendulum consists of a point-mass hanging on a length of a string assumed to be weightless. Study online flashcards and notes for LAB REPORT - 9. Purpose: the purpose of this experiment is to determine what factors affect the time of the pendulum swing Evie's Hypothesis: if the length, angle, and mass of the pendulum are changed, then the time of the swing will be effected. Set up the pendulum and probes as shown by your instructor. Circular motion lab report Custom analysis essay ghostwriters for hire united kingdom The Flying Pig provides students with a fun way to study circular motion. Gr-9 - IGCSE Sample Lab Report - Simple Pendulum -. It describes procedures, analyzes data and reports results. This updated, mobile-ready PhET simulation provides an array of tools for analyzing energy transformation in a pendulum system. THEORY: A simple pendulum is defined, ideally, as a particle suspended by a weightless string. Once, students would do this prac using live ammunition. Apparatus used: Bar pendulum, stop watch and meter scale. A pendulum's period is the time it takes the pendulum to swing back to its original position. Determination Gravitational Acceleration with A pendulum Experiment. 
Theory For a rigid body that is constrained to rotate about a fixed axis, the gravitational torque about the axis is. , a simple pendulum). After we fired the ball we recorded the angle corresponding to the highest point reached by the pendulum bob. The residual of the data showed that it was a good fit for a linear model, and the least squares linear fit of the data had fit parameters of chi-squared: 0. Springs and simple harmonic motion. The simple pendulum is of historic and basic importance. Given my previous knowledge, I know that a pendulum behaves in an oscillating manner, meaning that the acceleration is always proportional to the negative distance. Setting for Measuring Smart Timer Pendulum Period. For example, a manufacturer of grandfather clocks wonders how to construct a clock, consisting of a pendulum, which will keep the correct time. Welcome to The Lab Report sponsored by Apologia Science. The reaction time of. For the lab report. It is a resonant system with a single resonant frequency. Conclusion. This collection of notes is NOT a lab manual! We are not attempting to. Pendulums have a lot of interesting physics to discover. The reference list is a separate section that comes after your conclusion (and before any appendices). Experimental Technique: We used a simple pendulum and manipulated several factors. Set up the actual circuit with a switch and only one ammeter in the circuit, placing it next to the current source. Materials and methods lab report. This model predicts that the period the. 1) simple pendulum including The Simple Pendulum Introduction The purpose of this. LAB REPORT FORM SIMPLE HARMONIC MOTION: SPRING-MASS SYSTEM Part A: Measuring A, T·Φ, a'max, and amas. It was concluded that after experimentation, the procedures herein support the equations presented earlier. In this case we have: F = mg sin θ, (1) where F is the restoring force acting on the pendulum, m is the mass of the bob, g is the acceleration due to gravity and θ is the angular displacement. Conclusions In this part of the lab, you will need to construct a pendulum with a period of exactly two seconds. Also, in physics, we often use functional relationships to understand how one quantity varies as a function of another. Techniques and strategies for writing lab reports and scientific papers. The pendulum swings up and is caught at its maximum height by a toothed rack. Welcome to the forum. A horizontal bar is then. Docx virtual chemistry lab report grades; enzyme digestion for people who were not assigned a final word doc. 16 x 10^-4 kg*m^2. Choose from physics lab. Theory The period T of a simple pendulum of length L is given by T = 2π√(L/g). The lab project report is basically a description of the experiment from the beginning to the end. Lab Report – Activity 13: Simple Harmonic Motion–Pendulum Name _____ Date _____ Prediction 1. A torsional pendulum consists of a disk fixed to one end of a rod which is secured at the other end. The results. Don’t hesitate to query your lab instructor for ideas if you are in the “dark.” Pendulum Lab Introduction In this activity you will investigate how variables affect the motion of a pendulum. We begin by defining the displacement to be the arc length.
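Several of the snippets above gesture at the same calculation: the small-angle period of a simple pendulum is T = 2π√(L/g), so a measured length and period give an estimate of g. A minimal sketch of that calculation (the numbers below are invented example measurements, not data from any of the reports quoted here):

```python
import math

# Small-angle simple-pendulum period: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2*L / T^2.
# L and T are made-up example measurements, not data from the quoted labs.
L = 0.995            # pendulum length in metres
T = 2.0              # period in seconds (e.g. time 20 swings and divide by 20)

g = 4 * math.pi**2 * L / T**2
print(round(g, 2))   # ≈ 9.82 m/s^2
```

Timing many swings and dividing, as one of the procedures above suggests, reduces the effect of reaction-time error on T.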
Time how long it takes for the bob to pass this point 20 times going in the same direction, e.g. left to right. Sample conclusion for a pendulum experiment lab. How would you adjust the pendulum of a grandfather clock if it were running too slow? _____ 4. Your lab report should be written using the following format: (Be sure to left align.) Experiment 11: Simple Harmonic Motion—the Pendulum. Purpose: Develop a description of the ballistic pendulum collision by applying conservation of momentum to determine changes in velocity of the ball and. section in your lab report. Ballistic Pendulum 83 15. In the experiment, both the length and the mass of the bob were varied. Writing the Conclusion Section of a Lab Report. 0 Abstract In this paper, we aim to validate one of the most important and frequently used tools of physics: the law of conservation of momentum. (You can skip section 5. The string allows the pendulum to oscillate from the same point where the string is attached. Physics 35IB, 26th of October 2011. Background: A simple pendulum consists of a. The Simple Pendulum Laboratory Report Abstract The pendulum method is used for determination of the acceleration of gravity (g). This is the time required for the pendulum to return to the place from which it starts. However, the most interesting variable is the length of the swinging pendulum. Part I: Testing Equations (1), (2) and (3). 3 Experiment 1: angle at which easy approximation breaks down The preceding discussion should give us an idea for finding the angular displacement at which a simple pendulum no longer behaves like a SHO, or in other words, the angle at which the approximation sin θ ≈ θ breaks down. 7 Modelling the Behaviour of a Simple Pendulum.
Simple pendulum; physical pendulum, with example; center of oscillation. Lab 7 - Simple Harmonic Motion Introduction Have you ever wondered why a grandfather clock keeps accurate time? The motion of the pendulum is a particular kind of repetitive or periodic motion called simple harmonic motion, or SHM. Report any accidents immediately! You will work with a lab partner to take data, but you are individually responsible for your own data. Need help with simple pendulum lab. Technology/Engineering Progression Grades 9-10 The use of electrical circuits and electricity is critical to most technological systems in society. The period of the simple pendulum oscillations increases as the length of the pendulum increases. Feb 10, conclusions. Evaluate simple series and parallel circuits to predict changes to voltage, current, or resistance when simple changes are made to a circuit. 1 MSD = 1 mm. Paper chromatography lab report discussion. Be sure to start your report soon enough so that any unreliable data can be retaken. What are the answers to all of the questions? Pendulum Lab - PhET Interactive Simulations. 1-8 These references are very useful in our understanding of the physics of the coupled. Thus, in a conical pendulum the bob moves at a constant speed in a circle with the string tracing out a cone. LAB REPORT: THE SIMPLE PENDULUM Author: Muhammad Sohaib Alam Contents (page numbers): Abstract 2, Objective 2, Theory 2, Apparatus 5, Procedure 6, Result and Analysis 6, Discussion 12, Conclusion 12, References 12.
The formula for the simple harmonic oscillator period can be found by making the usual substitution of x(t) = A cos ωt and its second derivative into Newton’s second law and solving for ω; a similar procedure leads to the formula for the simple pendulum in the small-angle limit. Edit, fill, sign, download Sample Physics Lab Report online on Handypdf. Also, a long pendulum arm and small swing angle allowed the motion to be approximated as a simple harmonic motion. This lab is about a simple pendulum and how it is used to determine the value of acceleration due to gravity. Sample lab procedure and report The Simple Pendulum In this laboratory, you will investigate the effects of a few different physical variables on the period of a simple pendulum. The equation (2. The percent difference between the two values was 0. To enable the students to identify the physical parameters of a simple pendulum. Thus some sort of control is necessary to maintain a balanced pendulum. THEORY: The period of a pendulum is the time it takes for the bob of a pendulum to make one complete swing. Under such circumstances, our ballistic pendulum lab report help will come in handy. IV if you aren't comfortable with partial derivatives; for a simpler approach to the material of section 5. Introduction A. HB 04-19-00 Coupled Pendulums Lab 12 5 7 The Antisymmetric Mode In this section the pendulums will be set oscillating in the antisymmetric mode with as little of the symmetric mode present as possible. At its highest point (Point A) the pendulum is momentarily motionless. periods (one complete swing of a pendulum, back and forth) there are in one minute.
To utilize two different methods of determining the initial velocity of a fired ball, namely a ballistic pendulum and treating the ball as a projectile, and then compare these two calculated values.
Given the following table:

x: 0.0026, 0.0027, 0.0028, 0.0029, 0.003
f(x): 0.00260676, 0.00270729, 0.00280784, 0.00290841, 0.003009

Use the table to find the largest $\delta$ so that $\vert f(x)-f(0.0028)\vert <0.00027$ when $\vert x-0.0028\vert < \delta$. $\delta =$
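A quick numeric check of the tabulated values (a sketch in Python, not part of the original problem; the names `table` and `tol` are my own):

```python
# Values copied from the table above.
table = {
    0.0026: 0.00260676,
    0.0027: 0.00270729,
    0.0028: 0.00280784,
    0.0029: 0.00290841,
    0.0030: 0.003009,
}
tol = 0.00027          # required bound on |f(x) - f(0.0028)|
f0 = table[0.0028]

# Compare |f(x) - f(0.0028)| with the tolerance at every tabulated x.
for x in sorted(table):
    diff = abs(table[x] - f0)
    print(x, diff, diff < tol)
```

Every tabulated point satisfies the bound, so the largest increment the table itself can certify is $\delta = 0.0002$ (the full half-width of the tabulated interval); a finer table around $0.0028$ would be needed to certify anything larger.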
# American Institute of Mathematical Sciences

doi: 10.3934/jimo.2020169 (Online First)

## On the convexity for the range set of two quadratic functions

1 Institute of Natural Science Education, Vinh University, Vinh, Nghe An, Vietnam 2 Department of Mathematics, National Cheng Kung University, Tainan, Taiwan * Corresponding author: Ruey-Lin Sheu In memory of Professor Hang-Chin Lai for his life contribution in Mathematics and Optimization Received July 2020 Revised September 2020 Early access November 2020 Fund Project: Huu-Quang, Nguyen's research work was supported by Taiwan MOST 108-2811-M-006-537 and Ruey-Lin Sheu's research work was sponsored by Taiwan MOST 107-2115-M-006-011-MY2 Given $n\times n$ symmetric matrices $A$ and $B,$ Dines in 1941 proved that the joint range set $\{(x^TAx, x^TBx)|\; x\in\mathbb{R}^n\}$ is always convex.
Our paper is concerned with a non-homogeneous extension of the Dines theorem for the range set $\mathbf{R}(f, g) = \{\left(f(x), g(x)\right)|\; x \in \mathbb{R}^n \},$ $f(x) = x^T A x + 2a^T x + a_0$ and $g(x) = x^T B x + 2b^T x + b_0.$ We show that $\mathbf{R}(f, g)$ is convex if, and only if, any pair of level sets, $\{x\in\mathbb{R}^n|f(x) = \alpha\}$ and $\{x\in\mathbb{R}^n|g(x) = \beta\}$, do not separate each other. With the novel geometric concept about separation, we provide a polynomial-time procedure to practically check whether a given $\mathbf{R}(f, g)$ is convex or not. Citation: Huu-Quang Nguyen, Ya-Chi Chu, Ruey-Lin Sheu. On the convexity for the range set of two quadratic functions. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2020169

Figure captions: The graph corresponds to Example 1: let $f(x, y, z) = x^2+y^2$ and $g(x, y, z) = -x^2+y^2+z$. For remark (c) and remark (e): let $f(x, y) = -x^2 + 4 y^2$ and $g(x, y) = 2x-y$; the level set $\{g = 0\}$ separates $\{f<0\},$ while $\{g = 0\}$ does not separate $\{f = 0\}$. For remark (d): let $f(x, y) = -x^2 + 4 y^2 - 1$ and $g(x, y) = x-5y$; the level set $\{g = 0\}$ separates $\{f = 0\}$ while $\{g = 0\}$ does not separate $\{f<0\}.$ For remark (f), in which $f(x, y) = -x^2 + 4 y^2 + 1$ and $g(x, y) = -(x-1)^2+4y^2+1$. Graph for Proof of Theorem 3.1. For Example 2: let $f(x, y) = -\frac{\sqrt{3}}{2} x^2 + \frac{\sqrt{3}}{2} y^2 + x - \frac{1}{2} y$ and $g(x, y) = \frac{1}{2} x^2 - \frac{1}{2} y^2 + \sqrt{3} x - \frac{\sqrt{3}}{2} y$. The joint numerical range $\mathbf{R}(f, g)$ in Example 3.

Chronological list of notable results related to problem (P): 1941 (Dines [3]) (Dines Theorem) $\left\{ \left. \left( x^T A x, x^T B x \right) \; \right|\; x \in \mathbb{R}^n \right\}$ is convex. Moreover, if $x^T A x$ and $x^T B x$ have no common zero except for $x=0$, then $\left\{ \left.
\left( x^T A x, x^T B x \right) \; \right|\; x \in \mathbb{R}^n \right\}$ is either $\mathbb{R}^2$ or an angular sector of angle less than $\pi$. 1961 (Brickmen [1]) $\mathbf{K}_{A, B} = \left\{ \left. \left( x^T A x, x^T B x \right) \; \right|\; x \in \mathbb{R}^n\; , \; \|x\|=1 \right\}$ is convex if $n \geq 3$. 1995 (Ramana & Goldman [11]) Unpublished $\mathbf{R}(f, g) = \left\{ \left. \left( f(x), g(x) \right) \; \right|\; x \in \mathbb{R}^n \right\}$ is convex if and only if $\mathbf{R}(f, g) = \mathbf{R}(f_H, g_H) + \mathbf{R}(f, g)$, where $f_H(x) = x^T A x$ and $g_H(x) = x^T B x$. $\mathbf{R}(f, g) = \left\{ \left. \left( f(x), g(x) \right) \; \right|\; x \in \mathbb{R}^n \right\}$ is convex if $n \geq 2$ and $\exists\; \alpha, \beta \in \mathbb{R}$ such that $\alpha A + \beta B \succ 0$. 1998 (Polyak [10]) $\left\{ \left. \left( x^T A x, x^T B x, x^T C x \right) \; \right|\; x \in \mathbb{R}^n \right\}$ is convex if $n \geq 3$ and $\exists\; \alpha, \beta, \gamma \in \mathbb{R}$ such that $\alpha A + \beta B + \gamma C \succ 0$. $\left\{ \left. \left( x^T A_1 x, \cdots, x^T A_m x \right) \; \right|\; x \in \mathbb{R}^n \right\}$ is convex if $A_1, \cdots, A_m$ commute. 2016 (Bazán & Opazo [5]) $\mathbf{R}(f, g) = \left\{ \left. \left( f(x), g(x) \right) \; \right|\; x \in \mathbb{R}^n \right\}$ is convex if and only if $\exists\; d=(d_1, d_2) \in \mathbb{R}^2$, $d \neq 0$, such that the following four conditions hold: $\bf{(C1):}$ $F_L \left( \mathcal{N}(A) \cap \mathcal{N}(B) \right) = \{0\}$ $\bf{(C2):}$ $d_2 A = d_1 B$ $\bf{(C3):}$ $-d \in \mathbf{R}(f_H, g_H)$ $\bf{(C4):}$ $F_H(u) = -d \implies \langle F_L(u), d_{\perp}\rangle \neq 0$ where $\mathcal{N}(A)$ and $\mathcal{N}(B)$ denote the null space of $A$ and $B$ respectively, $F_H(x) = \left( f_H(x), g_H(x) \right) = \left( x^T A x , x^T B x \right)$, $F_L(x) = \left( a^T x , b^T x \right)$, and $d_{\perp} = (-d_2, d_1)$.
From: Pen Ttt on 14 Apr 2010 01:18 there is one command: '["ffgh"]78'.split(/\[|\]/).reject(&:empty?) i can get two values: "ffgh" 78 but i don't know what is the meaning of: &:empty? 1\ what is : 2\ what is &: thank you in advance. -- Posted via http://www.ruby-forum.com/. From: Brian Candler on 14 Apr 2010 04:45 Pen Ttt wrote: > 1\ what is : Identifies the start of a symbol literal. That is, :empty? is a symbol (in this case, being used as the name of a method) > 2\ what is &: http://pragdave.pragprog.com/pragdave/2005/11/symbolto_proc.html -- Posted via http://www.ruby-forum.com/.
## The Annals of Applied Probability

### Connectivity of soft random geometric graphs

Mathew D. Penrose

#### Abstract

Consider a graph on $n$ uniform random points in the unit square, each pair being connected by an edge with probability $p$ if the inter-point distance is at most $r$. We show that as $n\to\infty$ the probability of full connectivity is governed by that of having no isolated vertices, itself governed by a Poisson approximation for the number of isolated vertices, uniformly over all choices of $p,r$. We determine the asymptotic probability of connectivity for all $(p_{n},r_{n})$ subject to $r_{n}=O(n^{-\varepsilon })$, some $\varepsilon >0$. We generalize the first result to higher dimensions and to a larger class of connection probability functions.

#### Article information

Source: Ann. Appl. Probab., Volume 26, Number 2 (2016), 986-1028. Revised: January 2015. First available in Project Euclid: 22 March 2016. https://projecteuclid.org/euclid.aoap/1458651826 Digital Object Identifier: doi:10.1214/15-AAP1110 Mathematical Reviews number (MathSciNet): MR3476631 Zentralblatt MATH identifier: 1339.05369

#### Citation

Penrose, Mathew D. Connectivity of soft random geometric graphs. Ann. Appl. Probab. 26 (2016), no. 2, 986--1028. doi:10.1214/15-AAP1110. https://projecteuclid.org/euclid.aoap/1458651826
## Great Vova Wall (Version 2) — 思维 (thinking)

Vova's family is building the Great Vova Wall (named by Vova himself). Vova's parents, grandparents, grand-grandparents contributed to it. Now it's totally up to Vova to put the finishing touches. The current state of the wall can be represented by a sequence a of n integers, with ai being the height of the i-th part of the wall. Vova can only use 2×1 bricks to put in the wall (he has an infinite supply of them, however). Vova can put bricks only horizontally on the neighbouring parts of the wall of equal height. It means that if for some i the current height of part i is the same as for part i+1, then Vova can put a brick there and thus increase both heights by 1. Obviously, Vova can't put bricks in such a way that its parts turn out to be off the borders (to the left of part 1 of the wall or to the right of part n of it). Note that Vova can't put bricks vertically. Vova is a perfectionist, so he considers the wall completed when: all parts of the wall have the same height; the wall has no empty spaces inside it. Can Vova complete the wall using any amount of bricks (possibly zero)? Input The first line contains a single integer n (1≤n≤2⋅10^5) — the number of parts in the wall. The second line contains n integers a1,a2,…,an (1≤ai≤10^9) — the initial heights of the parts of the wall. Output Print "YES" if Vova can complete the wall using any amount of bricks (possibly zero). Print "NO" otherwise.
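The key observation can be sketched with a stack: equal neighbouring heights pair up and rise together, while a strictly taller part appearing next to a still-open lower stretch makes the wall impossible. A Python sketch of that idea (`can_complete` is a name I made up, not from the problem setters):

```python
def can_complete(heights):
    """Horizontal 2x1 bricks only: can the wall be levelled?"""
    stack = []                      # heights of still-unmatched wall segments
    for h in heights:
        if stack and stack[-1] < h:
            # A taller part appears while a strictly lower segment is open:
            # the gap to its left can never be filled with horizontal bricks.
            return False
        if stack and stack[-1] == h:
            stack.pop()             # equal heights pair up and rise together
        else:
            stack.append(h)
    # At most one unmatched segment may remain, and it must already be the tallest.
    return not stack or (len(stack) == 1 and stack[-1] == max(heights))

print(can_complete([2, 1, 1, 2, 5]))   # True  (fill the dip, then raise pairs)
print(can_complete([1, 2, 1]))         # False (part 2 can never be matched)
```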
```cpp
#include <iostream>
#include <algorithm>
#include <math.h>
#include <stack>
using namespace std;

int n, arr[2000000], res, mmax;
stack<int> s;

int main() {
    cin >> n;
    for (int i = 1; i <= n; i++) {
        cin >> arr[i];
        mmax = max(arr[i], mmax);
        if (s.empty()) {
            s.push(arr[i]);
        } else {
            if (s.top() < arr[i]) {   // taller part next to an open lower segment: impossible
                cout << "NO" << endl;
                return 0;
            }
            if (s.top() == arr[i]) {  // equal heights pair up and rise together
                s.pop();
            } else if (s.top() > arr[i]) {
                s.push(arr[i]);
            }
        }
    }
    if (s.size() > 1) {
        cout << "NO" << endl;
    } else if (s.size() == 1) {
        if (s.top() < mmax) {         // a single leftover segment must already be the tallest
            cout << "NO" << endl;
        } else {
            cout << "YES" << endl;
        }
    } else {
        cout << "YES" << endl;
    }
    return 0;
}
```

## CodeForces - 1077B — 思维 (thinking)

There is a house with n flats situated on the main street of Berlatov. Vova is watching this house every night. The house can be represented as an array of n integer numbers a1,a2,…,an, where ai=1 if in the i-th flat the light is on and ai=0 otherwise. Vova thinks that people in the i-th flat are disturbed and cannot sleep if and only if 1<i<n and ai−1=ai+1=1 and ai=0. Vova is concerned by the following question: what is the minimum number k such that if people from exactly k pairwise distinct flats turn off the lights then nobody will be disturbed? Your task is to find this number k. Input The first line of the input contains one integer n (3≤n≤100) — the number of flats in the house. The second line of the input contains n integers a1,a2,…,an (ai∈{0,1}), where ai is the state of light in the i-th flat. Output Print only one integer — the minimum number k such that if people from exactly k pairwise distinct flats turn off the light then nobody will be disturbed.
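A common greedy for this problem can be sketched as follows (`min_lights_off` is my own name): scan left to right and, whenever flat i is disturbed, switch off the light to its *right*, since that choice can also break the next potential triple:

```python
def min_lights_off(a):
    """Minimum lights to switch off so no flat i has a[i-1] = a[i+1] = 1 and a[i] = 0."""
    a = list(a)                 # work on a copy
    res = 0
    for i in range(1, len(a) - 1):
        if a[i - 1] == 1 and a[i] == 0 and a[i + 1] == 1:
            a[i + 1] = 0        # switching off the right light also helps later triples
            res += 1
    return res

print(min_lights_off([1, 1, 0, 1, 1, 0, 1, 0, 1, 0]))   # 2
print(min_lights_off([1, 1, 0, 0, 1]))                  # 0
```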
```cpp
#include <iostream>
#include <algorithm>
#include <cstring>
#include <vector>
using namespace std;

int n, arr[1005];
int res;

int main() {
    cin >> n;
    for (int i = 1; i <= n; i++) {
        cin >> arr[i];
    }
    for (int i = 2; i < n; i++) {
        if (arr[i+1] == 1 && arr[i-1] == 1 && arr[i] == 0) {
            arr[i+1] = 0;   // greedily switch off the right-hand light
            res++;
        }
    }
    cout << res << endl;
    return 0;
}
```

## CodeForces - 1092D1 Great Vova Wall (Version 1) — 思维 (thinking)

Vova's family is building the Great Vova Wall (named by Vova himself). Vova's parents, grandparents, grand-grandparents contributed to it. Now it's totally up to Vova to put the finishing touches. The current state of the wall can be represented by a sequence a of n integers, with ai being the height of the i-th part of the wall. Vova can only use 2×1 bricks to put in the wall (he has an infinite supply of them, however). Vova can put bricks horizontally on the neighboring parts of the wall of equal height. It means that if for some i the current height of part i is the same as for part i+1, then Vova can put a brick there and thus increase both heights by 1. Obviously, Vova can't put bricks in such a way that its parts turn out to be off the borders (to the left of part 1 of the wall or to the right of part n of it). The next paragraph is specific to the version 1 of the problem. Vova can also put bricks vertically. That means increasing the height of any part of the wall by 2. Vova is a perfectionist, so he considers the wall completed when: all parts of the wall have the same height; the wall has no empty spaces inside it. Can Vova complete the wall using any amount of bricks (possibly zero)? Input The first line contains a single integer n (1≤n≤2⋅10^5) — the number of parts in the wall. The second line contains n integers a1,a2,…,an (1≤ai≤10^9) — the initial heights of the parts of the wall. Output Print "YES" if Vova can complete the wall using any amount of bricks (possibly zero). Print "NO" otherwise.
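Once vertical bricks (height +2) are allowed, absolute heights stop mattering and only their parity does: two neighbouring parts of equal parity can always be levelled with vertical bricks and then raised together. A Python sketch of that reduction (`can_complete_v1` is my own name):

```python
def can_complete_v1(heights):
    """2x1 bricks placed horizontally or vertically: can the wall be levelled?"""
    stack = []                            # unmatched parts; only parity matters
    for h in heights:
        if stack and (stack[-1] - h) % 2 == 0:
            stack.pop()                   # same parity: level vertically, then pair up
        else:
            stack.append(h)
    return len(stack) <= 1                # one leftover part can be topped up vertically

print(can_complete_v1([4, 5, 3]))   # True
print(can_complete_v1([1, 2]))      # False
```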
```cpp
#include <iostream>
#include <algorithm>
#include <math.h>
#include <stack>
using namespace std;

int n, arr[2000000], res;
stack<int> s;

int main() {
    cin >> n;
    for (int i = 1; i <= n; i++) {
        cin >> arr[i];
        if (s.empty()) {
            s.push(arr[i]);
        } else {
            if (abs(arr[i] - s.top()) % 2 == 1) {  // different parity: cannot be levelled
                s.push(arr[i]);
            } else {
                s.pop();                           // same parity: level vertically, then pair up
            }
        }
    }
    if (s.size() > 1) {
        cout << "NO" << endl;
    } else {
        cout << "YES" << endl;
    }
    return 0;
}
```

## #801. 结账 (Checkout)

```cpp
#include <iostream>
#include <cstring>
using namespace std;

int dp[2000005], a[4000], b[4000], e, n, w[200005], v[200005], g;

int main(int argc, char** argv) {
    cin >> n;
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
    }
    for (int i = 1; i <= n; i++) {
        cin >> b[i];
    }
    // Binary splitting: break each item count into packs of size 1, 2, 4, ...
    // turning the bounded knapsack into a 0/1 knapsack.
    for (int i = 1; i <= n; i++) {
        for (int k = 1; k <= b[i]; k = k << 1) {
            w[++g] = k * a[i];
            v[g] = k;
            b[i] -= k;
        }
        if (b[i]) {
            w[++g] = b[i] * a[i];
            v[g] = b[i];
        }
    }
    cin >> e;
    memset(dp, 0x3f, sizeof(dp));
    dp[0] = 0;
    for (int i = 1; i <= g; i++) {
        for (int j = e; j >= w[i]; j--) {
            dp[j] = min(dp[j], dp[j - w[i]] + v[i]);
        }
    }
    cout << dp[e] << endl;
    return 0;
}
```

## #800. 转圈圈 (Spinning in Circles)

http://oj.ldyzz.com.cn/problem/800

```cpp
#include <iostream>
#include <cstring>
#include <math.h>
#include <cstdio>
#include <stack>
using namespace std;

int vis[200005], n, arr[200005], ans, now, c[200005];
const int inf = 0x3f3f3f;

inline int dfs(int v, int dep) {
    if (vis[v] == now) {
        return dep - c[v];   // visited earlier in this walk: cycle length is dep - c[v]
    }
    if (vis[v]) {
        return inf;          // visited in a previous walk: this start never returns to itself
    }
    vis[v] = now;
    c[v] = dep;              // record the depth at which v was first seen
    return dfs(arr[v], dep + 1);
}

int main(int argc, char** argv) {
    scanf("%d", &n);
    ans = inf;
    for (int i = 1; i <= n; i++) {
        scanf("%d", &arr[i]);
    }
    for (now = 1; now <= n; now++) {
        ans = min(ans, dfs(now, 0));
    }
    cout << ans << endl;
    return 0;
}
```
Ways to access lists inside lists

I have gone through some reference material but I am not getting really good links that will help me grow my understanding of pure functions, patterns and list manipulations combined together. Most of the examples referred to in the documentation are for a single list (without sublists). For example, say from a list I wanted to pick the first element of every sublist; say fi = FactorInteger[12], so I used Table[fi[[i, 1]], {i, 1, Length[fi]}]. I try to do everything with Table, though it should be achievable via other simple ways too. Can someone please put up some examples where one can get to learn how to access data from lists of sublists and how to decompose it. This might sound pretty trivial to a lot of people but I couldn't find books and links either that can give insight into combining these Mathematica features together. - Did you have a look at guide/ListManipulation? For the given example fi[[All, 1]], First /@ fi and #[[1]] & /@ fi would work. – Sjoerd C. de Vries Jul 11 '13 at 19:20 yes I have seen – Rorschach Jul 11 '13 at 19:21 For what it's worth, in your particular example, the Table expression can be replaced with # & @@@ fi. Is that the sort of thing you're interested in? – m_goldberg Jul 11 '13 at 19:22 yes..these kind of examples to practice with – Rorschach Jul 11 '13 at 19:22 I'm sorry to say, but everything you're asking for is easily found in the built-in documentation. I'm voting to close this. – Sjoerd C. de Vries Jul 11 '13 at 19:31

I'm going to answer this as I think it is helpful to gather multiple methods in one place, and such a list is not, as far as I know, easily found in the documentation.

a = FactorInteger[269325]; (* sample data *)

a[[All, 1]]
First[a\[Transpose]]
a.{1, 0}
First /@ a
# & @@@ a
#[[1]] & /@ a

All lines output: {3, 5, 7, 19}. a[[All, 1]] is I believe the fastest general method, and should usually be your first choice.
First[a\[Transpose]] (this looks better in a Notebook) is a fast method for rectangular data. a.{1, 0} shows a numeric method using Dot that is applicable to arrays of known dimensions, such as the output of FactorInteger. First /@ a is probably the most explicit and easiest to read. # & @@@ a illustrates the use of pure functions and Apply at level one. Be aware that the latter methods are often slower because they will unpack. Here are timings for these methods on packed data:

SetAttributes[timeAvg, HoldFirst]
timeAvg[func_] := Do[If[# > 0.3, Return[#/5^i]] & @@ Timing @ Do[func, {5^i}], {i, 0, 15}]

a = RandomInteger[1*^9, {500000, 2}];

a[[All, 1]] // timeAvg
First[a\[Transpose]] // timeAvg
a.{1, 0} // timeAvg
First /@ a // timeAvg
# & @@@ a // timeAvg
#[[1]] & /@ a // timeAvg

0.00512
0.0012976
0.011984
0.04304
0.2122
0.04492

And unpackable data:

a = RandomChoice[{Pi, "x", 1}, {500000, 2}];

a[[All, 1]] // timeAvg
First[a\[Transpose]] // timeAvg
a.{1, 0} // timeAvg
First /@ a // timeAvg
# & @@@ a // timeAvg
#[[1]] & /@ a // timeAvg

0.01684
0.02308
0.2122
0.078
0.0968
0.1592

- If you're listing them all, how about Part[a, All, 1] – bill s Jul 11 '13 at 20:01 @bills That's just the long form of a[[All, 1]], and I didn't include the long form of any of these. Do you think these should be included? – Mr.Wizard Jul 11 '13 at 20:04 Wizard - I kind of like the idea of "all the ways to do it", but it's your call... – bill s Jul 11 '13 at 20:05 Here's one you don't have: First@Transpose[a] or even more obscurely First@Thread[a] – bill s Jul 11 '13 at 20:07 @Blackbird You're welcome, and thanks, I try. – Mr.Wizard Jul 13 '13 at 7:36
Spying on a Ruby process's memory allocations with eBPF

Today instead of working on CPU profilers, I took the day to experiment with a totally new idea! My idea at the beginning of the day was – what if you could take an arbitrary Ruby process's PID (that was already running!) and start tracking its memory allocations? Spoiler: I got something working! Here's an asciinema demo of what happened. Basically this shows a live-updating cumulative view of rubocop's memory allocations over 15 seconds, counted by class. You can see that Rubocop allocated a few thousand Arrays and Strings and Ranges, some Enumerators, etc. This demo works without making any code changes to rubocop at all – I just ran bundle exec rubocop to start it. All the code for this is in https://github.com/jvns/ruby-mem-watcher-demo (though it's extremely experimental and likely only works on my machine right now).

how it works part 1: eBPF + uprobes

The way this works fundamentally is relatively simple. On Linux ~4.4+, you have this feature called “uprobes” which lets you attach code that you write to an arbitrary userspace function. You can do this from outside the process – you ask the kernel to modify the function while the program is running and run your code every time the function gets called. You can't ask the kernel to run just any code, though (at least not with eBPF) – you ask it to run “eBPF bytecode” which is basically C code where you're restricted in what memory you can access. And it can't have loops. So the idea is that I'd run a tiny bit of code every time a new Ruby object was created in rubocop, and then that code would count memory allocations per class. This is the function I wanted to instrument (add a uprobe to): newobj_slowpath.
static inline VALUE newobj_slowpath(VALUE klass, VALUE flags, VALUE v1, VALUE v2, VALUE v3, rb_objspace_t *objspace, int wb_protected)

The goal was to grab the first argument to that function (klass) and count how many allocations there were for each klass.

writing my first bcc program

bcc (the "BPF compiler collection") at https://github.com/iovisor/bcc is a toolkit to help you

• write BPF programs in C
• compile those BPF programs into BPF bytecode
• insert the compiled BPF bytecode into the kernel
• write Python programs to communicate with the BPF bytecode that's running in the kernel and display the information from that bytecode in a useful way

It's a lot to digest. Luckily the documentation is pretty good and there are a LOT of example programs to copy from in the repo. Here's the initial BPF program I wrote in a gist. It's pretty short (just 40 lines!) and has a C part and a Python part. I'll explain it a bit because I think it's not that obvious what it does and it's really interesting!

First, here's the C part – the idea is that this code will run every time newobj_slowpath runs. This code:

• declares a BPF hash (which is basically a data structure I can use to store data and send data back to userspace where the Python frontend can read it)
• defines a count function which reads the first argument of the function (with PT_REGS_PARM1) and basically does counts[klass] += 1

    BPF_HASH(counts, size_t);
    int count(struct pt_regs *ctx) {
        u64 zero = 0, *val;
        size_t klass = PT_REGS_PARM1(ctx);
        val = counts.lookup_or_init(&klass, &zero);
        (*val)++;
        return 0;
    };

Next, here's the Python part. This is just a while loop that, every second, reads counts (the same BPF hash as before, but magically accessible from Python somehow!!), prints out what's in there, and then clears it.
    counts = b.get_table("counts")
    while True:
        sleep(1)
        os.system('clear')
        print("%20s | %s" % ("CLASS POINTER", "COUNT"))
        print("%20s | %s" % ("", ""))
        top = list(reversed(sorted([(counts.get(key).value, key.value) for key in counts.keys()])))
        top = top[:10]
        for (count, ptr) in top:
            print("%20s | %s" % (ptr, count))
        counts.clear()

Here's the outcome of this 42-line program: a cool live updating view showing us how many of each class was allocated! So awesome.

how do you get the name of a class though?

So far this was relatively easy. Having the address of a class is not that useful though – it doesn't mean anything to me that there were 49 instances of 94477659822920 allocated. So I wanted to get the name of each class!

Very helpfully, there's a rb_class2name function in Ruby that does this – it takes a class pointer and returns a char * (string) with the name. But I wasn't inside the Ruby process, so I couldn't exactly call the function. OR COULD I?! Calling the function did seem way easier than trying to reverse engineer all the Ruby internals :)

Our goals:

1. call the rb_class2name function
2. don't disturb the process we're profiling at all (certainly don't call any functions in it!)

I ended up writing a separate Rust program to map pointers into class names.

mapping the ruby process's memory into my memory

My (terrible/delightful) plan for calling rb_class2name was basically – copy all the memory maps from the target process into my profiler process, and then just call rb_class2name and hope it works. Then any memory my target process has, I have too!! And so I can just call functions from that process as if they were functions in my process.

Here is the relevant code snippet for copying the memory maps. The copy_map function is defined here. Basically I could copy all the memory maps except the ones called "syscall" and "vvar" which I couldn't copy. Not sure what those are but I don't think I needed them.
    for map in maps {
        if map.flags == "rw-p" {
        }
        if map.flags == "r--p" {
        }
        if map.flags == "r-xp" {
            copy_map(&map, &source, PROT_READ | PROT_WRITE | PROT_EXEC).unwrap();
        }
    }

calling rb_class2name

Calling rb_class2name is pretty easy – I just needed to find the address of rb_class2name (which I already know how to do from rbspy), cast that address to the right kind of function pointer (extern "C" fn (u64) -> u64), and then call the resulting function! Of course all of this (copying the memory maps, casting essentially a random address into a function pointer, calling the resulting function) is unsafe in Rust, but I can still do it! When I finally got this to work at like 9pm today I was so delighted.

segfaults

I kept running into segfaults when trying to translate class pointers into names. Instead of debugging this (I just wanted to get a demo to work!!) I decided to just figure out how to ignore the segfaults because it wasn't always segfaulting, just sometimes.

here is what I did (this is silly, but it was fun):

1. before doing the thing that causes the segfault, fork
2. in the child process, try to do the potentially segfaulting thing and print out the answer
3. if the child process segfaults, ignore it and keep going

this worked great.

how the Rust program and the Python program work together

the way the final demo works is:

1. the Python program is in charge of getting class pointers + counting how many times each of them has been allocated (with uprobes + BPF)
2. the Rust program is in charge of mapping class pointers to class names – you call it with a PID and a list of class pointers as command arguments, and it prints out the mappings to stdout

This is of course all a hacky mess but it worked and I got it to work in 1 day which made me super happy!
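The fork-and-ignore-the-segfault recipe above can be sketched outside of Rust too; here's a minimal POSIX-only Python analogue (maybe_segfault is a made-up stand-in for the risky rb_class2name call, not the post's actual code):

```python
import os

def call_in_child(risky, arg):
    """Run risky(arg) in a forked child; return its answer as a string,
    or None if the child crashed (e.g. died of SIGSEGV)."""
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                 # child: attempt the call, report, exit
        os.close(read_fd)
        try:
            os.write(write_fd, str(risky(arg)).encode())
        finally:
            os._exit(0)
    os.close(write_fd)           # parent: read whatever the child managed to send
    answer = os.read(read_fd, 4096)
    os.close(read_fd)
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status):   # child killed by a signal: ignore it, move on
        return None
    return answer.decode() or None

def maybe_segfault(ptr):         # made-up stand-in for rb_class2name
    if ptr == "bad":
        os.kill(os.getpid(), 9)  # simulate the crash
    return "ClassName"

print(call_in_child(maybe_segfault, "good"))  # -> ClassName
print(call_in_child(maybe_segfault, "bad"))   # -> None
```

The parent never shares an address space with the risky call, so a crash in the child costs nothing but one lost lookup.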
I think it should be possible to do this all in Rust – as long as I can compile and save the appropriate BPF program, I should be able to call the right system calls from Rust to insert that compiled BPF program into the kernel without using bcc. I think. design principle: magic The main design principle I’m using right now is – how can I build tools that just feel really magical? (they should also hopefully be useful, of course :)). But I think that eBPF enables a lot of really awesome things and I want to figure out how to show that to people! I feel like this idea of streaming you live updates about what memory your Ruby process is allocating (without having to make any changes in your Ruby program beforehand) feels really magical and cool. There’s still a lot of work to do to make it useful and it’s not clear how stable I can make it, but I am delighted by this demo!
# How to prove the uniqueness theorem in an unbounded domain?

1. Apr 7, 2010

### netheril96

I read a lot of books on the uniqueness theorem of the Poisson equation, but all of them are confined to a bounded domain $$\Omega$$, i.e.

"Dirichlet boundary condition: $$\varphi$$ is well defined at all of the boundary surfaces.
Neumann boundary condition: $$\nabla\varphi$$ is well defined at all of the boundary surfaces.
Mixed boundary conditions (a combination of Dirichlet, Neumann, and modified Neumann boundary conditions): the uniqueness theorem will still hold."

However, in the method of mirror images, the domain is usually unbounded. For instance, consider the electric field induced by a point charge with an infinitely large grounded conductor plate. In all of the textbooks, it is stated that "because of the uniqueness theorem...", but NO book has ever proved it in such a domain!!!

Some may say that we can regard infinity as a special surface, but we CAN'T, since this "surface" has an infinite area. I tried to prove it in the same way as in a bounded domain, i.e. with the electric potential known on a bounded surface and $$\varphi \to 0{\rm{ }}(r \to \infty )$$. I ended up with $$\int_S {\phi \frac{{\partial \phi }}{{\partial n}}} dS = {\int_V {\left( {\nabla \phi } \right)} ^2}dV$$ in which $$\phi$$ is the difference between two possible solutions of the electric potential.

Let S be the surface of an infinite sphere; we have $$4\pi \int_{r \to \infty } {{r^2}\phi \frac{{\partial \phi }}{{\partial r}}} dr = {\int_V {\left( {\nabla \phi } \right)} ^2}dV$$

We have $$\phi \to 0(r \to \infty )$$, but that doesn't imply $${r^2}\phi \frac{{\partial \phi }}{{\partial r}} \to 0(r \to \infty )$$

So we can't conclude that $$\nabla \phi \equiv 0$$, and so the uniqueness theorem doesn't hold (or rather, we cannot prove it in the same way as in a bounded domain).

Or can anybody here prove that $${r^2}\frac{{\partial \phi }}{{\partial r}}$$ is bounded?

2.
Apr 7, 2010 ### fluidistic Maybe these notes can help (See chapter 4. There are a lot of orthographic errors. Replace "Poison" by "Poisson" and so on) http://www.famaf.unc.edu.ar/~reula/Docencia/Electromagnetismo/part1.pdf In case it doesn't help, let us know. 3. Apr 7, 2010 ### netheril96 I do find a "proof" of uniqueness theorem in an exterior domain,but it just assumes that $$E = O\left( {\frac{1}{{{r^2}}}} \right)(r \to \infty )$$ which is the same as what I want to prove,that is $${r^2}\frac{{\partial \phi }}{{\partial r}} = O\left( 1 \right)(r \to \infty )$$ 4. Apr 8, 2010 ### Meir Achuz An 'infinite domain' means a bounding surface of radius R in the limit as R-->infty. Just like any other surface, any BC that makes the surface integral go to zero is a suitable BC for uniqueness. You can't 'prove' your last equation because that is one of the possible BCs. 5. Apr 8, 2010 ### netheril96 But when do this kind of boundary condition hold? For instance,how do you know that the electric field induced by a point charge with a infinitely large grounded plate satisfies the boundary condition of $${r^2}\frac{{\partial \phi }}{{\partial r}} = O(1)(r \to \infty )$$? 6. Apr 8, 2010 ### Meir Achuz You don't 'know' it. That BC does not hold for an accelerating charge. Imposing that BC gives the static solution. For your infinite plane case, the total charge is zero so that BC does hold. 7. Apr 8, 2010 ### netheril96 I don't think it is obvious that the total charge being zero makes that BC hold.You should give me a derivation. 8. Apr 9, 2010 ### clem Just use Gauss's law. 9. Apr 9, 2010 ### netheril96 You are right! 
But I found a glitch in my previous "demonstration". It should have been $${\int_V {\left| {\nabla \phi } \right|} ^2}dV = \int_S {\phi \nabla \phi \cdot d{\bf{S}}} = \int_{0 \le \theta \le \pi ,0 \le \varphi < 2\pi } {{r^2}\sin\theta\,\phi \frac{{\partial \phi }}{{\partial r}}} \,d\theta \,d\varphi$$

So here we cannot separate the two factors $$\phi$$ and $${{r^2}\frac{{\partial \phi }}{{\partial r}}}$$ into two integrals. How to go on with it then? (We still have $$\phi \to 0\left( {r \to \infty } \right)$$.)

Last edited: Apr 9, 2010
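For readers following along, here is one way to finish the estimate without separating the factors, assuming (as clem's Gauss's-law argument gives for a localized charge distribution with the stated boundary condition) that $$\phi = O(1/r)$$ and $$\frac{\partial\phi}{\partial r} = O(1/r^2)$$ as $$r \to \infty$$:

```latex
% Bound the surface term over a sphere of radius r, using dS = r^2 d\Omega:
\left| \oint_S \phi \,\frac{\partial \phi}{\partial r}\, dS \right|
  \le 4\pi r^2 \cdot \frac{C}{r} \cdot \frac{C'}{r^2}
  = \frac{4\pi C C'}{r} \;\xrightarrow{\; r \to \infty \;}\; 0
```

Hence $$\int_V |\nabla\phi|^2 \, dV = 0$$, so $$\nabla\phi \equiv 0$$ and $$\phi$$ is constant; the condition $$\phi \to 0$$ at infinity then forces $$\phi \equiv 0$$, i.e. the two solutions coincide.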
Definition of Irreducibleness. Meaning of Irreducibleness. Synonyms of Irreducibleness

Definition of Irreducibleness

Irreducible Irre*du"ci*ble, a.
1. Incapable of being reduced, or brought into a different state; incapable of restoration to its proper or normal condition; as, an irreducible hernia.
2. (Math.) Incapable of being reduced to a simpler form of expression; as, an irreducible formula. Irreducible case (Alg.), a particular case in the solution of a cubic equation, in which the formula commonly employed contains an imaginary quantity, and therefore fails in its application. -- Irre*du"ci*ble*ness, n. -- Ir`re*du"ci*bly, adv.

Meaning of Irreducibleness from wikipedia

- Irreducible complexity (IC) is the argument that certain biological systems cannot have evolved by successive small modifications to pre-existing functional...
- In mathematics, an irreducible polynomial is, roughly speaking, a polynomial that cannot be factored into the product of two non-constant polynomials...
- In philosophy, a phenomenon is governed by the principle of irreducibility when a complete account of an entity is not possible at lower levels of explanation...
- specifically in the representation theory of groups and algebras, an irreducible representation (ρ, V) or irrep of an algebraic...
- In algebraic geometry, an irreducible algebraic set or irreducible variety is an algebraic set that cannot be written as the union of two proper algebraic...
- be irreducible if it is not a product of two non-units, or equivalently, if every factoring of such element contains at least one unit. Irreducible elements...
- Computational irreducibility is one of the main ideas proposed by Stephen Wolfram in his book A New Kind of Science. Wolfram terms the inability to shortcut...
- In mathematics, the concept of irreducibility is used in several ways. A polynomial over a field may be an irreducible polynomial if it cannot be factored...
- Irreducible Mind: Toward a Psychology for the 21st Century is a 2007 psychological book by Edward Francis Kelly, Emily Williams Kelly, Adam Crabtree,...
- especially in the field of ring theory, the term irreducible ring is used in a few different ways. A (meet-)irreducible ring is one in which the intersection of...
Two objects of mass $$10$$ kg and $$20$$ kg respectively are connected to the two ends of a rigid rod of length $$10$$ m with negligible mass. The distance of the center of mass of the system from the $$10$$ kg mass is:
1. $$5$$ m
2. $$\frac{10}{3} \mathrm{~m}$$
3. $$\frac{20}{3} \mathrm{~m}$$
4. $$10$$ m

Subtopic: Center of Mass |

To view explanation, please take trial in the course below. NEET 2023 - Target Batch - Aryan Raj Singh

Launched MCQ Practice Books
Prefer Books for Question Practice? Get NEETprep's Unique MCQ Books with Online Audio/Video/Text Solutions via Telegram Bot

Match List-I with List-II.

List-I (Electromagnetic waves) | List-II (Wavelength)
(a) AM radio waves | (i) $$10^{-10}~\mathrm{m}$$
(b) Microwaves | (ii) $$10^{2}~\mathrm{m}$$
(c) Infrared radiations | (iii) $$10^{-2}~\mathrm{m}$$
(d) X-rays | (iv) $$10^{-4}~\mathrm{m}$$

Choose the correct answer from the options given below:
     (a)    (b)    (c)    (d)
1.   (ii)   (iii)  (iv)   (i)
2.   (iv)   (iii)  (ii)   (i)
3.   (iii)  (ii)   (i)    (iv)
4.   (iii)  (iv)   (ii)   (i)

Subtopic: Electromagnetic Spectrum |

The energy that will be ideally radiated by a $$100$$ kW transmitter in $$1$$ hour is:
1. $$1\times 10^{5}$$ J
2. $$36\times 10^{7}$$ J
3. $$36\times 10^{4}$$ J
4. $$36\times 10^{5}$$ J

Subtopic: Power |
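As a sanity check on the first question above, the center-of-mass formula can be evaluated directly (a throwaway sketch, not part of the question bank):

```python
# x_cm = sum(m_i * x_i) / sum(m_i), with the origin at the 10 kg mass
masses = [10.0, 20.0]    # kg
positions = [0.0, 10.0]  # m along the rod

x_cm = sum(m * x for m, x in zip(masses, positions)) / sum(masses)
print(x_cm)  # -> 6.666..., i.e. 20/3 m from the 10 kg mass (option 3)
```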
An ideal gas undergoes four different processes from the same initial state as shown in the figure below. Those processes are adiabatic, isothermal, isobaric, and isochoric. The curve which represents the adiabatic process among $$1,$$ $$2,$$ $$3$$ and $$4$$ is:
1. $$4$$
2. $$1$$
3. $$2$$
4. $$3$$

Subtopic: Types of Processes |

Plane angle and solid angle have:
1. both units and dimensions
2. units but no dimensions
3. dimensions but no units
4. no units and no dimensions

Subtopic: Dimensions |

In half-wave rectification, if the input frequency is $$60$$ Hz, then the output frequency would be:
1. $$120$$ Hz
2. zero
3. $$30$$ Hz
4. $$60$$ Hz

Subtopic: Rectifier |
When two monochromatic lights of frequency $$\nu$$ and $$\frac{\nu}{2}$$ are incident on a photoelectric metal, their stopping potentials become $$\frac{V_{s}}{2}$$ and $$V_s$$, respectively. The threshold frequency for this metal is:
1. $$\frac{3}{2} \nu$$
2. $$2\nu$$
3. $$3\nu$$
4. $$\frac{2}{3} \nu$$

Subtopic: Einstein's Photoelectric Equation |

The displacement-time graphs of two moving particles make angles of $$30^\circ$$ and $$45^\circ$$ with the x-axis as shown in the figure. The ratio of their respective velocities is:
1. $$1: \sqrt{3}$$
2. $$\sqrt{3}: 1$$
3. $$1:1$$
4. $$1:2$$

Subtopic: Graphs |

The dimensions $$[\mathrm{M L T^{-2} A^{-2}}]$$ belong to the:
1. electric permittivity
2. magnetic flux
3. self-inductance
4. magnetic permeability

Subtopic: Dimensions |
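For the stopping-potential question above, a sketch using Einstein's photoelectric equation $$eV_{s} = h\nu - h\nu_0$$, taking the given data at face value:

```latex
e\frac{V_s}{2} = h\nu - h\nu_0, \qquad eV_s = \frac{h\nu}{2} - h\nu_0
```

Doubling the first equation and equating the two expressions for $$eV_s$$ gives $$2h\nu - 2h\nu_0 = \frac{h\nu}{2} - h\nu_0$$, so $$\nu_0 = \frac{3}{2}\nu$$, which is option 1.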
The peak voltage of the ac source is equal to:
1. $$1/\sqrt{2}$$ times the rms value of the ac source
2. the value of voltage supplied to the circuit
3. the rms value of the ac source
4. $$\sqrt{2}$$ times the rms value of the ac source

Subtopic: RMS & Average Values |
# Should I apply my treatment as a continuous gradient, or in blocks?

I am designing an experiment, and am looking for advice on how to apply my treatment most efficiently. I'll give an example to explain.

My response is car efficiency – how many miles per gallon of fuel the car uses. I have two groups of cars – one from Factory A with a regular factory manager, and one from Factory X, which has a fuel efficiency expert as its manager. There are 20 cars from each factory. Each factory also has a mixture of ingredients to produce different fuel with, and has produced a slightly different fuel for each car. None of the cars have the exact same fuel, but I know the exact ingredients and amounts that have been used. Finally, I will be testing the cars' efficiencies while driving at different speeds, and here is where my main question lies. Should I use a continuous gradient of speeds, or should I use two blocks of speeds?

I have a few hypotheses I want to test:

• Cars from Factory X are more efficient than cars from Factory A.
• The mix of fuel influences the efficiency of the car.
• Efficiency increases with speed, but more so for cars from Factory X than cars from Factory A.

To test the effect of speed on efficiency I could drive each pair of cars (one from A, one from X) at a different speed, spread evenly between 20mph and 80mph. Alternatively, I could drive half of the cars at 20mph, and half at 80mph. Bear in mind that I have no pre-experiment knowledge of how efficiency varies with speed, but I suspect it is not a linear relationship – probably more of a threshold change. Which approach would be more powerful for testing my hypotheses?

Another question – I was hoping that a simple regression model would be able to answer most of my questions, something like this:

efficiency ~ Speed + Fuel + Factory

"Fuel" would potentially be a PCA of the fuel ingredients. Would any of the predictors need to be random, or nested effects, or interactions?
Many thanks

• Are there some other factors you want to control? Like drivers? What about a paired experiment: you select pairs of cars, one from each factory, and drive them at the same time, same conditions, same speed, once; then repeat, switch drivers, and do the same again. Then you could avoid the need for some specific regression model ... Oct 22, 2020 at 11:27
• Thanks. I've mentioned all of the relevant factors – "driver" is not one of them. I can pair cars from the two factories to have the same speed, but I can't control the "fuel", so that will always differ between the two cars. – rw2 Oct 22, 2020 at 11:51

## 1 Answer

This depends on whether your goal is only to test the hypotheses you stated, or if you are also interested in estimating and criticizing your regression model.

First a simple example: If you are interested in a simple regression model $$y=\beta_0 + \beta_1 x+\epsilon$$ with possible values for $$x$$ being the interval $$[a, b]$$, then the optimal design (at least for minimizing prediction variance, but probably also for other criteria) is to take half of the observations at $$x=a$$ and the other half at $$x=b$$. But that assumes you trust your model blindly; if the straight-line model is wrong (maybe there is some curvature or other nonlinearities), you will not detect that from such a design! So it might be good to allow for a few interior points to allow for model criticism.

This in part answers your question: Should I use a continuous gradient of speeds, or should I use two blocks of speeds? With many different values for speed, you allow for nonlinearities and model criticism. You could use optimal design theory (in R, package AlgDesign; for an example see Some questions about AlgDesign for Fractional Factorial Design in R) to find good speed values, assuming some flexible model.

But combine this with a paired design: run the experiment with pairs of matched cars from the two factories. That will also allow for tests not depending on the regression model.
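The endpoint-versus-gradient trade-off in the answer can be checked numerically: for simple linear regression, $$\mathrm{Var}(\hat\beta_1) = \sigma^2 / \sum_i (x_i - \bar x)^2$$, so the design that maximizes the spread of the design points gives the most precise slope estimate. A quick sketch using the question's 20 cars and 20–80 mph range (illustrative numbers only):

```python
# For y = b0 + b1*x + noise, Var(b1_hat) = sigma^2 / sum((x - xbar)^2):
# the larger the spread sum, the more precise the slope estimate.

def spread(xs):
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs)

endpoints = [20] * 10 + [80] * 10                 # two blocks of speeds (mph)
gradient = [20 + 60 * i / 19 for i in range(20)]  # evenly spaced 20..80 mph

print(spread(endpoints))  # -> 18000.0
print(spread(gradient))   # -> about 6631.6, i.e. roughly 2.7x more slope variance
```

The endpoint design wins on slope precision, at the cost (as the answer notes) of being blind to any curvature between the two speeds.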
What probability indicates the p-value? (more than one answer possible)

a: The probability of a type I error.
b: The probability of a type II error.
c: The probability of rejecting the alternative hypothesis when it is false.
d: The probability of rejecting the null hypothesis when it is false.

I think none of these answers is correct. Can anyone answer this question and explain it to me?

• self-study? please add the tag if so. – Xi'an Apr 5 '15 at 18:28
• Check other questions tagged p-value: stats.stackexchange.com/questions/tagged/p-value I am sure you'll find there multiple relevant answers. Also check the tour on how the site works: stats.stackexchange.com/tour – Tim Apr 5 '15 at 19:26
• I already put my effort into this question and understand that a type I error is defined as rejecting the null hypothesis when it is in fact true, and a type II error is defined as not rejecting the null hypothesis when it is false. The p-value is the region where I want to make a decision about rejecting the null hypothesis. However, I am not quite sure about this question, so I need help. Can anybody help? – Nong Nick Apr 5 '15 at 19:40
• Well this is rather upsetting. How about "Between zero and four answers possible"? See this paper distinguishing $\alpha$ from $p$ – conjugateprior Apr 5 '15 at 20:38
• Exactly. None are correct. P is not a type 1 (or a type 2) error rate, although it seems that your textbook or instructor thinks it is. – conjugateprior Apr 5 '15 at 23:14
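One way to see why none of the options fit: under a true null hypothesis the p-value is (approximately) uniformly distributed, and it is the decision rule "reject when p < α" that has type I error rate α; the p-value itself is a statistic, not an error rate. A small simulation sketch (made-up sample sizes, z-test with known variance):

```python
import math
import random

random.seed(0)

def two_sided_p(data):
    """Two-sided p-value for H0: mean = 0, known sigma = 1 (z-test)."""
    z = sum(data) / math.sqrt(len(data))     # ~ N(0, 1) under H0
    return math.erfc(abs(z) / math.sqrt(2))  # = P(|Z| >= |z|)

alpha, trials, rejections = 0.05, 4000, 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(30)]  # H0 is true here
    if two_sided_p(sample) < alpha:
        rejections += 1

# The individual p-values ranged over all of [0, 1]; the *procedure*
# rejects the true null about 5% of the time, and that 5% is alpha.
print(rejections / trials)  # close to 0.05
```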
# qiskit.circuit.library.MCXGate.get_num_ancilla_qubits

static MCXGate.get_num_ancilla_qubits(num_ctrl_qubits, mode='noancilla') [source]

Get the number of required ancilla qubits without instantiating the class.

This staticmethod might be necessary to check the number of ancillas before creating the gate, or to use the number of ancillas in the initialization.
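For intuition, the counts this static method returns for the common modes can be sketched in a few lines. This is a hypothetical reimplementation based on the standard MCX decompositions, not Qiskit source; check it against get_num_ancilla_qubits in your installed Qiskit version:

```python
def mcx_ancilla_count(num_ctrl_qubits: int, mode: str = "noancilla") -> int:
    """Hypothetical mirror of the MCX ancilla requirements per mode."""
    if mode == "noancilla":
        return 0                                # synthesized directly, no helpers
    if mode == "recursion":
        return 1 if num_ctrl_qubits > 4 else 0  # one borrowable qubit for big gates
    if mode in ("v-chain", "v-chain-dirty"):
        return max(0, num_ctrl_qubits - 2)      # Toffoli chain needs n - 2 helpers
    raise ValueError(f"unknown mode: {mode}")

print(mcx_ancilla_count(5))             # -> 0
print(mcx_ancilla_count(5, "v-chain"))  # -> 3
```

Checking the count up front like this avoids building a circuit that turns out to need more qubits than the target device has.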
# Surviving Frostbite in the Cryptocurrency Market: Find the Simple and Manage Complex Assets

2018 was a havoc-ridden year for the cryptocurrency market, which went from the jubilant highs of December 2017 and January and has been falling from that point ever since. Now, with December 2018 come and gone, we're reminded that the crypto market has effectively slid down by 80%.

This is all in spite of the fact that there has been a fair deal of good news coming from the cryptocurrency and blockchain world, with a number of institutional companies and investors getting involved in the market and even more to follow. So while the fundamentals have remained largely the same, investors have proven quite irrational in how they approach the market; we see this with new developments, where good news is followed by an overwhelmingly negative reception from investors.

So what exactly is driving this irrational pattern? We can describe it as a form of information asymmetry among investors. This asymmetry emerges in a number of different ways; one example is the degree to which an investor is informed about the market, including whether or not a project has lived up to the sometimes lofty ambitions it set for delivering innovation to both investors and users.

Another, more simplistic measure is the complexity of the product, from the intrinsic value of the asset to how easy it is to apply. While a number of things within the blockchain space can be explained in a few phrases, such as Ethereum being the answer to middlemen, giving an apt description of a system while providing a version of it that lives up to that description is a thorough challenge. This is a challenge that emerges for a large number of blockchain products.
This is especially true considering how untrodden the path is: from the moment many of these technical features are proposed in a white paper, it can prove a challenge to make sure they live up to the initially described characteristics. But for the cryptocurrency market to become less volatile, we must do all that is possible to eradicate variables such as the information asymmetry that emerges from these technical challenges.

## Managing Complexity – The Investor's Perspective

When we think of investors in the stock market, we usually picture a majority of scrupulously dressed and very well informed professionals, buying and selling positions with a great deal of forethought. If we are to believe that this kind of investing is steadily becoming more 'proletarianised', then the cryptocurrency space is way ahead in how its information is distributed.

Looking at the performance of cryptocurrencies over 2018, we see that information has a bipolar distribution. What we mean by this is that the majority of investors are not well informed about cryptocurrency, or about why they have made the investments they have, while a minority take a considerable amount of information into account when investing.

This disparity in information demonstrates why the market can prove rather erratic: only those who are well informed benefit, while the majority make financial fools of themselves by acting emotionally as opposed to logically. But much like conventional markets, there are times when not even logic can save us and the markets from the panic of herd mentality. When this happens in the crypto market, it leaves a riotous level of volatility, of the kind far more frequently seen in Over the Counter (OTC) stocks.
Value, or at least the perception of it, is enough to motivate decision making in the minds of investors; take a large body of investors and you have a pool of people who shape the underlying value of an asset, tangible or digital. While the fundamentals of investing in something like gold, securities or oil may have changed over the last 100 years, the same isn't true for cryptocurrencies.

Cryptos, by contrast, have maintained a certain degree of consistency, and this is a major driver for their use as a store of value, often boiled down to the acronym SoV; it is what makes cryptocurrencies distinct from Over the Counter stocks.

Many otherwise uninformed buyers tend to be reactionary: buying low and selling high, gaining a profit that they can transfer over to a less volatile fiat currency. In contrast, those who are more informed know how to make longer-term plays and see the value of a cryptocurrency in its mechanics, while also making sure they buy low. These same investors use cryptocurrencies as a vehicle to follow longer-spanning boom and bust cycles.

### Onto Projects – Follow Their Complexity

If you are the kind of person who has seen companies and projects begin, peak and collapse, you can often spot those that make too many promises and show little delivery over time. So where does this disconnect come from? The disparity between the features a company promises and the finished product is the complexity that hides just behind the crypto development process.

And this is where a large number of blockchain projects promise a remarkable amount in their white paper and raise a large amount of money from funding or an Initial Coin Offering, only to find that the finished product does not live up to expectations, or simply never materializes.
This is very similar to the disparity that exists between investors: projects show a similar distribution of technical prowess, with a small number making sure that what they deliver on a practical level lines up as closely as possible with the white paper they publish. While these are a minority, the vast majority of projects instead take the course of overpromising on the software and its accompanying features. As a result, the development phase takes more complex twists and turns, rendering some of it redundant, or too complex to follow through with.

To a certain extent, the cryptocurrency and blockchain space is unfamiliar territory, with a great deal of its technology, and how to apply it, being rather complex. Alongside this, there are only a small number of people suited enough to the field to make sense of it and apply it in a way that is innovative and viable as a product for investors.

There are a good number of projects out there that make lofty claims about what they can achieve with their initiative, but with such a small number of people able to take these concepts from paper to practice, it's easy for a company to find itself caught between overpromising and delivering something that is just impractical. Conversely, there are a good number of projects out there that have been funded using some kind of bait and switch, resulting in their market valuation collapsing over time, especially once investors catch on to what they're claiming.

### Simplifying Matters – For Investors

If you're an 'old dog' kind of person, especially when it comes to cryptocurrencies, stop now, because there are more than enough ways to get the information you need in order to move from the uninformed to the informed. Here are some of the simple lessons to bear in mind when cutting through the implied complexity of blockchain / cryptocurrency projects.
• Be a Skeptic – When something is offered up on a silver platter, the best approach is not to trust it. Extraordinary claims require some rather mind-blowing evidence to cash those theoretical checks. If a project fails to provide it, find someone on the outside looking in and see what opinion they have of it. Try to understand as much as you can about the project and weigh up whether or not it can deliver. • Make Sure to Do Your Homework – There is no substitute for doing your homework on a blockchain or cryptocurrency project before you start putting money into it haphazardly. This can take some time, so reason has to prevail over emotion: research can mean the difference between a good future investment and dumping money down a well. Consider questions like these: are there other projects that serve this niche more effectively? What makes this specific project stand out? If it can answer these questions well, then you may have a winner. • Dollar-Cost Averaging – This is not common in the cryptocurrency world, but it remains an effective financial strategy for managing purchases and measuring profitability. Dollar-cost averaging means making regular purchases of a fixed amount over a longer span of time. It changes the way investors approach cryptocurrency: rather than panic-buying as much as possible whenever the price hits a low, you make continuous purchases of a set amount each week or month. This adds far more rationality to your crypto purchases.
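The bookkeeping behind dollar-cost averaging is simple enough to sketch in a few lines of Python. The prices and amounts below are purely hypothetical, chosen only to show how a fixed recurring spend produces an average cost per coin equal to the harmonic mean of the purchase prices:

```python
def dca_summary(fixed_spend, prices):
    """Return (total coins bought, total spent, average cost per coin)
    for a fixed-dollar purchase made at each price in `prices`."""
    coins = sum(fixed_spend / p for p in prices)  # each buy gets spend/price coins
    spent = fixed_spend * len(prices)
    return coins, spent, spent / coins

# Hypothetical example: $100 per week over four weeks at varying BTC prices.
coins, spent, avg_cost = dca_summary(100.0, [4000.0, 3200.0, 4000.0, 5000.0])
# avg_cost ≈ $3,950.62 per BTC — below the simple mean price of $4,050,
# because the fixed spend automatically buys more coins when the price dips.
```

Note that the average cost always comes out at or below the arithmetic mean of the prices you paid, which is the quiet advantage of buying a fixed dollar amount rather than a fixed number of coins.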
• Psychological Periodicity – Periodization in the cryptocurrency markets is something you don't commonly see in other markets, which makes it a unique attribute. The point is to instill in investors' minds that, after investing in a cryptocurrency, it may take some time before the market reaches a point where the investment yields significant profits. The time we live in now bears a great deal of similarity to 2014: the high points have come and gone, and the market has spent the year in a state of sell-off. For the lion's share of 2015, the Bitcoin/US dollar pair hovered around the two-hundred-dollar mark, and it proved a ripe market for buying Bitcoin. Given the similarities between 2018 and 2014, there's a strong possibility that 2019 will play out much like 2015. ### For Projects – Finding Simplicity Complexity is the one problem companies and projects need to overcome, especially when it opens a gaping crevasse between what you promise and what you can actually deliver. To navigate this effectively, here are some tips to follow: • Do Not Overpromise Features – As a newly formed company, it is easy to get swept up in the optimism of what you're trying to do, or to feel pushed into promising more in order to entice potential investors and bring in as much capital as possible from Series A funding or an ICO. If you're not interested in looking like a fool in front of your prospective investors, for goodness' sake don't overpromise. The priority is deliberating whether the features of the project are achievable with the team of developers you have, the money you have, and the time-frame you have. So long as you can realistically answer all of these questions, with some leeway for delays or hiccups along the way, then you can promise it.
If you can't, simply don't. • Avoid Unnecessary Complexity – Once you have plotted out what is feasible when promising the elements, mechanics, and features of your project, or have made a strategic change of position to address a technical impediment, abide by one rule for as long as you can: avoid complexity. Blockchain, and the cryptocurrencies associated with it, is complex enough without adding whole new layers on top. It is crucial to avoid complexity wherever you can, not least to keep your development team from going prematurely grey. Less is more in the blockchain space: the more fluent people become with its fundamental complexity, the better they can put it to work, and the more substantial a product you can put into practice. ## So, In Conclusion By working closely with your team, and with the blockchain and cryptocurrency community at large, this ecosystem can take great strides toward stamping out unnecessary market volatility and preventing the avoidable loss of ambitious projects. The greater our collective comprehension, the easier it will be to create substantial technology and to educate newcomers, leading to mass adoption. If 2019 has the potential to be like 2015, it will be a year of market consolidation for developers and investors, shaking off underperforming and fraudulent projects. These fundamentals haven't changed much over time, which makes it easier to get accustomed to what needs to be done. If this theorizing is right, then 2019 will see cryptocurrencies rocket upwards once again.