fCash Maturity

Lender settlement

When your fCash reaches maturity, it automatically converts to cTokens using the fCash/cToken exchange rate at that time. For example, if you have 100 fDAI and the fDAI/cDAI exchange rate at maturity is 0.1 (10 cDAI = 1 DAI), your fDAI would automatically convert into 1,000 cDAI and start earning the cDAI supply rate. The fCash/cToken exchange rate is stored on Notional for every maturity - this is called the settlement exchange rate. All fCash at a given maturity converts to cTokens at the settlement exchange rate. This means it doesn't matter whether you settle immediately at maturity or wait - you will get the same amount of cTokens either way, so you don't lose any interest by not settling your position immediately. In this example, a lender has 100 fDAI that converts into 1,000 cDAI at maturity using a settlement exchange rate of 0.1. No matter when they come to settle or claim their cash, they will always get 1,000 cDAI. This means they will always earn the cDAI lending rate after maturity, and the DAI value of their cDAI will grow.

Borrower settlement

As a borrower, you will have a negative fCash balance at maturity which converts to cTokens at the settlement exchange rate. So if you have -100 fDAI, that converts into -1,000 cDAI at maturity. This means you will owe Notional 1,000 cDAI, and the DAI value of your debt will increase at the cDAI lending rate after maturity. If borrowers do not repay their debts by maturity, their debts can be rolled forward to the next maturity three months in the future. When a debt is rolled forward, the borrower is locked into the fixed rate at the next three-month maturity plus a penalty of 2.5%.
For example, if the interest rate at the next three-month maturity is 5%, a borrower who has not repaid their debt will be rolled forward and locked into an interest rate of 7.5% (5% + 2.5%) at that next maturity three months in the future.
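The settlement arithmetic above can be sketched in a few lines. This is an illustrative sketch only, not Notional's contract code; the function names and the decimal representation of rates are assumptions for the example.

```python
def settle_fcash(fcash_balance, settlement_rate):
    """Convert an fCash balance to cTokens at the settlement exchange rate.

    settlement_rate is the fCash/cToken rate stored at maturity, e.g. 0.1
    means 10 cTokens per unit of fCash's underlying. Works for lenders
    (positive balances) and borrowers (negative balances) alike.
    """
    return fcash_balance / settlement_rate

def rolled_borrow_rate(next_three_month_rate, penalty=0.025):
    """Fixed rate locked in when an unpaid debt is rolled forward."""
    return next_three_month_rate + penalty

print(settle_fcash(100, 0.1))              # lender: 100 fDAI -> 1000.0 cDAI
print(settle_fcash(-100, 0.1))             # borrower: -100 fDAI -> -1000.0 cDAI
print(round(rolled_borrow_rate(0.05), 4))  # 5% + 2.5% penalty -> 0.075
```

Because settlement uses the rate stored at maturity, calling this at any later time gives the same result, which mirrors the "settle now or later, same cTokens" property described above.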
HackerEarth - Golden rectangles Solution

You have N rectangles. A rectangle is golden if the ratio of its sides lies in [1.6, 1.7], both inclusive. Your task is to find the number of golden rectangles.

Input format
• First line: Integer N denoting the number of rectangles
• Each of the N following lines: Two integers W, H denoting the width and height of a rectangle

Output format
• Print the answer in a single line.

Constraints: 1 ≤ W, H ≤ 10^9

In the sample, there are three golden rectangles: (165, 100), (170, 100), (160, 100).

Solution in Python

n = int(input())
a = [list(map(int, input().split())) for i in range(n)]
print(sum(1 for i in a if 1.6 <= max(i) / min(i) <= 1.7))

Additional info: we use max(i) / min(i) so that the longer side is always divided by the shorter side. That is, for either (100, 120) or (120, 100), the division is 120/100.
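As a side note (this variant is not from the original post), the ratio test can be done entirely in integers, which sidesteps any floating-point rounding at the boundaries given that W and H can reach 10^9: 1.6 ≤ max/min ≤ 1.7 is equivalent to 16·min ≤ 10·max ≤ 17·min.

```python
def is_golden(w, h):
    # Compare 10*hi/lo against [16, 17] using only integer arithmetic,
    # so boundary cases like 170/100 = 1.7 are decided exactly.
    lo, hi = sorted((w, h))
    return 16 * lo <= 10 * hi <= 17 * lo

# The three golden rectangles from the sample, plus one that is not:
rects = [(165, 100), (170, 100), (160, 100), (100, 120)]
print(sum(is_golden(w, h) for w, h in rects))  # -> 3
```

In Python, `sum` over booleans counts the `True` values, so the answer falls out directly.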
GeometricDecomposability -- a package to check whether ideals are geometrically vertex decomposable

[CDSRVT] Mike Cummings, Sergio Da Silva, Jenna Rajchgot, and Adam Van Tuyl. Geometric vertex decomposition and liaison for toric ideals of graphs. Algebr. Comb., 6(4):965–997, 2023.
[CVT] Mike Cummings and Adam Van Tuyl. The GeometricDecomposability package for Macaulay2. Preprint, available at arXiv:2211.02471, 2022.
[DSH] Sergio Da Silva and Megumi Harada. Geometric vertex decomposition, Gröbner bases, and Frobenius splittings for regular nilpotent Hessenberg varieties. Transform. Groups, 2023.
[KMY] Allen Knutson, Ezra Miller, and Alexander Yong. Gröbner geometry of vertex decompositions and of flagged tableaux. J. Reine Angew. Math., 630:1–31, 2009.
[KR] Patricia Klein and Jenna Rajchgot. Geometric vertex decomposition and liaison. Forum Math. Sigma, 9:e70, 1–23, 2021.
[SM] Hero Saremi and Amir Mafi. Unmixedness and arithmetic properties of matroidal ideals. Arch. Math., 114:299–304, 2020.
MA3001 Mastergradsseminar i matematikk, høsten 2008 (Master's seminar in mathematics, fall 2008)

Sessions: Mondays 16:15-17:00 in Room 734 (SB II, Dept of Maths), Wednesdays 16:15-17:00 in Room 734.
Oral examination on Wednesday the 10th of December, Room 922 SB II, 13:00-17:00.
An essay is to be delivered no later than the 8th of December.
First lecture on Monday the 8th of September at 16:15 in Room 734.

In the fall of 2008 the main topic is the analytic theory of numbers; thus the connection between the prime numbers and the celebrated zeta function of Riemann is a central theme. Prerequisites: some complex analysis. If interested, come to SB II, Room 1152 on Thursday the 4th of September at 16:15.

Tentative book: Tom Apostol, "Introduction to Analytic Number Theory"
• Chapter 2, Arithmetical Functions and Dirichlet Multiplication: § 2.1 - § 2.9
• Chapter 3, Averages of Arithmetical Functions: § 3.1 - 3.5, 3.7, 3.10, 3.11
• Chapter 4, Some Elementary Theorems on the Distribution of Prime Numbers: § 4.1 - 4.5, 4.8; numerical example for the proof of Tschebyschef's theorem
• Chapter 6, Finite Abelian Groups and their Characters: at least § 6.8, 6.9, and 6.10; summation for B(x) and interesting sums
• Chapter 7, Dirichlet's Theorem on Primes in Arithmetical Progressions
• Chapter 11, Dirichlet Series and Euler Products: perhaps not § 11.9. In addition, Cahen's formulas for the abscissas.
• Chapter 12, The Functions ζ(s) and L(s, χ): at least the Riemann zeta function
• Chapter 13, Analytic Proof of the Prime Number Theorem: also a proof without the Riemann-Lebesgue lemma (instead, part of the contour is to the left of the abscissa 1; see Stein-Shakarchi, Complex Analysis)
Geek Challenge Results: Primetime Telephone Numbers

In last month's Geek Challenge, we asked for a 7-digit phone number that contains 22 primes among its sub-numbers. This was a unique problem that needed at least a little bit of computation (to check whether something is prime or not). Luckily, prime calculators are a dime a dozen across the interwebs, and many computer languages have a "prime check" method built in. Congrats to our winner, Alex Bruno!

A few people commented that the problem was too constrained, since giving the number of primes to be included made it simpler. This was by design! Alex Bruno came up with a way to greatly reduce the number of prime checks necessary by using prime-rich building blocks to create a solid foundation. Although 0373373 was not the answer I had in mind, it followed all the rules and was created in a very interesting way. No one submitted an answer for the 10-digit bonus, which is surprisingly different from the 7-digit case in many ways. Heavy optimization is necessary there to ensure your number crunching doesn't run for days. Thanks for playing! The formal analyses of my solution and Alex's solution are below.

Alex Bruno's Solution

A. Overview: My analysis split into two schools of thought:
1. Brute force it.
2. Start with smaller numbers as "building blocks" and put them together until we get a big enough number with the correct number of primes.

Either way, I needed a good way to calculate the number of primes in any given number. I didn't want to sit and do them all by hand; I wanted to be able to type a number into something and have it spit out how many. So, I fired up MATLAB and banged out a script that would take a number of any size, analyze every possible sub-number, and tell me how many primes there were. That script was useful in both approaches.

B. Approaches:

1. Brute force it
What it says on the tin. I took the script, replaced the while with a for, and told it to run every single 7-digit number from 1000000 to 9999999.
Obviously, this took a while (I let it run in the background while I worked the other method). After about 30 minutes, it spit out 3733797. This is one of my submissions and, as I'll explain in a second, is probably the more accurate one.

2. Building blocks
There are 28 possible sub-numbers in a 7-digit number (calculated using the nth triangle number formula, which is a sort of additive analogue of the factorial). If we need 22 primes, that means a scant six of them can be non-primes. To that end, I needed to find building blocks with as many primes as possible. I decided to start with 3-digit building blocks (six possible sub-numbers each). Starting from a list of prime numbers less than 1000, I looked for blocks with five primes out of six, and for blocks with all six: only 373 gives six primes out of six.

Obviously, I needed to use 373 as a building block. In fact, I could use it twice and only need to add one extra digit. That looked like this:

373 373 _ OR 373 _ 373 OR _ 373 373

This left me with 30 options to run, which was very simple using my code. The only combination that gives a full 22 primes is 0373373. This is because the 0 in front, while not being prime itself, makes several extra primes (since 37 is prime, so is 037, etc.). Even with losing that digit as a prime, it adds several others.

C. Conclusion: So, I have two answers; two 7-digit numbers that have 22 prime sub-numbers. While it is absolutely the less elegant of the two approaches, I would submit the brute-force answer (3733797) as the more correct one, because the prompt for this problem dealt with phone numbers, and I'm not sure they can start with a 0, as with my second answer. I'll argue that I might have gotten to that first answer using building blocks if I had kept going with that method; the brute-force answer does include two of the 3-digit building blocks I found (373 and 379). Unfortunately, I didn't have time to grapple with the 10-digit optional problem.
I can't imagine doing that either of the two ways I did, so I have to ask: what is the more elegant way to do this? I'm sure there's some sort of trick, and I would love to know what it is!

My Solution

Prime number calculation is a deeply researched field with many applications and approaches. Because of this, there are some highly efficient methods for calculating primes, and we can leverage them to create an efficient computational solution. As described, this challenge involves finding primes within a number, with the added complexity of multiple possible sub-groupings and prime combinations. Any given n-digit number has n(n+1)/2 possible groupings: one set of n consecutive digits, two sets of n-1 consecutive digits, and so on down to n sets of one digit, so the total number of groupings is the sum from 1 to n. For any given number, all groupings must be investigated to obtain the number of primes it contains. To avoid excessive calculation, two major computational savings can be applied:

1. Use a prime number sieve (such as the Sieve of Eratosthenes) to pre-calculate all prime numbers within the relevant range (all values up to 10^n - 1).
2. Use memoization to store a record of previously determined results.

The first optimization requires creating an array mask of prime/not-prime values. Then, whenever a number is encountered, the corresponding index in the array can be accessed rather than re-calculating the primality of the number. This approach requires more memory but substantially less computation, especially when many primality checks need to be carried out in a small range. There are several further optimizations that can be applied to the prime number sieve, which are not discussed here. The second optimization leverages an observation about the nature of the result for any given number.
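The sieve optimization can be sketched as follows. This is an illustrative Python reconstruction under stated assumptions, not the author's original code; the function names are invented for the example.

```python
def build_sieve(limit):
    """Sieve of Eratosthenes: sieve[k] is 1 iff k is prime, 0 <= k < limit."""
    sieve = bytearray([1]) * limit
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # Knock out all multiples of p in one slice assignment.
            sieve[p * p::p] = bytearray(len(range(p * p, limit, p)))
    return sieve

def count_prime_subnumbers(digits, sieve):
    """Count primes among all n(n+1)/2 contiguous sub-numbers of a digit string.

    int() maps a leading-zero entry like '037' to 37, which still counts it
    as a prime; that matches the scoring above for candidates like 0373373.
    """
    n = len(digits)
    return sum(sieve[int(digits[i:j])]
               for i in range(n) for j in range(i + 1, n + 1))

sieve = build_sieve(10 ** 7)  # covers every sub-number of a 7-digit candidate
print(count_prime_subnumbers("3733797", sieve))  # -> 22, the winning count
```

With the mask precomputed once, each of the 28 sub-number checks per candidate is a single array lookup, which is what makes a full scan of the 7-digit space feasible.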
Consider the graphic provided in the problem statement. From it, observe that it is possible to get the number of primes in the n-digit case by taking the sum of the two (n-1)-digit groups and subtracting the (n-2)-digit group shared by both. Alternatively stated, take the sum of the two 6-digit results and subtract the "middle" 5-digit case (so 9 = 6 + 8 - 5). Therefore, for any given number with more than two digits, it is not necessary to calculate the number of primes it contains directly from a sum over all of its groupings; the count can instead be found from these sub-groupings. An important caveat to this approach is that each entry with leading zeros must be considered a different value than the one without (i.e., in the example given, 07 is distinct from 7).

Applying these computational observations, along with the additional unstated constraint that the n-digit number cannot start with a zero, the best answer was found to be 3733797 with 22 primes (with a total computation time of approximately five seconds). The graphical representation of this is shown below.

The Bonus Solution

Extending this approach to the 10-digit case requires some code refactoring, because the prime sieve mask and the memoized prior results require data structures with more indices than a single unsigned 32-bit integer provides. This can be overcome in several ways. One possible approach involves creating custom data structures that act as wrappers around standard ones and using a 64-bit integer to address locations. Implementing these structures comes at a significant computational and memory-access cost, which substantially slows computation. Consequently, the 10-digit case took approximately 24 hours to solve. The best 10-digit number (with 38 primes out of a possible 55) was found to be 7000373379. A graphical representation of this number is shown below.
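One way to realize the memoized recurrence is sketched below (again an illustrative reconstruction, not the original code). One extra term is needed for the full number itself, which belongs to neither (n-1)-digit group; memoizing on the digit string keeps leading-zero entries distinct, as the caveat requires. A trial-division primality test stands in for the sieve lookup for brevity.

```python
from functools import lru_cache

def is_prime(n):
    """Simple trial division; a stand-in for the sieve mask lookup."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

@lru_cache(maxsize=None)
def count_primes(s):
    """Primes among all contiguous sub-numbers of digit string s.

    count(s) = count(left n-1 group) + count(right n-1 group)
             - count(shared middle n-2 group) + [s itself is prime]
    """
    if len(s) <= 1:
        return 1 if s and is_prime(int(s)) else 0
    return (count_primes(s[:-1]) + count_primes(s[1:])
            - count_primes(s[1:-1])
            + (1 if is_prime(int(s)) else 0))

print(count_primes("3733797"))  # -> 22
```

Because the cache is keyed on the digit string, a sub-group like "373" is evaluated once no matter how many candidates share it, which is where the savings over the direct 28-term sum comes from.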
As an additional interesting note, the distribution of the number of primes in a given number of digits is shown below:

Submit your Geek Challenge questions and ideas at geekchallenge@dmcinfo.com.
Aug 2017 challenge

Each month, a new set of puzzles will be posted. Come back next month for the solutions and a new set of puzzles, or subscribe to have them sent directly to you.

MIND-Xpander maths problem
What 'interesting' value do you get when you sum the squares of the first seven prime numbers?

Alphametic puzzles
Alphametic puzzles (sometimes known as Cryptarithms or Verbal Arithmetic) are puzzles where words or phrases are put together in an arithmetic formula such that numbers can be substituted for the letters to make the formula true. Find the numeric equivalent for each of the following alphametic expressions. Each letter is unique, with values between 0 & 9, and there may be more than one answer.
1. CHECK + THE = TIRES
2. NO + NOT + THAT = AGAIN

EQUATE+1 puzzle
Each row, column & diagonal is an equation, and you use the numbers 1 to 9 to complete the equations. Each number can be used only once. 'One' number has been provided to get you started. Find the remaining eight numbers that satisfy all the resulting equations. Note - multiplication (x) & division (/) are performed before addition (+) and subtraction (-).

There is more than one way of doing these puzzles, and there may well be more than one answer. Please let me and others know what alternatives you find by commenting below. We also welcome general comments on the subject and any feedback you'd like to give. If you have a question that needs a response from me or you would like to contact me privately, please use the contact form.

Get more puzzles!
If you've enjoyed doing the puzzles, consider ordering the books:
• Book One - 150+ of the best puzzles
• Book Two - 200+ with new originals and more of your favourites
Both in a handy pocket-sized format. Click here for full details.

Last month's solutions

MIND-Xpander Maths Problem
If N is a whole number and the following limits apply: 2N > 30 and 5N < 100, what values for N are possible?
• Lower limit: Where 2N > 30, i.e. N > 15, N must be greater than 15.
• Upper limit: Where 5N < 100, i.e. N < 20, N must be less than 20.
Therefore, the possible values for N between the lower & upper limits are 16, 17, 18 & 19.

Petite CIRCLE-Sums Puzzles
The number within each of the 4 sectors of the outer circle is equal to the sum of the three numbers in its sector. The numbers in the individual circles can only be 1 to 9, and each number can be used only once. One number has been provided to get you started. Find the remaining 4 numbers. Note: there may be more than one solution.

EQUATE+2 Puzzle
Each row, column & diagonal is an equation, and you use the numbers 1 to 9 to complete the equations. Each number can be used only once. 'Two' numbers have been provided to get you started. Find the remaining seven numbers that satisfy all the resulting equations. Note - multiplication (x) & division (/) are performed before addition (+) and subtraction (-).

3 Comments

Kim Barber on August 1, 2017 at 5:50 pm
Mr. Burgin, these puzzles are just THE BEST! So very fun and challenging. I love trying to figure them out each month (with the goal of successfully completing them before my kids do, for the bragging rights). Keep 'em coming…..
Kim Barber

Olad Imejee on August 1, 2017 at 8:52 pm
You kept me up way beyond my bedtime! Not fair! But not 'Sad'!

Ashvini kulkarni on October 1, 2017 at 11:22 pm
I am a maths teacher from India and I love your puzzles, and sometimes I share them with my students and challenge them to it. Your puzzles are doable and therefore enjoyable for everybody. Always looking forward to it. Thank you
Logic Quotes - 311 quotes on Logic
Science Quotes - Dictionary of Science Quotations and Scientist Quotes

… just as the astronomer, the physicist, the geologist, or other student of objective science looks about in the world of sense, so, not metaphorically speaking but literally, the mind of the mathematician goes forth in the universe of logic in quest of the things that are there; exploring the heights and depths for facts—ideas, classes, relationships, implications, and the rest; observing the minute and elusive with the powerful microscope of his Infinitesimal Analysis; observing the elusive and vast with the limitless telescope of his Calculus of the Infinite; making guesses regarding the order and internal harmony of the data observed and collocated; testing the hypotheses, not merely by the complete induction peculiar to mathematics, but, like his colleagues of the outer world, resorting also to experimental tests and incomplete induction; frequently finding it necessary, in view of unforeseen disclosures, to abandon one hopeful hypothesis or to transform it by retrenchment or by enlargement:—thus, in his own domain, matching, point for point, the processes, methods and experience familiar to the devotee of natural science.
In Lectures on Science, Philosophy and Art (1908), 26.

“Contrariwise”, continued Tweedledee, “if it was so, it might be, and if it were so, it would be; but as it isn’t, it ain’t. That’s logic!”
In Through the Looking Glass: And What Alice Found There (Dec 1871, 1897), 74.

“Logic” proved that airplanes can’t fly and that H-bombs won’t work and that stones don’t fall out of the sky. Logic is a way of saying that anything which didn’t happen yesterday won’t happen tomorrow.
In Glory Road (1963, 1981), 54.

[Aristotle’s formal logic thus far (1787)] has not been able to advance a single step, and hence is to all appearances closed and completed.
In Preface to second edition (1787) of Critique Of Pure Reason (1781) as translated by Werner Pluhar (1996), 15. An earlier translation by N. Kemp-Smith (1933) is similar, but ends with “appearance a closed and completed body of doctrine.” [D]iscovery should come as an adventure rather than as the result of a logical process of thought. Sharp, prolonged thinking is necessary that we may keep on the chosen road but it does not itself necessarily lead to discovery. The investigator must be ready and on the spot when the light comes from whatever direction. Letter to Dr. E. B. Krumhaar (11 Oct 1933), in Journal of Bacteriology (Jan 1934), 27, No. 1, 19. [Kepler] had to realize clearly that logical-mathematical theoretizing, no matter how lucid, could not guarantee truth by itself; that the most beautiful logical theory means nothing in natural science without comparison with the exactest experience. Without this philosophic attitude, his work would not have been possible. From Introduction that Einstein wrote for Carola Baumgardt and Jamie Callan, Johannes Kepler Life and Letters (1953), 13. [Modern science] passed through a long period of uncertainty and inconclusive experiment, but as the instrumental aids to research improved, and the results of observation accumulated, phantoms of the imagination were exorcised, idols of the cave were shattered, trustworthy materials were obtained for logical treatment, and hypotheses by long and careful trial were converted into theories. In The Present Relations of Science and Religion (1913, 2004), 3 [The body of law] has taxed the deliberative spirit of ages. The great minds of the earth have done it homage. It was the fruit of experience. Under it men prospered, all the arts flourished, and society stood firm. 
Every right and duty could be understood because the rules regulating each had their foundation in reason, in the nature and fitness of things; were adapted to the wants of our race, were addressed to the mind and to the heart; were like so many scraps of logic articulate with demonstration. Legislation, it is true occasionally lent its aid, but not in the pride of opinion, not by devising schemes inexpedient and untried, but in a deferential spirit, as a subordinate co-worker. From biographical preface by T. Bigelow to Austin Abbott (ed.), Official Report of the Trial of Henry Ward Beecher (1875), Vol. 1, xii. Dilbert: Evolution must be true because it is a logical conclusion of the scientific method. Dogbert: But science is based on the irrational belief that because we cannot perceive reality all at once, things called “time” and “cause and effect” exist. Dilbert: That’s what I was taught and that’s what I believe. Dogbert: Sounds cultish. Dilbert comic strip (8 Feb 1992). Frustra fit per plura, quod fieri potest per pauciora. It is vain to do with more what can be done with less. Ockham’s Razor.Summa logicae (The Sum of All Logic)(prior to 1324), Part I, Chap. 12. [The village of Ockham is in Surrey. The saying (which was applied for diminishing the number of religious truths that can be proved by reason) is not Ockham's own. As given in Joseph Rickaby, Scholasticism (1908), 54, footnote, it is found a generation before Ockham in Petrus Aureolus, The Eloquent Doctor, 2 Sent. dist. 12, q.1.] I believe in logic, the sequence of cause and effect, and in science its only begotten son our law, which was conceived by the ancient Greeks, thrived under Isaac Newton, suffered under Albert That fragment of a 'creed for materialism' which a friend in college had once shown him rose through Donald's confused mind. 
Stand on Zanzibar (1969). [Attributed; authorship undocumented.]

Mathematical demonstrations are a logic of as much or more use, than that commonly learned at schools, serving to a just formation of the mind, enlarging its capacity, and strengthening it so as to render the same capable of exact reasoning, and discerning truth from falsehood in all occurrences, even in subjects not mathematical. For which reason it is said, the Egyptians, Persians, and Lacedaemonians seldom elected any new kings, but such as had some knowledge in the mathematics, imagining those, who had not, men of imperfect judgments, and unfit to rule and govern.
From an article which appeared as 'The Usefulness of Mathematics', Pennsylvania Gazette (30 Oct 1735), No. 360. Collected, despite being without clear evidence of Franklin's authorship, in The Works of Benjamin Franklin (1809), Vol. 4, 377. Evidence of actual authorship by Ben Franklin for the newspaper article has not been ascertained, and scholars doubt it. See Franklin documents at the website founders.archives.gov. The quote is included here to attach this caution.

A “critic” is a man who creates nothing and thereby feels qualified to judge the work of creative men. There is logic in this; he is unbiased—he hates all creative people equally.
In Time Enough for Love: The Lives of Lazarus Long (1973), 365.

A leg of mutton is better than nothing, Nothing is better than Heaven, Therefore a leg of mutton is better than Heaven.
Aphorism 21 in Notebook C (1772-1773), as translated by R.J. Hollingdale in Aphorisms (1990). Reprinted as The Waste Books (2000), 35.

A man with a conviction is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.
First sentences in When Prophecy Fails (1956), 3.
A mathematical science is any body of propositions which is capable of an abstract formulation and arrangement in such a way that every proposition of the set after a certain one is a formal logical consequence of some or all the preceding propositions. Mathematics consists of all such mathematical sciences. In Lectures on Fundamental Concepts of Algebra and Geometry (1911), 222. A principle of induction would be a statement with the help of which we could put inductive inferences into a logically acceptable form. In the eyes of the upholders of inductive logic, a principle of induction is of supreme importance for scientific method: “... this principle”, says Reichenbach, “determines the truth of scientific theories. To eliminate it from science would mean nothing less than to deprive science of the power to decide the truth or falsity of its theories. Without it, clearly, science would no longer have the right to distinguish its theories from the fanciful and arbitrary creations of the poet’s mind.” Now this principle of induction cannot be a purely logical truth like a tautology or an analytic statement. Indeed, if there were such a thing as a purely logical principle of induction, there would be no problem of induction; for in this case, all inductive inferences would have to be regarded as purely logical or tautological transformations, just like inferences in inductive logic. Thus the principle of induction must be a synthetic statement; that is, a statement whose negation is not self-contradictory but logically possible. So the question arises why such a principle should be accepted at all, and how we can justify its acceptance on rational grounds. A professor … may be to produce a perfect mathematical work of art, having every axiom stated, every conclusion drawn with flawless logic, the whole syllabus covered. This sounds excellent, but in practice the result is often that the class does not have the faintest idea of what is going on. 
… The framework is lacking; students do not know where the subject fits in, and this has a paralyzing effect on the mind. In A Concrete Approach to Abstract Algebra (1959), 1-2. A scientist works largely by intuition. Given enough experience, a scientist examining a problem can leap to an intuition as to what the solution ‘should look like.’ ... Science is ultimately based on insight, not logic. Against logic there is no armor like ignorance. Editorial comment Peter added under a quotation in his Peter's Quotations: Ideas for Our Times (1993), 308. Among all the liberal arts, the first is logic, and specifically that part of logic which gives initial instruction about words. … [T]he word “logic” has a broad meaning, and is not restricted exclusively to the science of argumentative reasoning. [It includes] Grammar [which] is “the science of speaking and writing correctly—the starting point of all liberal studies.” In John of Salisbury and Daniel D. McGarry (trans.), 'Whence grammar gets its name', The Metalogicon (2009), 37. It is footnoted: Isidore, Etym., i, 5, §1. Anyone who has had actual contact with the making of the inventions that built the radio art knows that these inventions have been the product of experiment and work based on physical reasoning, rather than on the mathematicians' calculations and formulae. Precisely the opposite impression is obtained from many of our present day text books and publications. Aristotle... a mere bond-servant to his logic, thereby rendering it contentious and well nigh useless. 
Rerum Novarum (1605) As an individual opinion of mine, perhaps not as yet shared by many, I may be permitted to state, by the way, that I consider pure Mathematics to be only one branch of general Logic, the branch originating from the creation of Number, to the economical virtues of which is due the enormous development that particular branch has been favored with in comparison with the other branches of Logic that until of late almost remained stationary. In Lecture (10 Aug 1898) present in German to the First International Congress of Mathematicians in Zürich, 'On Pasigraphy: Its Present State and the Pasigraphic Movement in Italy'. As translated and published in The Monist (1899), 9, No. 1, 46. As in political revolutions, so in paradigm choice—there is no standard higher than the assent of the relevant community... this issue of paradigm choice can never be unequivocally settled by logic and experiment alone. The Structure of Scientific Revolutions (1962), 93. Both the physicist and the mystic want to communicate their knowledge, and when they do so with words their statements are paradoxical and full of logical contradictions. In The Tao of Physics (1975), 46. But it is precisely mathematics, and the pure science generally, from which the general educated public and independent students have been debarred, and into which they have only rarely attained more than a very meagre insight. The reason of this is twofold. In the first place, the ascendant and consecutive character of mathematical knowledge renders its results absolutely insusceptible of presentation to persons who are unacquainted with what has gone before, and so necessitates on the part of its devotees a thorough and patient exploration of the field from the very beginning, as distinguished from those sciences which may, so to speak, be begun at the end, and which are consequently cultivated with the greatest zeal. 
The second reason is that, partly through the exigencies of academic instruction, but mainly through the martinet traditions of antiquity and the influence of mediaeval logic-mongers, the great bulk of the elementary text-books of mathematics have unconsciously assumed a very repellant form,—something similar to what is termed in the theory of protective mimicry in biology “the terrifying form.” And it is mainly to this formidableness and touch-me-not character of exterior, concealing withal a harmless body, that the undue neglect of typical mathematical studies is to be attributed. In Editor’s Preface to Augustus De Morgan and Thomas J. McCormack (ed.), Elementary Illustrations of the Differential and Integral Calculus (1899), v. But nature is remarkably obstinate against purely logical operations; she likes not schoolmasters nor scholastic procedures. As though she took a particular satisfaction in mocking at our intelligence, she very often shows us the phantom of an apparently general law, represented by scattered fragments, which are entirely inconsistent. Logic asks for the union of these fragments; the resolute dogmatist, therefore, does not hesitate to go straight on to supply, by logical conclusions, the fragments he wants, and to flatter himself that he has mastered nature by his victorious logic. 'On the Principles of Animal Morphology', Proceedings of the Royal Society of Edinburgh (2 Apr 1888), 15, 289. Original as Letter to Mr John Murray, communicated to the Society by Professor Sir William Turner. Page given as in collected volume published 1889. But, indeed, the science of logic and the whole framework of philosophical thought men have kept since the days of Plato and Aristotle, has no more essential permanence as a final expression of the human mind, than the Scottish Longer Catechism. A Modern Utopia (1904, 2006), 14. Catastrophe Theory is—quite likely—the first coherent attempt (since Aristotelian logic) to give a theory on analogy.
When narrow-minded scientists object to Catastrophe Theory that it gives no more than analogies, or metaphors, they do not realise that they are stating the proper aim of Catastrophe Theory, which is to classify all possible types of analogous situations. From 'La Théorie des catastrophes État présent et perspective', as quoted in Erick Christopher Zeeman, (ed.), Catastrophe Theory: Selected Papers, 1972-1977 (1977), 637, as cited in Martin Krampe (ed.), Classics of Semiotics (1987), 214. Certain students of genetics inferred that the Mendelian units responsible for the selected character were genes producing only a single effect. This was careless logic. It took a good deal of hammering to get rid of this erroneous idea. As facts accumulated it became evident that each gene produces not a single effect, but in some cases a multitude of effects on the characters of the individual. It is true that in most genetic work only one of these character-effects is selected for study—the one that is most sharply defined and separable from its contrasted character—but in most cases minor differences also are recognizable that are just as much the product of the same gene as is the major effect. 'The Relation of Genetics to Physiology and Medicine', Nobel Lecture (4 Jun 1934). In Nobel Lectures, Physiology or Medicine 1922-1941 (1965), 317. Common sense is science exactly in so far as it fulfills the ideal of common sense; that is, sees facts as they are, or at any rate, without the distortion of prejudice, and reasons from them in accordance with the dictates of sound judgment. And science is simply common sense at its best, that is, rigidly accurate in observation, and merciless to fallacy in logic. The Crayfish: an Introduction to the Study of Zoölogy (1880), 2. Excerpted in Popular Science (Apr 1880), 16, 789. Computers are composed of nothing more than logic gates stretched out to the horizon in a vast numerical irrigation system. 
In State of the Art: A Photographic History of the Integrated Circuit (1983), vii. Confined to its true domain, mathematical reasoning is admirably adapted to perform the universal office of sound logic: to induce in order to deduce, in order to construct. … It contents itself to furnish, in the most favorable domain, a model of clearness, of precision, and consistency, the close contemplation of which is alone able to prepare the mind to render other conceptions also as perfect as their nature permits. Its general reaction, more negative than positive, must consist, above all, in inspiring us everywhere with an invincible aversion for vagueness, inconsistency, and obscurity, which may always be really avoided in any reasoning whatsoever, if we make sufficient effort. In Synthèse Subjective (1856), 98. As translated in Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath’s Quotation-Book (1914), 202-203. From the original French, “Bornée à son vrai domaine, la raison mathématique y peut admirablement remplir l’office universel de la saine logique: induire pour déduire, afin de construire. … Elle se contente de former, dans le domaine le plus favorable, un type de clarté, de précision, et de consistance, dont la contemplation familière peut seule disposer l’esprit à rendre les autres conceptions aussi parfaites que le comporte leur nature. Sa réaction générale, plus négative que positive, doit surtout consister à nous inspirer partout une invincible répugnance pour le vague, l’incohérence, et l’obscurité, que nous pouvons réellement éviter envers des pensées quelconques, si nous y faisons assez d’efforts.” Definition of Mathematics.—It has now become apparent that the traditional field of mathematics in the province of discrete and continuous number can only be separated from the general abstract theory of classes and relations by a wavering and indeterminate line. 
Of course a discussion as to the mere application of a word easily degenerates into the most fruitless logomachy. It is open to any one to use any word in any sense. But on the assumption that “mathematics” is to denote a science well marked out by its subject matter and its methods from other topics of thought, and that at least it is to include all topics habitually assigned to it, there is now no option but to employ “mathematics” in the general sense of the “science concerned with the logical deduction of consequences from the general premisses of all reasoning.” In article 'Mathematics', Encyclopedia Britannica (1911, 11th ed.), Vol. 17, 880. In the 2006 DVD edition of the encyclopedia, the definition of mathematics is given as “The science of structure, order, and relation that has evolved from elemental practices of counting, measuring, and describing the shapes of objects.” [Premiss is a variant form of “premise”. —Webmaster] Descartes' immortal conclusion cogito ergo sum was recently subjected to destruction testing by a group of graduate researchers at Princeton led by Professors Montjuic and Lauterbrunnen, and now reads, in the Shorter Harvard Orthodoxy: (a) I think, therefore I am; or (b) Perhaps I thought, therefore I was; but (c) These days, I tend to leave that side of things to my wife. Ye Gods! (1992), 223. Development of Western science is based on two great achievements: the invention of the formal logical system (in Euclidean geometry) by the Greek philosophers, and the discovery of the possibility to find out causal relationships by systematic experiment (during the Renaissance). In my opinion, one has not to be astonished that the Chinese sages have not made these steps. The astonishing thing is that these discoveries were made at all. Letter to J. S. Switzer, 23 Apr 1953, Einstein Archive 61-381. Quoted in Alice Calaprice, The Quotable Einstein (1996), 180. 
Every new theory as it arises believes in the flush of youth that it has the long sought goal; it sees no limits to its applicability, and believes that at last it is the fortunate theory to achieve the 'right' answer. This was true of electron theory—perhaps some readers will remember a book called The Electrical Theory of the Universe by de Tunzelman. It is true of general relativity theory with its belief that we can formulate a mathematical scheme that will extrapolate to all past and future time and the unfathomed depths of space. It has been true of wave mechanics, with its first enthusiastic claim a brief ten years ago that no problem had successfully resisted its attack provided the attack was properly made, and now the disillusionment of age when confronted by the problems of the proton and the neutron. When will we learn that logic, mathematics, physical theory, are all only inventions for formulating in compact and manageable form what we already know, and like all inventions do not achieve complete success in accomplishing what they were designed to do, much less complete success in fields beyond the scope of the original design, and that our only justification for hoping to penetrate at all into the unknown with these inventions is our past experience that sometimes we have been fortunate enough to be able to push on a short distance by acquired momentum. The Nature of Physical Theory (1936), 136. Every science that has thriven has thriven upon its own symbols: logic, the only science which is admitted to have made no improvements in century after century, is the only one which has grown no symbols. Transactions Cambridge Philosophical Society, Vol. X (1864), 184. Every work of science great enough to be well remembered for a few generations affords some exemplification of the defective state of the art of reasoning of the time when it was written; and each chief step in science has been a lesson in logic. 'The Fixation of Belief' (1877).
In Justus Buchler, The Philosophy of Peirce (1940), 6. Everything is controlled by immutable mathematical laws, from which there is, and can be, no deviation whatsoever. We learn the complex from the simple. We arrive at the abstract by way of the concrete. In The Science of Poetry and the Philosophy of Language (1910), xi. Experience, the only logic sure to convince a diseased imagination and restore it to rugged health. Written in 1892. In The American Claimant (1896), 203. In Mark Twain and Brian Collins (ed.), When in Doubt, Tell the Truth: and Other Quotations from Mark Twain (1996), 48. Fiction is, indeed, an indispensable supplement to logic, or even a part of it; whether we are working inductively or deductively, both ways hang closely together with fiction: and axioms, though they seek to be primary verities, are more akin to fiction. If we had realized the nature of axioms, the doctrine of Einstein, which sweeps away axioms so familiar to us that they seem obvious truths, and substitutes others which seem absurd because they are unfamiliar, might not have been so bewildering. In The Dance of Life (1923), 86. First, as concerns the success of teaching mathematics. No instruction in the high schools is as difficult as that of mathematics, since the large majority of students are at first decidedly disinclined to be harnessed into the rigid framework of logical conclusions. The interest of young people is won much more easily, if sense-objects are made the starting point and the transition to abstract formulation is brought about gradually. For this reason it is psychologically quite correct to follow this course. Not less to be recommended is this course if we inquire into the essential purpose of mathematical instruction. Formerly it was too exclusively held that this purpose is to sharpen the understanding. Surely another important end is to implant in the student the conviction that correct thinking based on true premises secures mastery over the outer world.
To accomplish this the outer world must receive its share of attention from the very beginning. Doubtless this is true but there is a danger which needs pointing out. It is as in the case of language teaching where the modern tendency is to secure in addition to grammar also an understanding of the authors. The danger lies in grammar being completely set aside leaving the subject without its indispensable solid basis. Just so in Teaching of Mathematics it is possible to accumulate interesting applications to such an extent as to stunt the essential logical development. This should in no wise be permitted, for thus the kernel of the whole matter is lost. Therefore: We do want throughout a quickening of mathematical instruction by the introduction of applications, but we do not want that the pendulum, which in former decades may have inclined too much toward the abstract side, should now swing to the other extreme; we would rather pursue the proper middle course. In Ueber den Mathematischen Unterricht an den hoheren Schulen; Jahresbericht der Deutschen Mathematiker Vereinigung, Bd. 11, 131. For, in mathematics or symbolic logic, reason can crank out the answer from the symboled equations—even a calculating machine can often do so—but it cannot alone set up the equations. Imagination resides in the words which define and connect the symbols—subtract them from the most aridly rigorous mathematical treatise and all meaning vanishes. Was it Eddington who said that we once thought if we understood 1 we understood 2, for 1 and 1 are 2, but we have since found we must learn a good deal more about “and”? In 'The Biological Basis of Imagination', American Thought: 1947 (1947), 81. Formal thought, consciously recognized as such, is the means of all exact knowledge; and a correct understanding of the main formal sciences, Logic and Mathematics, is the proper and only safe foundation for a scientific education. In Number and its Algebra (1896), 134. Frege has the merit of ... 
finding a third assertion by recognising the world of logic which is neither mental nor physical. Our Knowledge of the External World (1914), 201. From a drop of water a logician could predict an Atlantic or a Niagara without having seen or heard of one or the other. So all life is a great chain, the nature of which is known whenever we are shown a single link of it. In A Study in Scarlet (1887, 1892), 27. Gates is the ultimate programming machine. He believes everything can be defined, examined, reduced to essentials, and rearranged into a logical sequence that will achieve a particular goal. Given any domain of thought in which the fundamental objective is a knowledge that transcends mere induction or mere empiricism, it seems quite inevitable that its processes should be made to conform closely to the pattern of a system free of ambiguous terms, symbols, operations, deductions; a system whose implications and assumptions are unique and consistent; a system whose logic confounds not the necessary with the sufficient where these are distinct; a system whose materials are abstract elements interpretable as reality or unreality in any forms whatsoever provided only that these forms mirror a thought that is pure. To such a system is universally given the name MATHEMATICS. In 'Mathematics', National Mathematics Magazine (Nov 1937), 12, No. 2, 62. Gradually, at various points in our childhoods, we discover different forms of conviction. There’s the rock-hard certainty of personal experience (“I put my finger in the fire and it hurt,”), which is probably the earliest kind we learn. Then there’s the logically convincing, which we probably come to first through maths, in the context of Pythagoras’s theorem or something similar, and which, if we first encounter it at exactly the right moment, bursts on our minds like sunrise with the whole universe playing a great chord of C Major. In short essay, 'Dawkins, Fairy Tales, and Evidence', 2. 
Heavy dependence on direct observation is essential to biology not only because of the complexity of biological phenomena, but because of the intervention of natural selection with its criterion of adequacy rather than perfection. In a system shaped by natural selection it is inevitable that logic will lose its way. In 'Scientific innovation and creativity: a zoologist’s point of view', American Zoologist (1982), 22, 229. Here I most violently want you to Avoid one fearful error, a vicious flaw. Don’t think that our bright eyes were made that we Might look ahead; that hips and knees and ankles So intricately bend that we might take Big strides, and the arms are strapped to the sturdy shoulders And hands are given for servants to each side That we might use them to support our lives. All other explanations of this sort Are twisted, topsy-turvy logic, for Nothing that is born produces its own use. Sight was not born before the light of the eyes, Nor were words and pleas created before the tongue Rather the tongue's appearance long preceded Speech, and the ears were formed far earlier than The sound first heard. To sum up, all the members Existed, I should think, before their use, So use has not caused them to have grown. On the Nature of Things, trans. Anthony M. Esolen (1995), Book 4, lines 820-8, 145. Histories make men wise; poets, witty; the mathematics, subtle; natural philosophy, deep; moral, grave; logic and rhetoric, able to contend. 'L. Of Studies,' Essays (1597). In Francis Bacon and Basil Montagu, The Works of Francis Bacon, Lord Chancellor of England (1852), 55. Humans are not by nature the fact-driven, rational beings we like to think we are. We get the facts wrong more often than we think we do. And we do so in predictable ways: we engage in wishful thinking. We embrace information that supports our beliefs and reject evidence that challenges them.
Our minds tend to take shortcuts, which require some effort to avoid … [and] more often than most of us would imagine, the human mind operates in ways that defy logic. As co-author with Kathleen Hall Jamieson, in unSpun: Finding Facts in a World of Disinformation (2007), 69. I am opposed to looking upon logic as a kind of game. … One might think that it is a matter of choice or convention which logic one adopts. I disagree with this view. Objective Knowledge: an Evolutionary Approach (1972), 304. I approached the bulk of my schoolwork as a chore rather than an intellectual adventure. The tedium was relieved by a few courses that seem to be qualitatively different. Geometry was the first exciting course I remember. Instead of memorizing facts, we were asked to think in clear, logical steps. Beginning from a few intuitive postulates, far reaching consequences could be derived, and I took immediately to the sport of proving theorems. Autobiography in Gösta Ekspong (ed.), Nobel Lectures: Physics 1996-2000 (2002), 115. I believe myself to possess a most singular combination of qualities exactly fitted to make me pre-eminently a discoverer of the hidden realities of nature… the belief has been forced upon me… Firstly: Owing to some peculiarity in my nervous system, I have perceptions of some things, which no one else has… and intuitive perception of… things hidden from eyes, ears, & ordinary senses… Secondly: my immense reasoning faculties; Thirdly: my concentration faculty, by which I mean the power not only of throwing my whole energy & existence into whatever I choose, but also of bringing to bear on any one subject or idea, a vast apparatus from all sorts of apparently irrelevant & extraneous sources… Well, here I have written what most people would call a remarkably mad letter; & yet certainly one of the most logical, sober-minded, cool, pieces of composition, (I believe), that I ever framed.
Lovelace Papers, Bodleian Library, Oxford University, 42, folio 12 (6 Feb 1841). As quoted and cited in Dorothy Stein (ed.), 'This First Child of Mine', Ada: A Life and a Legacy (1985), 86. I believed that, instead of the multiplicity of rules that comprise logic, I would have enough in the following four, as long as I made a firm and steadfast resolution never to fail to observe them. The first was never to accept anything as true if I did not know clearly that it was so; that is, carefully to avoid prejudice and jumping to conclusions, and to include nothing in my judgments apart from whatever appeared so clearly and distinctly to my mind that I had no opportunity to cast doubt upon it. The second was to subdivide each of the problems I was about to examine into as many parts as would be possible and necessary to resolve them better. The third was to guide my thoughts in an orderly way by beginning with the simplest objects, those easiest to know, and ascending gradually, as if by steps, to knowledge of the most complex, and even by assuming an order among objects in cases where there is no natural order among them. And the final rule was: in all cases, to make such comprehensive enumerations and such general review that I was certain not to omit anything. The long chains of inferences, all of them simple and easy, that geometers normally use to construct their most difficult demonstrations had given me an opportunity to think that all the things that can fall within the scope of human knowledge follow from each other in a similar way, and as long as one avoids accepting something as true which is not so, and as long as one always observes the order required to deduce them from each other, there cannot be anything so remote that it cannot be reached nor anything so hidden that it cannot be uncovered. Discourse on Method in Discourse on Method and Related Writings (1637), trans. Desmond M. Clarke, Penguin edition (1999), Part 2, 16.
I don’t see the logic of rejecting data just because they seem incredible. In Astronomy Transformed by D. O. Edge and M. J. Mulkay (1976). I end with a word on the new symbols which I have employed. Most writers on logic strongly object to all symbols. ... I should advise the reader not to make up his mind on this point until he has well weighed two facts which nobody disputes, both separately and in connexion. First, logic is the only science which has made no progress since the revival of letters; secondly, logic is the only science which has produced no growth of symbols. I have come to the conclusion that the exertion, without which a knowledge of mathematics cannot be acquired, is not materially increased by logical rigor in the method of instruction. In Jahresbericht der Deutschen Mathematiker Vereinigung (1898), 143. I have just received copies of “To-day” containing criticisms of my letter. I am in no way surprised to find that these criticisms are not only unfair, but misleading in the extreme. They are misleading in so far that anyone reading them would be led to believe the exact opposite of the truth. It is quite possible that I, an old and trained engineer and chronic experimenter, should put an undue value upon truth; but it is common to all scientific men. As nothing but the truth is of any value to them, they naturally dislike things that are not true. ... While my training has, perhaps, warped my mind so that I put an undue value upon truth, their training has been such as to cause them to abhor exact truth and logic. [Replying to criticism by Colonel Acklom and other religious parties attacking Maxim's earlier contribution to the controversy about the modern position of Christianity.] In G.K. Chesterton, 'The Maxims of Maxim', Daily News (25 Feb 1905). Collected in G. K. Chesterton and Dale Ahlquist (ed.), In Defense of Sanity: The Best Essays of G.K. Chesterton (2011), 86. I have said that science is impossible without faith.
… Inductive logic, the logic of Bacon, is rather something on which we can act than something which we can prove, and to act on it is a supreme assertion of faith … Science is a way of life which can only flourish when men are free to have faith. In Calyampudi Radhakrishna Rao, Statistics and Truth (1997), 31. I never guess. It is a shocking habit—destructive to the logical faculty. Spoken by fictitious character Sherlock Holmes in The Sign of Four (1890), 17. I once knew an otherwise excellent teacher who compelled his students to perform all their demonstrations with incorrect figures, on the theory that it was the logical connection of the concepts, not the figure, that was essential. In Ernst Mach and Thomas Joseph McCormack, Space and Geometry (1906), 93. I presume that few who have paid any attention to the history of the Mathematical Analysis, will doubt that it has been developed in a certain order, or that that order has been, to a great extent, necessary—being determined, either by steps of logical deduction, or by the successive introduction of new ideas and conceptions, when the time for their evolution had arrived. And these are the causes that operate in perfect harmony. Each new scientific conception gives occasion to new applications of deductive reasoning; but those applications may be only possible through the methods and the processes which belong to an earlier stage. Explaining his choice for the exposition in historical order of the topics in A Treatise on Differential Equations (1859), Preface, v-vi. I think it would be desirable that this form of word [mathematics] should be reserved for the applications of the science, and that we should use mathematic in the singular to denote the science itself, in the same way as we speak of logic, rhetoric, or (own sister to algebra) music. In Presidential Address to the British Association, Exeter British Association Report (1869); Collected Mathematical Papers, Vol. 2, 669.
I took biology in high school and didn't like it at all. It was focused on memorization. ... I didn't appreciate that biology also had principles and logic ... [rather than dealing with a] messy thing called life. It just wasn't organized, and I wanted to stick with the nice pristine sciences of chemistry and physics, where everything made sense. I wish I had learned sooner that biology could be fun as well. Interview (23 May 1998), 'Creating the Code to Life', Academy of Achievement web site. I transferred to … UCLA, … and I took several courses there. One was an acting class…; another was a course in television writing, which seemed practical. I also continued my studies in philosophy. I had done pretty well in symbolic logic at Long Beach, so I signed up for Advanced Symbolic Logic at my new school. Saying that I was studying Advanced Symbolic Logic at UCLA had a nice ring; what had been nerdy in high school now had mystique. However, I went to class the first day and discovered that UCLA used a different set of symbols from those I had learned at Long Beach. To catch up, I added a class in Logic 101, which meant I was studying beginning logic and advanced logic at the same time. I was overwhelmed, and shocked to find that I couldn’t keep up. I had reached my math limit as well as my philosophy limit. I abruptly changed my major to theater and, free from the workload of my logic classes…. I realized that I was now investing in no other future but show business. In Born Standing Up: A Comic’s Life (2007), 103. I wanted to preserve the spontaneity of thought in speech… [and to] guard the spontaneity of the argument. A spoken argument is informal and heuristic; it singles out the heart of the matter and shows in what way it is crucial and new; and it gives the direction and line of the solution so that, simplified as it is, still the logic is right. For me, this philosophic form of argument is the foundation of science, and nothing should be allowed to obscure it. 
On his philosophy in presenting the TV series, from which the book followed. In 'Foreword', The Ascent of Man (1973), 14-15. I was pretty good in science. But again, because of the small budget, in science class we couldn’t do experiments in order to prove theories. We just believed everything. Actually I think that class was called Religion. Religion was always an easy class. All you had to do was suspend the logic and reasoning you were taught in all the other classes. In autobiography, Brain Droppings (1998), 227. I’m supposed to be a scientific person but I use intuition more than logic in making basic decisions. In transcript of a video history interview with Seymour Cray by David K. Allison at the National Museum of American History, Smithsonian Institution, (9 May 1995), 30. If an idea presents itself to us, we must not reject it simply because it does not agree with the logical deductions of a reigning theory. If everything in chemistry is explained in a satisfactory manner without the help of phlogiston, it is by that reason alone infinitely probable that the principle does not exist; that it is a hypothetical body, a gratuitous supposition; indeed, it is in the principles of good logic, not to multiply bodies without necessity. 'Reflexions sur le phlogistique', Mémoires de l'Académie des Sciences, 1783, 505-38. Reprinted in Oeuvres de Lavoisier (1864), Vol. 2, 623, trans. M. P. Crosland. If human thought is a growth, like all other growths, its logic is without foundation of its own, and is only the adjusting constructiveness of all other growing things. A tree cannot find out, as it were, how to blossom, until comes blossom-time. A social growth cannot find out the use of steam engines, until comes steam-engine-time. Lo! (1931, 1941), 20. If I go out into nature, into the unknown, to the fringes of knowledge, everything seems mixed up and contradictory, illogical, and incoherent.
This is what research does; it smooths out contradictions and makes things simple, logical, and coherent. In 'Dionysians and Apollonians', Science (2 Jun 1972), 176, 966. Reprinted in Mary Ritchie Key, The Relationship of Verbal and Nonverbal Communication (1980), 318. If logical training is to consist, not in repeating barbarous scholastic formulas or mechanically tacking together empty majors and minors, but in acquiring dexterity in the use of trustworthy methods of advancing from the known to the unknown, then mathematical investigation must ever remain one of its most indispensable instruments. Once inured to the habit of accurately imagining abstract relations, recognizing the true value of symbolic conceptions, and familiarized with a fixed standard of proof, the mind is equipped for the consideration of quite other objects than lines and angles. The twin treatises of Adam Smith on social science, wherein, by deducing all human phenomena first from the unchecked action of selfishness and then from the unchecked action of sympathy, he arrives at mutually-limiting conclusions of transcendent practical importance, furnish for all time a brilliant illustration of the value of mathematical methods and mathematical discipline. In 'University Reform', Darwinism and Other Essays (1893), 297-298. If materialism is true, it seems to me that we cannot know that it is true. If my opinions are the result of the chemical processes going on in my brain, they are determined by the laws of chemistry, not those of logic. The Inequality of Man (1932), 162. If scientific reasoning were limited to the logical processes of arithmetic, we should not get very far in our understanding of the physical world. One might as well attempt to grasp the game of poker entirely by the use of the mathematics of probability. Endless Horizons (1946), 27. 
If texts are unified by a central logic of argument, then their pictorial illustrations are integral to the ensemble, not pretty little trifles included only for aesthetic or commercial value. Primates are visual animals, and (particularly in science) illustration has a language and set of conventions all its own. If you plan it out, and it seems logical to you, then you can do it. I discovered the power of a plan. Quoted in biography on website of the National Geographic Channel, Australia. In a training period I continue to believe that the best start is with the experimentally prepared situation. Principally because it is in this that it is easiest to illustrate controlled variability, but there is no compelling reason why all experiments should be shaped to the conventional forms of the psychophysical methods. In any case the psychologist must refuse to be limited by those formalised statements of scientific experiment, which grew up with the logical methodologists of the mid-19th century. There are no psychological experiments in which the conditions are all under control; in which one condition can be varied independently of the rest, or even in which the concomitant variation of two specified conditions alone can be arranged and considered. From archive recording (3 Jun 1959) with John C. Kenna, giving his recollection of his farewell speech to Cambridge Psychological Society (4 Mar 1952), in which he gave a summary of points he considered to be basic requirements for a good experimental psychologist. Part of point 3 of 7, from transcription of recording held at British Psychological Society History of Psychology Centre, London, as abridged on thepsychologist.bps.org.uk website. In every enterprise … the mind is always reasoning, and, even when we seem to act without a motive, an instinctive logic still directs the mind.
Only we are not aware of it, because we begin by reasoning before we know or say that we are reasoning, just as we begin by speaking before we observe that we are speaking, and just as we begin by seeing and hearing before we know what we see or what we hear. From An Introduction to the Study of Experimental Medicine (1865), as translated by Henry Copley Greene (1957), 146. In formal logic a contradiction is the signal of a defeat, but in the evolution of real knowledge it marks the first step in progress toward a victory. This is one great reason for the utmost toleration of variety of opinion. Once and forever, this duty of toleration has been summed up in the words, “Let both grow together until the harvest.” In 'Religion and Science', The Atlantic (Aug 1925). In logic, A asserts and B denies. Assertions being proverbially untrue, the presumption would be in favor of B’s innocence were it not that denials are notoriously false. The Unabridged Devil’s Dictionary (2000), 5. In mathematics two ends are constantly kept in view: First, stimulation of the inventive faculty, exercise of judgment, development of logical reasoning, and the habit of concise statement; second, the association of the branches of pure mathematics with each other and with applied science, that the pupil may see clearly the true relations of principles and things. In 'Aim of the Mathematical Instruction', International Commission on Teaching of Mathematics, American Report: United States Bureau of Education: Bulletin 1912, No. 4, 7. In my own view, some advice about what should be known, about what technical education should be acquired, about the intense motivation needed to succeed, and about the carelessness and inclination toward bias that must be avoided is far more useful than all the rules and warnings of theoretical logic. From Reglas y Consejos sobre Investigacíon Cientifica: Los tónicos de la voluntad. (1897), as translated by Neely and Larry W. 
Swanson, in Advice for a Young Investigator (1999), 6. In pure mathematics we have a great structure of logically perfect deductions which constitutes an integral part of that great and enduring human heritage which is and should be largely independent of the perhaps temporary existence of any particular geographical location at any particular time. … The enduring value of mathematics, like that of the other sciences and arts, far transcends the daily flux of a changing world. In fact, the apparent stability of mathematics may well be one of the reasons for its attractiveness and for the respect accorded it. In Fundamentals of Mathematics (1941), 463. In the application of inductive logic to a given knowledge situation, the total evidence available must be used as a basis for determining the degree of confirmation. In Logical Foundations of Probability (1950, 1962), 211. In the Vienna of the late 1920s and 1930s there throve an internationally famous philosophical bunch called the logical positivists. … They said that a key ingredient of knowledge was “sense data,” and proclaimed emphatically, in the words of … J.S.L. Gilmour, that sense data are “objective and unalterable.” …Good guess, but no cigar! In 'A Trip Through the Perception Factory', Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century (2000), 64. Induction is the process of generalizing from our known and limited experience, and framing wider rules for the future than we have been able to test fully. At its simplest, then, an induction is a habit or an adaptation—the habit of expecting tomorrow’s weather to be like today’s, the adaptation to the unwritten conventions of community life. Induction. The mental operation by which from a number of individual instances, we arrive at a general law. The process, according to Hamilton, is only logically valid when all the instances included in the law are enumerated. 
This being seldom, if ever, possible, the conclusion of an Induction is usually liable to more or less uncertainty, and Induction is therefore incapable of giving us necessary (general) truths. Stated as narrative, not a direct quote, by his biographer W.H.S. Monck in 'Glossary of Philosophical Terms', appended in Sir William Hamilton (1881), 181. Injustice or oppression in the next street...or any spot inhabited by men was a personal affront to Thomas Addis and his name, from its early alphabetical place, was conspicuous on lists of sponsors of scores of organizations fighting for democracy and against fascism. He worked on more committees than could reasonably have been expected of so busy a man... Tom Addis was happy to have a hand in bringing to the organization of society some of the logic of science and to further that understanding and to promote that democracy which are the only enduring foundations of human dignity. Kevin V. Lemley and Linus Pauling, 'Thomas Addis: 1881-1949', Biographical Memoirs, National Academy of Sciences, 63, 27-29. Intelligence is an extremely subtle concept. It’s a kind of understanding that flourishes if it’s combined with a good memory, but exists anyway even in the absence of good memory. It’s the ability to draw consequences from causes, to make correct inferences, to foresee what might be the result, to work out logical problems, to be reasonable, rational, to have the ability to understand the solution from perhaps insufficient information. You know when a person is intelligent, but you can be easily fooled if you are not yourself intelligent. In Irv Broughton (ed.), The Writer's Mind: Interviews with American Authors (1990), Vol. 2, 57. 
It [mathematics] is in the inner world of pure thought, where all entia dwell, where is every type of order and manner of correlation and variety of relationship, it is in this infinite ensemble of eternal verities whence, if there be one cosmos or many of them, each derives its character and mode of being,—it is there that the spirit of mathesis has its home and its life. Is it a restricted home, a narrow life, static and cold and grey with logic, without artistic interest, devoid of emotion and mood and sentiment? That world, it is true, is not a world of solar light, not clad in the colours that liven and glorify the things of sense, but it is an illuminated world, and over it all and everywhere throughout are hues and tints transcending sense, painted there by radiant pencils of psychic light, the light in which it lies. It is a silent world, and, nevertheless, in respect to the highest principle of art—the interpenetration of content and form, the perfect fusion of mode and meaning—it even surpasses music. In a sense, it is a static world, but so, too, are the worlds of the sculptor and the architect. The figures, however, which reason constructs and the mathematic vision beholds, transcend the temple and the statue, alike in simplicity and in intricacy, in delicacy and in grace, in symmetry and in poise. Not only are this home and this life thus rich in aesthetic interests, really controlled and sustained by motives of a sublimed and supersensuous art, but the religious aspiration, too, finds there, especially in the beautiful doctrine of invariants, the most perfect symbols of what it seeks—the changeless in the midst of change, abiding things in a world of flux, configurations that remain the same despite the swirl and stress of countless hosts of curious transformations. In 'The Universe and Beyond', Hibbert Journal (1904-1906), 3, 314.
It always bothers me that according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space and no matter how tiny a region of time … I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed and the laws will turn out to be simple, like the chequer board with all its apparent complexities. But this speculation is of the same nature as those other people make—“I like it”, “I don't like it”—and it is not good to be too prejudiced about these things. In The Character of Physical Law (1965, 2001), 57. It has been asserted … that the power of observation is not developed by mathematical studies; while the truth is that, from the most elementary mathematical notion that arises in the mind of a child to the farthest verge to which mathematical investigation has been pushed and applied, this power is in constant exercise. By observation, as here used, can only be meant the fixing of the attention upon objects (physical or mental) so as to note distinctive peculiarities—to recognize resemblances, differences, and other relations. Now the first mental act of the child recognizing the distinction between one and more than one, between one and two, two and three, etc., is exactly this. So, again, the first geometrical notions are as pure an exercise of this power as can be given. To know a straight line, to distinguish it from a curve; to recognize a triangle and distinguish the several forms—what are these, and all perception of form, but a series of observations? Nor is it alone in securing these fundamental conceptions of number and form that observation plays so important a part. The very genius of the common geometry as a method of reasoning—a system of investigation—is, that it is but a series of observations.
The figure being before the eye in actual representation, or before the mind in conception, is so closely scrutinized, that all its distinctive features are perceived; auxiliary lines are drawn (the imagination leading in this), and a new series of inspections is made; and thus, by means of direct, simple observations, the investigation proceeds. So characteristic of common geometry is this method of investigation, that Comte, perhaps the ablest of all writers upon the philosophy of mathematics, is disposed to class geometry, as to its method, with the natural sciences, being based upon observation. Moreover, when we consider applied mathematics, we need only to notice that the exercise of this faculty is so essential, that the basis of all such reasoning, the very material with which we build, have received the name observations. Thus we might proceed to consider the whole range of the human faculties, and find for the most of them ample scope for exercise in mathematical studies. Certainly, the memory will not be found to be neglected. The very first steps in number—counting, the multiplication table, etc., make heavy demands on this power; while the higher branches require the memorizing of formulas which are simply appalling to the uninitiated. So the imagination, the creative faculty of the mind, has constant exercise in all original mathematical investigations, from the solution of the simplest problems to the discovery of the most recondite principle; for it is not by sure, consecutive steps, as many suppose, that we advance from the known to the unknown. The imagination, not the logical faculty, leads in this advance. In fact, practical observation is often in advance of logical exposition. Thus, in the discovery of truth, the imagination habitually presents hypotheses, and observation supplies facts, which it may require ages for the tardy reason to connect logically with the known. 
Of this truth, mathematics, as well as all other sciences, affords abundant illustrations. So remarkably true is this, that today it is seriously questioned by the majority of thinkers, whether the sublimest branch of mathematics,—the infinitesimal calculus—has anything more than an empirical foundation, mathematicians themselves not being agreed as to its logical basis. That the imagination, and not the logical faculty, leads in all original investigation, no one who has ever succeeded in producing an original demonstration of one of the simpler propositions of geometry, can have any doubt. Nor are induction, analogy, the scrutinization of premises or the search for them, or the balancing of probabilities, spheres of mental operations foreign to mathematics. No one, indeed, can claim preeminence for mathematical studies in all these departments of intellectual culture, but it may, perhaps, be claimed that scarcely any department of science affords discipline to so great a number of faculties, and that none presents so complete a gradation in the exercise of these faculties, from the first principles of the science to the farthest extent of its applications, as mathematics. In 'Mathematics', in Henry Kiddle and Alexander J. Schem, The Cyclopedia of Education (1877). As quoted and cited in Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath’s Quotation-book (1914), 27-29. It has come to pass, I know not how, that Mathematics and Logic, which ought to be but the handmaids of Physic, nevertheless presume on the strength of the certainty which they possess to exercise dominion over it. From De Augmentis Scientiarum as translated in Francis Guy Selby, The Advancement of Learning (1893), Vol. 2, 73. It hath been an old remark, that Geometry is an excellent Logic.
And it must be owned that when the definitions are clear; when the postulata cannot be refused, nor the axioms denied; when from the distinct contemplation and comparison of figures, their properties are derived, by a perpetual well-connected chain of consequences, the objects being still kept in view, and the attention ever fixed upon them; there is acquired a habit of reasoning, close and exact and methodical; which habit strengthens and sharpens the mind, and being transferred to other subjects is of general use in the inquiry after truth. In 'The Analyst', in The Works of George Berkeley (1898), Vol. 3, 10. It is by logic that we prove, but by intuition that we discover. In Science and Method (1908) translated by Francis Maitland (1914, 2007), 129. It is commonly considered that mathematics owes its certainty to its reliance on the immutable principles of formal logic. This … is only half the truth imperfectly expressed. The other half would be that the principles of formal logic owe such a degree of permanence as they have largely to the fact that they have been tempered by long and varied use by mathematicians. “A vicious circle!” you will perhaps say. I should rather describe it as an example of the process known by mathematicians as the method of successive approximation. In 'The Fundamental Conceptions And Methods Of Mathematics', Bulletin of the American Mathematical Society (3 Nov 1904), 11, No. 3, 120. It is evidently equally foolish to accept probable reasoning from a mathematician and to demand from a rhetorician demonstrative proofs. Nicomachean Ethics, 1094b, 25-7. In Jonathan Barnes (ed.), The Complete Works of Aristotle (1984), Vol. 2, 1730. It is necessary that a surgeon should have a temperate and moderate disposition. That he should have well-formed hands, long slender fingers, a strong body, not inclined to tremble and with all his members trained to the capable fulfilment of the wishes of his mind. 
He should be of deep intelligence and of a simple, humble, brave, but not audacious disposition. He should be well grounded in natural science, and should know not only medicine but every part of philosophy; should know logic well, so as to be able to understand what is written, to talk properly, and to support what he has to say by good reasons. Chirurgia Magna (1296, printed 1479), as translated by James Joseph Walsh in Old-Time Makers of Medicine (1911), 261. It is not logic that makes men reasonable, nor the science of ethics that makes men good. In Epigrams of Oscar Wilde (2007), 215. It is perplexing to see the flexibility of the so-called 'exact sciences' which by cast-iron laws of logic and by the infallible help of mathematics can lead to conclusions which are diametrically opposite to one another. In The Nature of Light: an Historical Survey (1970), 229. It is rigid dogma that destroys truth; and, please notice, my emphasis is not on the dogma, but on the rigidity. When men say of any question, “This is all there is to be known or said of the subject; investigation ends here,” that is death. It may be that the mischief comes not from the thinker but from the use made of his thinking by late-comers. Aristotle, for example, gave us our scientific technique … yet his logical propositions, his instruction in sound reasoning which was bequeathed to Europe, are valid only within the limited framework of formal logic, and, as used in Europe, they stultified the minds of whole generations of mediaeval Schoolmen. Aristotle invented science, but destroyed philosophy. Dialogues of Alfred North Whitehead, as recorded by Lucien Price (1954, 2001), 165. It is they who hold the secret of the mysterious property of the mind by which error ministers to truth, and truth slowly but irrevocably prevails.
Theirs is the logic of discovery, the demonstration of the advance of knowledge and the development of ideas, which as the earthly wants and passions of men remain almost unchanged, are the charter of progress, and the vital spark in history. Lecture, 'The Study of History' (11 Jun 1895) delivered at Cambridge, published as A Lecture on The Study of History (1895), 54-55. It is time, therefore, to abandon the superstition that natural science cannot be regarded as logically respectable until philosophers have solved the problem of induction. The problem of induction is, roughly speaking, the problem of finding a way to prove that certain empirical generalizations which are derived from past experience will hold good also in the future. Language, Truth and Logic (1960), 49. It is true that mathematics, owing to the fact that its whole content is built up by means of purely logical deduction from a small number of universally comprehended principles, has not unfittingly been designated as the science of the self-evident [Selbstverständlichen]. Experience, however, shows that for the majority of the cultured, even of scientists, mathematics remains the science of the incomprehensible [Unverständlichen]. In Ueber Wert und angeblichen Unwert der Mathematik, Jahresbericht der Deutschen Mathematiker-Vereinigung (1904), 357. It is true that physics gives a wonderful training in precise, logical thinking—about physics. It really does depend upon accurate reproducible experiments, and upon framing hypotheses with the greatest possible freedom from dogmatic prejudice. And if these were the really important things in life, physics would be an essential study for everybody. In Science is a Sacred Cow (1950), 90-91. It is well-known that both rude and civilized peoples are capable of showing unspeakable, and as it is erroneously termed, inhuman cruelty towards each other.
These acts of cruelty, murder and rapine are often the result of the inexorable logic of national characteristics, and are unhappily truly human, since nothing like them can be traced in the animal world. It would, for instance, be a grave mistake to compare a tiger with the bloodthirsty executioner of the Reign of Terror, since the former only satisfies his natural appetite in preying on other mammals. The atrocities of the trials for witchcraft, the indiscriminate slaughter committed by the negroes on the coast of Guinea, the sacrifice of human victims made by the Khonds, the dismemberment of living men by the Battas, find no parallel in the habits of animals in their savage state. And such a comparison is, above all, impossible in the case of anthropoids, which display no hostility towards men or other animals unless they are first attacked. In this respect the anthropoid ape stands on a higher plane than many men. Robert Hartmann, Anthropoid Apes, 294-295. It must be granted that in every syllogism, considered as an argument to prove the conclusion, there is a petitio principii. When we say, All men are mortal, Socrates is a man, therefore Socrates is mortal; it is unanswerably urged by the adversaries of the syllogistic theory, that the proposition, Socrates is mortal, is presupposed in the more general assumption, All men are mortal. A System of Logic, Ratiocinative and Inductive (1858), 122. It really is worth the trouble to invent a new symbol if we can thus remove not a few logical difficulties and ensure the rigour of the proofs. But many mathematicians seem to have so little feeling for logical purity and accuracy that they will use a word to mean three or four different things, sooner than make the frightful decision to invent a new word. Grundgesetze der Arithmetik (1893), Vol. 2, Section 60. In P. Geach and M. Black (eds.), Translations from the Philosophical Writings of Gottlob Frege (1952), 144.
It was not alone the striving for universal culture which attracted the great masters of the Renaissance, such as Brunellesco, Leonardo da Vinci, Raphael, Michelangelo and especially Albrecht Dürer, with irresistible power to the mathematical sciences. They were conscious that, with all the freedom of the individual fantasy, art is subject to necessary laws, and conversely, with all its rigor of logical structure, mathematics follows aesthetic laws. From Lecture (5 Feb 1891) held at the Rathhaus, Zürich, printed as Ueber den Antheil der mathematischen Wissenschaft an der Kultur der Renaissance (1892), 19. (The Contribution of the Mathematical Sciences to the Culture of the Renaissance.) As translated in Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath’s Quotation-Book (1914), 183. John Bahcall, an astronomer on the Institute for Advanced Study faculty since 1970, likes to tell the story of his first faculty dinner, when he found himself seated across from Kurt Gödel, … a man dedicated to logic and the clean certainties of mathematical abstraction. Bahcall introduced himself and mentioned that he was a physicist. Gödel replied, “I don’t believe in natural science.” As stated in Adam Begley, 'The Lonely Genius Club', New York Magazine (30 Jan 1995), 63. Kurt Gödel’s achievement in modern logic is singular and monumental—indeed it is more than a monument, it is a landmark which will remain visible far in space and time. … The subject of logic has certainly completely changed its nature and possibilities with Gödel's achievement. From remarks at the Presentation (Mar 1951) of the Albert Einstein Award to Dr. Gödel, as quoted in 'Tribute to Dr. Gödel', in Jack J. Bulloff, Thomas C. Holyoke (eds.), Foundations of Mathematics: Symposium Papers Commemorating the Sixtieth Birthday of Kurt Gödel (1969), ix.
Lakatos realized and admitted that the existing standards of rationality, standards of logic included, were too restrictive and would have hindered science had they been applied with determination. He therefore permitted the scientist to violate them (he admits that science is not “rational” in the sense of these standards). However, he demanded that research programmes show certain features in the long run—they must be progressive. … I have argued that this demand no longer restricts scientific practice. Any development agrees with it. In Science in a Free Society (1978), 15. Like Molière’s M. Jourdain, who spoke prose all his life without knowing it, mathematicians have been reasoning for at least two millennia without being aware of all the principles underlying what they were doing. The real nature of the tools of their craft has become evident only within recent times. A renaissance of logical studies in modern times begins with the publication in 1847 of George Boole’s The Mathematical Analysis of Logic. Co-authored with James R. Newman in Gödel's Proof (1986, 2005), 30. Logic can be patient, for it is eternal. Quoted without citation in Desmond MacHale, Comic Sections (1993), 146. Logic does not pretend to teach the surgeon what are the symptoms which indicate a violent death. This he must learn from his own experience and observation, or from that of others, his predecessors in his peculiar science. But logic sits in judgment on the sufficiency of that observation and experience to justify his rules, and on the sufficiency of his rules to justify his conduct. It does not give him proofs, but teaches him what makes them proofs, and how he is to judge of them. In A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and the Methods of Scientific Investigation (1843), Vol. 1, 11. Logic doesn’t apply to the real world. As quoted, without citation, as one of Minsky's “favorite claims”, in D.R.
Hofstadter and D.C. Dennett (eds.), The Mind's I (1981), 343. The context by Hofstadter is that the “real world” is “chaotic and messy”. Logic has borrowed the rules of geometry without understanding its power. … I am far from placing logicians by the side of geometers who teach the true way to guide the reason. … The method of avoiding error is sought by every one. The logicians profess to lead the way, the geometers alone reach it, and aside from their science there is no true demonstration. From De l’Art de Persuader (1657). Pensées de Pascal (1842), Part 1, Article 3, 41-42. As translated in Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath’s Quotation-Book (1914), 202. From the original French, “La logique a peut-être emprunté les règles de la géométrie sans en comprendre la force … je serai bien éloigné de les mettre en parallèle avec les géomètres, qui apprennent la véritable méthode de conduire la raison. … La méthode de ne point errer est recherchée de tout le monde. Les logiciens font profession d'y conduire, les géomètres seuls y arrivent; et, hors de leur science …, il n'y a point de véritables démonstrations ….” Logic is a wonderful thing but doesn't always beat actual thought. The Last Continent (1998). Logic is like the sword—those who appeal to it shall perish by it. Samuel Butler, Henry Festing Jones (ed.), The Note-Books of Samuel Butler (1917), 330. Logic is neither a science nor an art, but a dodge. Quoted in Evelyn Abbott and Lewis Campbell, The Life and Letters of Benjamin Jowett, M.A., Master of Balliol College, Oxford (1897), Vol. 1, 131. Logic is not concerned with human behavior in the same sense that physiology, psychology, and social sciences are concerned with it. These sciences formulate laws or universal statements which have as their subject matter human activities as processes in time. Logic, on the contrary, is concerned with relations between factual sentences (or thoughts).
If logic ever discusses the truth of factual sentences it does so only conditionally, somewhat as follows: if such-and-such a sentence is true, then such-and-such another sentence is true. Logic itself does not decide whether the first sentence is true, but surrenders that question to one or the other of the empirical sciences. Logic (1937). In The Language of Wisdom and Folly: Background Readings in Semantics (1967), 44. Logic is only the art of going wrong with confidence. This is a slightly reworded version of part of a quote by Joseph Wood Krutch (see the quote beginning “Metaphysics…”, on the Joseph Wood Krutch Quotes page of this website.) This note by Webmaster is included here to help readers identify that it is incorrectly cited when found attributed to Morris Kline, John Ralston Saul or W.H. Auden. In fact, the quote is simply attributed to Anonymous by Kline in his Mathematics: The Loss of Certainty (1980), 197; and as an “old conundrum” in Saul's On Equilibrium: The Six Qualities of the New Humanism (2004), 124. Logic is the hygiene the mathematician practices to keep his ideas healthy and strong. As quoted, without citation, in Morris Kline, 'Logic Versus Pedagogy', The American Mathematical Monthly (Mar 1970), 77, No. 3, 272. Logic is the last scientific ingredient of Philosophy; its extraction leaves behind only a confusion of non-scientific, pseudo problems. The Unity of Science, trans. Max Black (1934), 22. Logic issues in tautologies, mathematics in identities, philosophy in definitions; all trivial, but all part of the vital work of clarifying and organising our thought. 'Last Papers: Philosophy' (1929), in The Foundations of Mathematics and Other Logical Essays (1931), 264. 
Logic it is called [referring to Whitehead and Russell’s Principia Mathematica] and logic it is, the logic of propositions and functions and classes and relations, by far the greatest (not merely the biggest) logic that our planet has produced, so much that is new in matter and in manner; but it is also mathematics, a prolegomenon to the science, yet itself mathematics in its most genuine sense, differing from other parts of the science only in the respects that it surpasses these in fundamentality, generality and precision, and lacks traditionality. Few will read it, but all will feel its effect, for behind it is the urgence and push of a magnificent past: two thousand five hundred years of record and yet longer tradition of human endeavor to think aright. In Science (1912), 35, 110, from his book review on Alfred North Whitehead and Bertrand Russell, Principia Mathematica. Logic sometimes breeds monsters. In Science and Method (1952), 125. Logic teaches us that on such and such a road we are sure of not meeting an obstacle; it does not tell us which is the road that leads to the desired end. For this, it is necessary to see the end from afar, and the faculty which teaches us to see is intuition. Without it, the geometrician would be like a writer well up in grammar but destitute of ideas. LOGIC, n. The art of thinking and reasoning in strict accordance with the limitations and incapacities of the human misunderstanding. The basis of logic is the syllogism, consisting of a major and a minor premise and a conclusion—thus: Major Premise: Sixty men can do a piece of work sixty times as quickly as one man. Minor Premise: One man can dig a post-hole in sixty seconds; therefore— Conclusion: Sixty men can dig a post-hole in one second. This may be called the syllogism arithmetical, in which, by combining logic and mathematics, we obtain a double certainty and are twice blessed. The Collected Works of Ambrose Bierce (1911), Vol. 7, The Devil's Dictionary, 196.
Logic, like whiskey, loses its beneficial effect when taken in too large quantities. In 'Weeds and Moss', My Ireland (1937), Chap. 19, 186. Logic, logic, logic. Logic is the beginning of wisdom, Valeris, not the end. Spoken by character Mr. Spock in movie Star Trek VI: The Undiscovered Country (1992), screenwriters Nicholas Meyer and Denny Martin Flinn. As cited in Gary Westfahl (ed.), The Greenwood Encyclopedia of Science Fiction and Fantasy (2005), Vol. 2, 892. Logical consequences are the scarecrows of fools and the beacons of wise men. 'On the Hypothesis that Animals are Automata', The Fortnightly (1874), 22, 577. Man has never been a particularly modest or self-deprecatory animal, and physical theory bears witness to this no less than many other important activities. The idea that thought is the measure of all things, that there is such a thing as utter logical rigor, that conclusions can be drawn endowed with an inescapable necessity, that mathematics has an absolute validity and controls experience—these are not the ideas of a modest animal. Not only do our theories betray these somewhat bumptious traits of self-appreciation, but especially obvious through them all is the thread of incorrigible optimism so characteristic of human beings. In The Nature of Physical Theory (1936), 135-136. Mathematicians create by acts of insight and intuition. Logic then sanctions the conquests of intuition. It is the hygiene that mathematics practices to keep its ideas healthy and strong. Moreover, the whole structure rests fundamentally on uncertain ground, the intuition of humans. Here and there an intuition is scooped out and replaced by a firmly built pillar of thought; however, this pillar is based on some deeper, perhaps less clearly defined, intuition. Though the process of replacing intuitions with precise thoughts does not change the nature of the ground on which mathematics ultimately rests, it does add strength and height to the structure.
In Mathematics in Western Culture (1964), 408.

Mathematicians deal with possible worlds, with an infinite number of logically consistent systems. Observers explore the one particular world we inhabit. Between the two stands the theorist. He studies possible worlds but only those which are compatible with the information furnished by observers. In other words, theory attempts to segregate the minimum number of possible worlds which must include the actual world we inhabit. Then the observer, with new factual information, attempts to reduce the list further. And so it goes, observation and theory advancing together toward the common goal of science, knowledge of the structure and observation of the universe. Lecture to Sigma Xi, 'The Problem of the Expanding Universe' (1941), printed in Sigma Xi Quarterly (1942), 30, 104-105. Reprinted in Smithsonian Institution Report of the Board of Regents (1943), 97, 123. As cited by Norriss S. Hetherington in 'Philosophical Values and Observation in Edwin Hubble's Choice of a Model of the Universe', Historical Studies in the Physical Sciences (1982), 13, No. 1,

Mathematicians go mad, and cashiers; but creative artists very seldom. I am not, as will be seen, in any sense attacking logic: I only say that the danger does lie in logic, not in imagination. In Orthodoxy (1908), 27.

Mathematics … belongs to every inquiry, moral as well as physical. Even the rules of logic, by which it is rigidly bound, could not be deduced without its aid. The laws of argument admit of simple statement, but they must be curiously transposed before they can be applied to the living speech and verified by observation. In its pure and simple form the syllogism cannot be directly compared with all experience, or it would not have required an Aristotle to discover it. It must be transmuted into all the possible shapes in which reasoning loves to clothe itself. The transmutation is the mathematical process in the establishment of the law.
From Memoir (1870) read before the National Academy of Sciences, Washington, printed in 'Linear Associative Algebra', American Journal of Mathematics (1881), 4, 97-98.

Mathematics as an expression of the human mind reflects the active will, the contemplative reason, and the desire for aesthetic perfection. Its basic elements are logic and intuition, analysis and construction, generality and individuality. Though different traditions may emphasize different aspects, it is only the interplay of these antithetic forces and the struggle for their synthesis that constitute the life, usefulness, and supreme value of mathematical science. As co-author with Herbert Robbins, in What Is Mathematics?: An Elementary Approach to Ideas and Methods (1941, 1996), x.

Mathematics had never had more than a secondary interest for him [her husband, George Boole]; and even logic he cared for chiefly as a means of clearing the ground of doctrines imagined to be proved, by showing that the evidence on which they were supposed to rest had no tendency to prove them. But he had been endeavoring to give a more active and positive help than this to the cause of what he deemed pure religion. In Eleanor Meredith Cobham, Mary Everest Boole: Collected Works (1931), 40.

Mathematics has often been characterized as the most conservative of all sciences. This is true in the sense of the immediate dependence of new upon old results. All the marvellous new advancements presuppose the old as indispensable steps in the ladder. … Inaccessibility of special fields of mathematics, except by the regular way of logically antecedent acquirements, renders the study discouraging or hateful to weak or indolent minds. In Number and its Algebra (1896), 136.

Mathematics is a logical method … Mathematical propositions express no thoughts.
In life it is never a mathematical proposition which we need, but we use mathematical propositions only in order to infer from propositions which do not belong to mathematics to others which equally do not belong to mathematics. In Tractatus Logico-Philosophicus (1922), 169 (statements 6.2-6.211).

Mathematics is a structure providing observers with a framework upon which to base healthy, informed, and intelligent judgment. Data and information are slung about us from all directions, and we are to use them as a basis for informed decisions. … Ability to critically analyze an argument purported to be logical, free of the impact of the loaded meanings of the terms involved, is basic to an informed populace. In 'Mathematics Is an Edifice, Not a Toolbox', Notices of the AMS (Oct 1996), 43, No. 10, 1108.

Mathematics is a study which, when we start from its most familiar portions, may be pursued in either of two opposite directions. The more familiar direction is constructive, towards gradually increasing complexity: from integers to fractions, real numbers, complex numbers; from addition and multiplication to differentiation and integration, and on to higher mathematics. The other direction, which is less familiar, proceeds, by analysing, to greater and greater abstractness and logical simplicity; instead of asking what can be defined and deduced from what is assumed to begin with, we ask instead what more general ideas and principles can be found, in terms of which what was our starting-point can be defined or deduced. It is the fact of pursuing this opposite direction that characterises mathematical philosophy as opposed to ordinary mathematics. In Introduction to Mathematical Philosophy (1920), 1.

Mathematics is distinguished from all other sciences except only ethics, in standing in no need of ethics.
Every other science, even logic—logic, especially—is in its early stages in danger of evaporating into airy nothingness, degenerating, as the Germans say, into an arachnoid film, spun from the stuff that dreams are made of. There is no such danger for pure mathematics; for that is precisely what mathematics ought to be. In Charles S. Peirce, Charles Hartshorne (ed.), Paul Weiss (ed.), Collected Papers of Charles Sanders Peirce (1931), Vol. 4, 200.

Mathematics is, as it were, a sensuous logic, and relates to philosophy as do the arts, music, and plastic art to poetry. Aphorism 365 from Selected Aphorisms from the Lyceum (1797-1800). In Friedrich Schlegel, translated by Ernst Behler and Roman Struc, Dialogue on Poetry and Literary Aphorisms (trans. 1968), 147.

Mathematics will not be properly esteemed in wider circles until more than the a b c of it is taught in the schools, and until the unfortunate impression is gotten rid of that mathematics serves no other purpose in instruction than the formal training of the mind. The aim of mathematics is its content, its form is a secondary consideration and need not necessarily be that historic form which is due to the circumstance that mathematics took permanent shape under the influence of Greek logic. In Die Entwickelung der Mathematik in den letzten Jahrhunderten (1884), 6.

Mathematics, from the earliest times to which the history of human reason can reach, has followed, among that wonderful people of the Greeks, the safe way of science. But it must not be supposed that it was as easy for mathematics as for logic, in which reason is concerned with itself alone, to find, or rather to make for itself that royal road.
I believe, on the contrary, that there was a long period of tentative work (chiefly still among the Egyptians), and that the change is to be ascribed to a revolution, produced by the happy thought of a single man, whose experiments pointed unmistakably to the path that had to be followed, and opened and traced out for the most distant times the safe way of a science. The history of that intellectual revolution, which was far more important than the passage round the celebrated Cape of Good Hope, and the name of its fortunate author, have not been preserved to us. … A new light flashed on the first man who demonstrated the properties of the isosceles triangle (whether his name was Thales or any other name), for he found that he had not to investigate what he saw in the figure, or the mere concepts of that figure, and thus to learn its properties; but that he had to produce (by construction) what he had himself, according to concepts a priori, placed into that figure and represented in it, so that, in order to know anything with certainty a priori, he must not attribute to that figure anything beyond what necessarily follows from what he has himself placed into it, in accordance with the concept. In Critique of Pure Reason, Preface to the Second Edition (1900), 690.

Mathematics, or the science of magnitudes, is that system which studies the quantitative relations between things; logic, or the science of concepts, is that system which studies the qualitative (categorical) relations between things. In 'The Axioms of Logic', Tertium Organum: The Third Canon of Thought; a Key to the Enigmas of the World (1922), 246.

Mathematics, that giant pincers of scientific logic… From Address to the Ohio Academy of Science, 'Biology and Mathematics', printed in Science (11 Aug 1905), New Series 22, No. 554, 162.

Men are rather beholden ... generally to chance or anything else, than to logic, for the invention of arts and sciences.
The Advancement of Learning (1605) in James Spedding, Robert Ellis and Douglas Heath (eds.), The Works of Francis Bacon (1887-1901), Vol. 3, 386.

Men of science belong to two different types—the logical and the intuitive. Science owes its progress to both forms of minds. Mathematics, although a purely logical structure, nevertheless makes use of intuition. Among the mathematicians there are intuitives and logicians, analysts and geometricians. Hermite and Weierstrass were intuitives. Riemann and Bertrand, logicians. The discoveries of intuition have always to be developed by logic. In Man the Unknown (1935), 123.

Metaphysics may be, after all, only the art of being sure of something that is not so and logic only the art of going wrong with confidence. The Modern Temper (1929), 228. The second part of this quote is often seen as a sentence by itself, and a number of authors cite it incorrectly. For those invalid attributions, see quote beginning “Logic is only the art…” on the Joseph Wood Krutch Quotes page of this website.

Neither in the subjective nor in the objective world can we find a criterion for the reality of the number concept, because the first contains no such concept, and the second contains nothing that is free from the concept. How then can we arrive at a criterion? Not by evidence, for the dice of evidence are loaded. Not by logic, for logic has no existence independent of mathematics: it is only one phase of this multiplied necessity that we call mathematics. How then shall mathematical concepts be judged? They shall not be judged. Mathematics is the supreme arbiter. From its decisions there is no appeal. We cannot change the rules of the game, we cannot ascertain whether the game is fair. We can only study the player at his game; not, however, with the detached attitude of a bystander, for we are watching our own minds at play. In Number: The Language of Science; a Critical Survey Written for the Cultured Non-Mathematician (1937), 244-245.
No deeply-rooted tendency was ever extirpated by adverse argument. Not having originally been founded on argument, it cannot be destroyed by logic. In Problems of Life and Mind (1874), Vol. 1, 7.

Not everything is an idea. Otherwise psychology would contain all the sciences within it or at least it would be the highest judge over all the sciences. Otherwise psychology would rule over logic and mathematics. But nothing would be a greater misunderstanding of mathematics than its subordination to psychology. In Elmer Daniel Klemke, Essays on Frege (1968), 531.

Numerical logistic is that which employs numbers; symbolic logistic that which uses symbols, as, say, the letters of the alphabet. In Introduction to the Analytic Art (1591).

O Logic: born gatekeeper to the Temple of Science, victim of capricious destiny: doomed hitherto to be the drudge of pedants: come to the aid of thy master, Legislation. In John Bowring (ed.), 'Extracts from Bentham’s Commonplace Book: Logic', The Works of Jeremy Bentham (1843), Vol. 10, 145.

Of science and logic he chatters,
As fine and as fast as he can;
Though I am no judge of such matters,
I’m sure he’s a talented man.
'The Talented Man.' In Winthrop Mackworth Praed, Ferris Greenslet, The Poems of Winthrop Mackworth Praed (1909), 122.

Oh, my dear Kepler, how I wish that we could have one hearty laugh together. Here, at Padua, is the principal professor of philosophy, whom I have repeatedly and urgently requested to look at the moon and planets through my glass, [telescope] which he pertinaciously refuses to do. Why are you not here? what shouts of laughter we should have at this glorious folly! and to hear the professor of philosophy at Pisa laboring before the grand duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky. From Letter to Johannes Kepler.
As translated in John Elliot Drinkwater Bethune, Life of Galileo Galilei: With Illustrations of the Advancement of Experimental Philosophy (1832), 92-93.

One of the principal obstacles to the rapid diffusion of a new idea lies in the difficulty of finding suitable expression to convey its essential point to other minds. Words may have to be strained into a new sense, and scientific controversies constantly resolve themselves into differences about the meaning of words. On the other hand, a happy nomenclature has sometimes been more powerful than rigorous logic in allowing a new train of thought to be quickly and generally accepted. Opening Address to the Annual Meeting of the British Association by Prof. Arthur Schuster, in Nature (4 Aug 1892), 46, 325.

One thought I cannot forbear suggesting: we have long known that “one star differeth from another star in glory;" we have now the strongest evidence that they also differ in constituent materials,—some of them perhaps having no elements to be found in some other. What then becomes of that homogeneity of original diffuse matter which is almost a logical necessity of the nebular hypothesis? L.M. Rutherfurd, 'Astronomical Observations with the Spectroscope' (4 Dec 1862), American Journal of Science and Arts (May 1863), 2nd Series, 35, No. 103, 77. His obituarist, John K. Rees, wrote (1892) “This paper was the first published work on star spectra.”

Only mathematics and mathematical logic can say as little as the physicist means to say. (1931) In The Scientific Outlook (1931, 2009), 57.

Ordinarily logic is divided into the examination of ideas, judgments, arguments, and methods. The two latter are generally reduced to judgments, that is, arguments are reduced to apodictic judgments that such and such conclusions follow from such and such premises, and method is reduced to judgments that prescribe the procedure that should be followed in the search for truth.
Ampère expresses how arguments have a logical structure which he expected should be applied to relate scientific theories to experimental evidence. In James R. Hofmann, André-Marie Ampère (1996), 158. Cites Académie des Sciences Ampère Archives, École Normale lecture 15 notes, box 261.

Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say. In The Scientific Outlook (1931, 2001), 61.

Our Professor, which doth have tenure,
Feared be thy name.
Thy sets partition,
Thy maps commute,
In groups as in vector spaces.
Give us this day our daily notation,
And forgive us our obtuseness,
As we forgive tutors who cannot help us.
Lead us not into Lie rings,
But deliver us from eigenvalues,
For thine is the logic, the notation, and the accent,
That confuses us forever.
'Algebra Prayer' by an unnamed University of Toronto mathematics student. On the Department of Mathematics, University of Toronto web site.

Poincaré was a vigorous opponent of the theory that all mathematics can be rewritten in terms of the most elementary notions of classical logic; something more than logic, he believed, makes mathematics what it is. In Men of Mathematics (1937), 552.

Professor [Max] Planck, of Berlin, the famous originator of the Quantum Theory, once remarked to me that in early life he had thought of studying economics, but had found it too difficult! Professor Planck could easily master the whole corpus of mathematical economics in a few days. He did not mean that!
But the amalgam of logic and intuition and the wide knowledge of facts, most of which are not precise, which is required for economic interpretation in its highest form is, quite truly, overwhelmingly difficult for those whose gift mainly consists in the power to imagine and pursue to their furthest points the implications and prior conditions of comparatively simple facts which are known with a high degree of precision. 'Alfred Marshall: 1842-1924' (1924). In Geoffrey Keynes (ed.), Essays in Biography (1933), 191-2.

Professor Whitehead has recently restored a seventeenth century phrase—"climate of opinion." The phrase is much needed. Whether arguments command assent or not depends less upon the logic that conveys them than upon the climate of opinion in which they are sustained. In The Heavenly City of the Eighteenth-Century Philosophers (1932, 2003), 5.

PROJECTILE, n. The final arbiter in international disputes. Formerly these disputes were settled by physical contact of the disputants, with such simple arguments as the rudimentary logic of the times could supply—the sword, the spear, and so forth. With the growth of prudence in military affairs the projectile came more and more into favor, and is now held in high esteem by the most courageous. Its capital defect is that it requires personal attendance at the point of propulsion. The Collected Works of Ambrose Bierce (1911), Vol. 7, The Devil's Dictionary, 268.

Pure mathematics … reveals itself as nothing but symbolic or formal logic. It is concerned with implications, not applications. On the other hand, natural science, which is empirical and ultimately dependent upon observation and experiment, and therefore incapable of absolute exactness, cannot become strictly mathematical. The certainty of geometry is thus merely the certainty with which conclusions follow from non-contradictory premises. As to whether these conclusions are true of the material world or not, pure mathematics is indifferent.
In 'Non-Euclidian Geometry of the Fourth Dimension', collected in Henry Parker Manning (ed.), The Fourth Dimension Simply Explained (1910), 58.

Pure mathematics is a collection of hypothetical, deductive theories, each consisting of a definite system of primitive, undefined, concepts or symbols and primitive, unproved, but self-consistent assumptions (commonly called axioms) together with their logically deducible consequences following by rigidly deductive processes without appeal to intuition. In 'Non-Euclidian Geometry of the Fourth Dimension', collected in Henry Parker Manning (ed.), The Fourth Dimension Simply Explained (1910), 58.

Pure Mathematics is the class of all propositions of the form “p implies q,” where p and q are propositions containing one or more variables, the same in the two propositions, and neither p nor q contains any constants except logical constants. And logical constants are all notions definable in terms of the following: Implication, the relation of a term to a class of which it is a member, the notion of such that, the notion of relation, and such further notions as may be involved in the general notion of propositions of the above form. In addition to these, mathematics uses a notion which is not a constituent of the propositions which it considers, namely the notion of truth. In 'Definition of Pure Mathematics', Principles of Mathematics (1903), 3.

Pure mathematics is, in its way, the poetry of logical ideas. One seeks the most general ideas of operation which will bring together in simple, logical and unified form the largest possible circle of formal relationships. In this effort toward logical beauty spiritual formulas are discovered necessary for the deeper penetration into the laws of nature. In letter (1 May 1935), Letters to the Editor, 'The Late Emmy Noether: Professor Einstein Writes in Appreciation of a Fellow-Mathematician', New York Times (4 May 1935), 12.
Pure mathematics was discovered by Boole in a work which he called “The Laws of Thought” (1854).… His book was in fact concerned with formal logic, and this is the same thing as mathematics. In 'Recent Work on the Principles of Mathematics', The International Monthly (Jul-Dec 1901), 4, 83. Relevant context appears in a footnote in William Bragg Ewald, From Kant to Hilbert: A Source Book in the Foundations of Mathematics (1996), Vol. 1, 442, which gives: “Russell’s essay was written for a popular audience, and (as he notes) for an editor who asked him to make the essay ‘as romantic as possible’. Russell’s considered appraisal of Boole was more sober. For instance, in Our Knowledge of the External World, Lecture II, he says of Boole: ‘But in him and his successors, before Peano and Frege, the only thing really achieved, apart from certain details, was the invention of a mathematical symbolism for deducing consequences from the premises which the newer methods shared with

Quantity is that which is operated with according to fixed mutually consistent laws. Both operator and operand must derive their meaning from the laws of operation. In the case of ordinary algebra these are the three laws already indicated [the commutative, associative, and distributive laws], in the algebra of quaternions the same save the law of commutation for multiplication and division, and so on. It may be questioned whether this definition is sufficient, and it may be objected that it is vague; but the reader will do well to reflect that any definition must include the linear algebras of Peirce, the algebra of logic, and others that may be easily imagined, although they have not yet been developed. This general definition of quantity enables us to see how operators may be treated as quantities, and thus to understand the rationale of the so called symbolical methods. In 'Mathematics', Encyclopedia Britannica (9th ed.).

Science attempts to find logic and simplicity in nature.
Mathematics attempts to establish order and simplicity in human thought. The Pursuit of Simplicity (1980), 17.

Science derives its conclusions by the laws of logic from our sense perceptions. Thus it does not deal with the real world, of which we know nothing, but with the world as it appears to our senses. … All our sense perceptions are limited by and attached to the conceptions of time and space. … Modern physics has come to the same conclusion in the relativity theory, that absolute space and absolute time have no existence, but, time and space exist only as far as things or events fill them, that is, are forms of sense perception. In 'Religion and Modern Science', The Christian Register (16 Nov 1922), 101, 1089. The article is introduced as “the substance of an address to the Laymen’s League in All Soul’s Church (5 Nov 1922).”

Science has hitherto been proceeding without the guidance of any rational theory of logic, and has certainly made good progress. It is like a computer who is pursuing some method of arithmetical approximation. Even if he occasionally makes mistakes in his ciphering, yet if the process is a good one they will rectify themselves. But then he would approximate much more rapidly if he did not commit these errors; and in my opinion, the time has come when science ought to be provided with a logic. My theory satisfies me; I can see no flaw in it. According to that theory universality, necessity, exactitude, in the absolute sense of these words, are unattainable by us, and do not exist in nature. There is an ideal law to which nature approximates; but to express it would require an endless series of modifications, like the decimals expressing a surd. Only when you have asked a question in so crude a shape that continuity is not involved, is a perfectly true answer attainable. Letter to G. F. Becker, 11 June 1893. Merrill Collection, Library of Congress.
Quoted in Nathan Reingold, Science in Nineteenth-Century America: A Documentary History (1966), 231-2.

Science has taught us to think the unthinkable. Because when nature is the guide—rather than a priori prejudices, hopes, fears or desires—we are forced out of our comfort zone. One by one, pillars of classical logic have fallen by the wayside as science progressed in the 20th century, from Einstein's realization that measurements of space and time were not absolute but observer-dependent, to quantum mechanics, which not only put fundamental limits on what we can empirically know but also demonstrated that elementary particles and the atoms they form are doing a million seemingly impossible things at once. In op-ed, 'A Universe Without Purpose', Los Angeles Times (1 Apr 2012).

Science is a method of logical analysis of nature’s operations. It has lessened human anxiety about the cosmos by demonstrating the materiality of nature’s forces, and their frequent predictability. In Sexual Personae: Art and Decadence from Nefertiti to Emily Dickinson (1990), 5.

Science is simply common sense at its best—that is, rigidly accurate in observation, and merciless to fallacy in logic. In The Crayfish: An Introduction to the Study of Zoology (1880), 2.

Science seldom proceeds in the straightforward logical manner imagined by outsiders. Instead, its steps forward (and sometimes backward) are often very human events in which personalities and cultural traditions play major roles. In The Double Helix: A Personal Account of the Discovery of the Structure of DNA (1968, 2001), Preface, xi.

Scientists are the easiest to fool. ... They think in straight, predictable, directable, and therefore misdirectable, lines. The only world they know is the one where everything has a logical explanation and things are what they appear to be. Children and conjurors—they terrify me. Scientists are no problem; against them I feel quite confident. Code of the Lifemaker (1983, 2000), Chapter 1.
Simple molecules combine to make powerful chemicals. Simple cells combine to make powerful life-forms. Simple electronics combine to make powerful computers. Logically, all things are created by a combination of simpler, less capable components. Therefore, a supreme being must be in our future, not our origin. What if “God” is the consciousness that will be created when enough of us are connected by the Internet?!! Thoughts by character Dogbert in Dilbert cartoon strip (11 Feb 1996).

Since my logic aims to teach and instruct the understanding, not that it may with the slender tendrils of the mind snatch at and lay hold of abstract notions (as the common logic does), but that it may in very truth dissect nature, and discover the virtues and actions of bodies, with their laws as determined in matter; so that this science flows not merely from the nature of the mind, but also from the nature of things. In Novum Organum (1620), Book 2, Aphorism 42.

Since the examination of consistency is a task that cannot be avoided, it appears necessary to axiomatize logic itself and to prove that number theory and set theory are only parts of logic. This method was prepared long ago (not least by Frege’s profound investigations); it has been most successfully explained by the acute mathematician and logician Russell. One could regard the completion of this magnificent Russellian enterprise of the axiomatization of logic as the crowning achievement of the work of axiomatization as a whole. Address (11 Sep 1917), 'Axiomatisches Denken' delivered before the Swiss Mathematical Society in Zürich. Translated by Ewald as 'Axiomatic Thought', (1918), in William Bragg Ewald, From Kant to Hilbert (1996), Vol. 2, 1113.

Sir Hiram Maxim is a genuine and typical example of the man of science, romantic, excitable, full of real but somewhat obvious poetry, a little hazy in logic and philosophy, but full of hearty enthusiasm and an honorable simplicity.
He is, as he expresses it, “an old and trained engineer,” and is like all of the old and trained engineers I have happened to come across, a man who indemnifies himself for the superhuman or inhuman concentration required for physical science by a vague and dangerous romanticism about everything else. In G.K. Chesterton, 'The Maxims of Maxim', Daily News (25 Feb 1905). Collected in G. K. Chesterton and Dale Ahlquist (ed.), In Defense of Sanity: The Best Essays of G.K. Chesterton (2011), 87.

Slavery in America was perpetuated not merely by human badness but also by human blindness. … Men convinced themselves that a system that was so economically profitable must be morally justifiable. … Science was commandeered to prove the biological inferiority of the Negro. Even philosophical logic was manipulated [exemplified by] an Aristotelian syllogism: All men are made in the image of God; God, as everyone knows, is not a Negro; Therefore, the Negro is not a man. 'Love in Action', Strength To Love (1963, 1981), 44.

So-called extraordinary events always split naturalists who have not witnessed them into two extremes: those who believe blindly and those who do not believe at all. The latter have always in mind the story of the golden goose; if the facts lie slightly beyond the limits of their knowledge, they relegate them immediately to fables. The former have a secret taste for marvels because they seem to expand Nature; they use their imagination with pleasure to find explanations. To remain doubtful is given to naturalists who keep a middle path between the two extremes. They calmly examine facts; they refer to logic for help; they discuss probabilities; they do not scoff at anything, not even errors, because they serve at least the history of the human mind; finally, they report rather than judge; they rarely decide unless they have good evidence. Quoted in Albert V.
Carozzi, Histoire des sciences de la terre entre 1790 et 1815 vue à travers les documents inédités de la Societé de Physique et d'Histoire Naturelle de Genève, trans. Albert V. and Marguerite Carozzi. (1990), 175.

Some books are to be tasted, others to be swallowed, and some few to be chewed and digested; that is, some books are to be read only in parts; others to be read, but not curiously; and some few to be read wholly, and with diligence and attention. Some books also may be read by deputy, and extracts made of them by others; but that would be only in the less important arguments, and the meaner sort of books; else distilled books are like common distilled waters, flashy things. Reading maketh a full man; conference a ready man; and writing an exact man. And therefore, if a man write little, he had need have a great memory; if he confer little, he had need have a present wit: and if he read little, he had need have much cunning, to seem to know that he doth not. Histories make men wise; poets witty; the mathematics subtile; natural philosophy deep; moral grave; logic and rhetoric able to contend. Abeunt studia in mores. [The studies pass into the manners.] 'Of Studies' (1625) in James Spedding, Robert Ellis and Douglas Heath (eds.), The Works of Francis Bacon (1887-1901), Vol. 6, 498.

Some people say they cannot understand a million million. Those people cannot understand that twice two makes four. That is the way I put it to people who talk to me about the incomprehensibility of such large numbers. I say finitude is incomprehensible, the infinite in the universe is comprehensible. Now apply a little logic to this. Is the negation of infinitude incomprehensible? What would you think of a universe in which you could travel one, ten, or a thousand miles, or even to California, and then find it comes to an end? Can you suppose an end of matter or an end of space? The idea is incomprehensible.
Even if you were to go millions and millions of miles the idea of coming to an end is incomprehensible. You can understand one thousand per second as easily as you can understand one per second. You can go from one to ten, and then times ten and then to a thousand without taxing your understanding, and then you can go on to a thousand million and a million million. You can all understand it. In 'The Wave Theory of Light' (1884), Popular Lectures and Addresses (1891), Vol. 1, 322. Strictly speaking, it is really scandalous that science has not yet clarified the nature of number. It might be excusable that there is still no generally accepted definition of number, if at least there were general agreement on the matter itself. However, science has not even decided on whether number is an assemblage of things, or a figure drawn on the blackboard by the hand of man; whether it is something psychical, about whose generation psychology must give information, or whether it is a logical structure; whether it is created and can vanish, or whether it is eternal. It is not known whether the propositions of arithmetic deal with those structures composed of calcium carbonate [chalk] or with non-physical entities. There is as little agreement in this matter as there is regarding the meaning of the word “equal” and the equality sign. Therefore, science does not know the thought content which is attached to its propositions; it does not know what it deals with; it is completely in the dark regarding their proper nature. Isn’t this scandalous? From opening paragraph of 'Vorwort', Über die Zahlen des Herrn H. Schubert (1899), iii. ('Foreword', On the Numbers of Mr. H. Schubert). Translated by Theodore J. Benac in Friedrich Waismann, Introduction to Mathematical Thinking: The Formation of Concepts in Modern Mathematics (1959, 2003), 107. Webmaster added “[chalk]”. 
Suppose then I want to give myself a little training in the art of reasoning; suppose I want to get out of the region of conjecture and probability, free myself from the difficult task of weighing evidence, and putting instances together to arrive at general propositions, and simply desire to know how to deal with my general propositions when I get them, and how to deduce right inferences from them; it is clear that I shall obtain this sort of discipline best in those departments of thought in which the first principles are unquestionably true. For in all our thinking, if we come to erroneous conclusions, we come to them either by accepting false premises to start with—in which case our reasoning, however good, will not save us from error; or by reasoning badly, in which case the data we start from may be perfectly sound, and yet our conclusions may be false. But in the mathematical or pure sciences,—geometry, arithmetic, algebra, trigonometry, the calculus of variations or of curves,— we know at least that there is not, and cannot be, error in our first principles, and we may therefore fasten our whole attention upon the processes. As mere exercises in logic, therefore, these sciences, based as they all are on primary truths relating to space and number, have always been supposed to furnish the most exact discipline. When Plato wrote over the portal of his school. “Let no one ignorant of geometry enter here,” he did not mean that questions relating to lines and surfaces would be discussed by his disciples. On the contrary, the topics to which he directed their attention were some of the deepest problems,— social, political, moral,—on which the mind could exercise itself. Plato and his followers tried to think out together conclusions respecting the being, the duty, and the destiny of man, and the relation in which he stood to the gods and to the unseen world. What had geometry to do with these things? 
Simply this: That a man whose mind has not undergone a rigorous training in systematic thinking, and in the art of drawing legitimate inferences from premises, was unfitted to enter on the discussion of these high topics; and that the sort of logical discipline which he needed was most likely to be obtained from geometry—the only mathematical science which in Plato’s time had been formulated and reduced to a system. And we in this country [England] have long acted on the same principle. Our future lawyers, clergy, and statesmen are expected at the University to learn a good deal about curves, and angles, and numbers and proportions; not because these subjects have the smallest relation to the needs of their lives, but because in the very act of learning them they are likely to acquire that habit of steadfast and accurate thinking, which is indispensable to success in all the pursuits of life. In Lectures on Teaching (1906), 891-92. SYLLOGISM, n. A logical formula consisting of a major and a minor assumption and an inconsequent. (See LOGIC.) The Collected Works of Ambrose Bierce (1911), Vol. 7, The Devil's Dictionary, 335. Symbolic Logic…has been disowned by many logicians on the plea that its interest is mathematical, and by many mathematicians on the plea that its interest is logical. In 'Preface', A Treatise on Universal Algebra: With Applications (1898), Vol. 1, vi. The arithmetization of mathematics … which began with Weierstrass … had for its object the separation of purely mathematical concepts, such as number and correspondence and aggregate, from intuitional ideas, which mathematics had acquired from long association with geometry and mechanics. These latter, in the opinion of the formalists, are so firmly entrenched in mathematical thought that in spite of the most careful circumspection in the choice of words, the meaning concealed behind these words, may influence our reasoning. 
For the trouble with human words is that they possess content, whereas the purpose of mathematics is to construct pure thought. But how can we avoid the use of human language? The … symbol. Only by using a symbolic language not yet usurped by those vague ideas of space, time, continuity which have their origin in intuition and tend to obscure pure reason—only thus may we hope to build mathematics on the solid foundation of logic. In Tobias Dantzig and Joseph Mazur (ed.), Number: The Language of Science (1930, ed. by Joseph Mazur 2007), 99. The arithmetic of life does not always have a logical answer. Westfield State College The body of science is not, as it is sometimes thought, a huge coherent mass of facts, neatly arranged in sequence, each one attached to the next by a logical string. In truth, whenever we discover a new fact it involves the elimination of old ones. We are always, as it turns out, fundamentally in error. In 'On Science and Certainty', Discover Magazine (Oct 1980) The book [Future of an Illusion] testifies to the fact that the genius of experimental science is not necessarily joined with the genius of logic or generalizing power. The distinction is, that the science or knowledge of the particular subject-matter furnishes the evidence, while logic furnishes the principles and rules of the estimation of evidence. In A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and the Methods of Scientific Investigation (1843), Vol. 1, 11. The distinctive Western character begins with the Greeks, who invented the habit of deductive reasoning and the science of geometry. In 'Western Civilization', collected in In Praise of Idleness and Other Essays (1935), 161. 
The doctrine that logical reasoning produces no new truths, but only unfolds and brings into view those truths which were, in effect, contained in the first principles of the reasoning, is assented to by almost all who, in modern times, have attended to the science of logic. In The Philosophy of the Inductive Sciences: Founded Upon Their History (1840), Vol. 1, 67. The emancipation of logic from the yoke of Aristotle very much resembles the emancipation of geometry from the bondage of Euclid; and, by its subsequent growth and diversification, logic, less abundantly perhaps but not less certainly than geometry, has illustrated the blessings of freedom. From Book Review in Science (19 Jan 1912), 35, No. 890, 108. Keyser was reviewing Alfred North Whitehead and Bertrand Russell, Principia Mathematica (1910). The fact that all Mathematics is Symbolic Logic is one of the greatest discoveries of our age; and when this fact has been established, the remainder of the principles of mathematics consists of the analysis of Symbolic Logic itself. In Bertrand Russell, The Principles of Mathematics (1903), 5. The fact that the proof of a theorem consists in the application of certain simple rules of logic does not dispose of the creative element in mathematics, which lies in the choice of the possibilities to be examined. As co-author with Herbert Robbins, in What Is Mathematics?: An Elementary Approach to Ideas and Methods (1941, 1996), 15. The facts of nature are what they are, but we can only view them through the spectacles of our mind. Our mind works largely by metaphor and comparison, not always (or often) by relentless logic. When we are caught in conceptual traps, the best exit is often a change in metaphor–not because the new guideline will be truer to nature (for neither the old nor the new metaphor lies ‘out there’ in the woods), but because we need a shift to more fruitful perspectives, and metaphor is often the best agent of conceptual transition. 
The faith of scientists in the power and truth of mathematics is so implicit that their work has gradually become less and less observation, and more and more calculation. The promiscuous collection and tabulation of data have given way to a process of assigning possible meanings, merely supposed real entities, to mathematical terms, working out the logical results, and then staging certain crucial experiments to check the hypothesis against the actual empirical results. But the facts which are accepted by virtue of these tests are not actually observed at all. With the advance of mathematical technique in physics, the tangible results of experiment have become less and less spectacular; on the other hand, their significance has grown in inverse proportion. The men in the laboratory have departed so far from the old forms of experimentation—typified by Galileo's weights and Franklin's kite—that they cannot be said to observe the actual objects of their curiosity at all; instead, they are watching index needles, revolving drums, and sensitive plates. No psychology of 'association' of sense-experiences can relate these data to the objects they signify, for in most cases the objects have never been experienced. Observation has become almost entirely indirect; and readings take the place of genuine witness. Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art (1942), 19-20. The familiar idea of a god who is omniscient: someone who knows everything … does not immediately ring alarm bells in our brains; it is plausible that such a being could exist. Yet, when it is probed more closely one can show that omniscience of this sort creates a logical paradox and must, by the standards of human reason, therefore be judged impossible or be qualified in some way. To see this consider this test statement: This statement is not known to be true by anyone. Now consider the plight of our hypothetical Omniscient Being (“Big O”).
Suppose first that this statement is true and Big O does not know it. Then Big O would not be omniscient. So, instead, suppose our statement is false. This means that someone must know the statement to be true; hence it must be true. So regardless of whether we assume at the outset that this statement is true or false, we are forced to conclude that it must be true! And therefore, since the statement is true, nobody (including Big O) can know that it is true. This shows that there must always be true statements that no being can know to be true. Hence there cannot be an Omniscient Being who knows all truths. Nor, by the same argument, could we or our future successors, ever attain such a state of omniscience. All that can be known is all that can be known, not all that is true. In Impossibility: The Limits of Science and the Science of Limits (1999), 11. The focal points of our different reflections have been called “science”’ or “art” according to the nature of their “formal” objects, to use the language of logic. If the object leads to action, we give the name of “art” to the compendium of rules governing its use and to their technical order. If the object is merely contemplated under different aspects, the compendium and technical order of the observations concerning this object are called “science.” Thus metaphysics is a science and ethics is an art. The same is true of theology and pyrotechnics. Definition of 'Art', Encyclopédie (1751). Translated by Nelly S. Hoyt and Thomas Cassirer (1965), 4. The functional validity of a working hypothesis is not a priori certain, because often it is initially based on intuition. However, logical deductions from such a hypothesis provide expectations (so-called prognoses) as to the circumstances under which certain phenomena will appear in nature. Such a postulate or working hypothesis can then be substantiated by additional observations ... 
The author calls such expectations and additional observations the prognosis-diagnosis method of research. Prognosis in science may be termed the prediction of the future finding of corroborative evidence of certain features or phenomena (diagnostic facts). This method of scientific research builds up and extends the relations between the subject and the object by means of a circuit of inductions and deductions. In 'The Scientific Character of Geology', The Journal of Geology (Jul 1961), 69, No. 4, 454-5. The fundamental hypothesis of genetic epistemology is that there is a parallelism between the progress made in the logical and rational organization of knowledge and the corresponding formative psychological processes. With that hypothesis, the most fruitful, most obvious field of study would be the reconstituting of human history—the history of human thinking in prehistoric man. Unfortunately, we are not very well informed in the psychology of primitive man, but there are children all around us, and it is in studying children that we have the best chance of studying the development of logical knowledge, physical knowledge, and so forth. 'Genetic Epistemology', Columbia Forum (1969), 12, 4. The fundamental principles and indispensable postulates of every genuinely productive science are not based on pure logic but rather on the metaphysical hypothesis–which no rules of logic can refute–that there exists an outer world which is entirely independent of ourselves. It is only through the immediate dictate of our consciousness that we know that this world exists. And that consciousness may to a certain degree be called a special sense. In Max Planck and James Vincent Murphy (trans.), Where Is Science Going? (1932), 138-139. 
The general mental qualification necessary for scientific advancement is that which is usually denominated “common sense,” though added to this, imagination, induction, and trained logic, either of common language or of mathematics, are important adjuncts. From presidential address (24 Nov 1877) to the Philosophical Society of Washington. As cited by L.A. Bauer in his retiring president address (5 Dec 1908), 'The Instruments and Methods of Research', published in Philosophical Society of Washington Bulletin, 15, 103. Reprinted in William Crookes (ed.) The Chemical News and Journal of Industrial Science (30 Jul 1909), 59. The greater the mind, the greater are the truths self-evident to it, and the greater also is its power to induce complex from simple truths—complex truths of which we may be as certain as we are of the primary self-evident truths themselves. In The Science of Poetry and the Philosophy of Language (1910), x. The history of psychiatry to the present day is replete with examples of loose thinking and a failure to apply even the simplest rules of logic. “A Court of Statistical Appeal” has now been equated with scientific method. Quoted in book review by Myre Sim about 'Ending the Cycle of Abuse', The Canadian Journal of Psychiatry (May 1997), 42:4, 425. The influence of the mathematics of Leibnitz upon his philosophy appears chiefly in connection with his law of continuity and his prolonged efforts to establish a Logical Calculus. … To find a Logical Calculus (implying a universal philosophical language or system of signs) is an attempt to apply in theological and philosophical investigations an analytic method analogous to that which had proved so successful in Geometry and Physics. 
It seemed to Leibnitz that if all the complex and apparently disconnected ideas which make up our knowledge could be analysed into their simple elements, and if these elements could each be represented by a definite sign, we should have a kind of “alphabet of human thoughts.” By the combination of these signs (letters of the alphabet of thought) a system of true knowledge would be built up, in which reality would be more and more adequately represented or symbolized. … In many cases the analysis may result in an infinite series of elements; but the principles of the Infinitesimal Calculus in mathematics have shown that this does not necessarily render calculation impossible or inaccurate. Thus it seemed to Leibnitz that a synthetic calculus, based upon a thorough analysis, would be the most effective instrument of knowledge that could be devised. “I feel,” he says, “that controversies can never be finished, nor silence imposed upon the Sects, unless we give up complicated reasonings in favor of simple calculations, words of vague and uncertain meaning in favor of fixed symbols [characteres].” Thus it will appear that “every paralogism is nothing but an error of calculation.” “When controversies arise, there will be no more necessity of disputation between two philosophers than between two accountants. Nothing will be needed but that they should take pen in hand, sit down with their counting-tables, and (having summoned a friend, if they like) say to one another: Let us calculate.” This sounds like the ungrudging optimism of youth; but Leibniz was optimist enough to cherish the hope of it to his life’s end. By Robert Latta in 'Introduction' to his translation of Gottfried Leibnitz, The Monadology and Other Philosophical Writings (1898), 85. Also quoted (omitting the last sentence) in Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath’s Quotation-Book (1914), 205-206. 
The ingenuity and effective logic that enabled chemists to determine complex molecular structures from the number of isomers, the reactivity of the molecule and of its fragments, the freezing point, the empirical formula, the molecular weight, etc., is one of the outstanding triumphs of the human mind. 'Trends in Chemistry', Chemical Engineering News, 7 Jan 1963, 5. The key to SETI is to guess the type of communication that an alien society would use. The best guesses so far have been that they would use radio waves, and that they would choose a frequency based on 'universal' knowledge—for instance, the 1420 MHz hydrogen frequency. But these are assumptions formulated by the human brain. Who knows what sort of logic a superadvanced nonhuman life form might use? ... Just 150 years ago, an eyeblink in history, radio waves themselves were inconceivable, and we were thinking of lighting fires to signal the Martians. Quoted on PBS web page related to Nova TV program episode on 'Origins: Do Aliens Exist in the Milky Way'. The laws of logic do not prescribe the way our minds think; they prescribe the way our minds ought to think. Swarthmore Lecture (1929) at Friends’ House, London, printed in Science and the Unseen World (1929), 55. The logic now in use serves rather to fix and give stability to the errors which have their foundation in commonly received notions than to help the search for truth. So it does more harm than good. From Novum Organum (1620), Book 1, Aphorism 12. Translated as The New Organon: Aphorisms Concerning the Interpretation of Nature and the Kingdom of Man), collected in James Spedding, Robert Ellis and Douglas Heath (eds.), The Works of Francis Bacon (1857), Vol. 4, 48-49. The logic of the subject [algebra], which, both educationally and scientifically speaking, is the most important part of it, is wholly neglected. The whole training consists in example grinding. What should have been merely the help to attain the end has become the end itself. 
The result is that algebra, as we teach it, is neither an art nor a science, but an ill-digested farrago of rules, whose object is the solution of examination problems. … The result, so far as problems worked in examinations go, is, after all, very miserable, as the reiterated complaints of examiners show; the effect on the examinee is a well-known enervation of mind, an almost incurable superficiality, which might be called Problematic Paralysis—a disease which unfits a man to follow an argument extending beyond the length of a printed octavo page. In Presidential Address British Association for the Advancement of Science (1885), Nature, 32, 447-448. The logical feebleness of science is not sufficiently borne in mind. It keeps down the weed of superstition, not by logic but by slowly rendering the mental soil unfit for its cultivation. In 'Science and Spirits', Fragments of Science for Unscientific People (1871), 409. The mathematical intellectualism is henceforth a positive doctrine, but one that inverts the usual doctrines of positivism: in place of originating progress in order, dynamics in statics, its goal is to make logical order the product of intellectual progress. The science of the future is not enwombed, as Comte would have had it, as Kant had wished it, in the forms of the science already existing; the structure of these forms reveals an original dynamism whose onward sweep is prolonged by the synthetic generation of more and more complicated forms. No speculation on number considered as a category a priori enables one to account for the questions set by modern mathematics … space affirms only the possibility of applying to a multiplicity of any elements whatever, relations whose type the intellect does not undertake to determine in advance, but, on the contrary, it asserts their existence and nourishes their unlimited development. As translated in James Byrnie Shaw, Lectures on the Philosophy of Mathematics (1918), 193. 
From Léon Brunschvicg, Les Étapes de La Philosophie Mathématique (1912), 567-568, “L’intellectualisme mathématique est désormais une doctrine positive, mais qui intervertira les formules habituelles du positivisme: au lieu de faire sortir le progrès de l’ordre, ou le dynamique du statique, il tend à faire de l'ordre logique le produit du progrès intellectuel. La science à venir n'est pas enfermée, comme l’aurait voulu Comte, comme le voulait déjà Kant, dans les formes de la science déjà faite; la constitution de ces formes révèle un dynamisme originel dont l’élan se prolonge par la génération synthétique de notions de plus en plus compliquées. Aucune spéculation sur le nombre, considéré comme catégorie a priori, ne permet de rendre compte des questions qui se sont posées pour la mathématique moderne … … l’espace ne fait qu'affirmer la possibilité d'appliquer sur une multiplicité d’éléments quelconques des relations dont l’intelligence ne cherche pas à déterminer d’avance le type, dont elle constate, au contraire, dont elle suscite le développement illimité.” The mathematician is entirely free, within the limits of his imagination, to construct what worlds he pleases. What he is to imagine is a matter for his own caprice; he is not thereby discovering the fundamental principles of the universe nor becoming acquainted with the ideas of God. If he can find, in experience, sets of entities which obey the same logical scheme as his mathematical entities, then he has applied his mathematics to the external world; he has created a branch of science. Aspects of Science: Second Series (1926), 92. The maxim is, that whatever can be affirmed (or denied) of a class, may be affirmed (or denied) of everything included in the class. This axiom, supposed to be the basis of the syllogistic theory, is termed by logicians the dictum de omni et nullo. A System of Logic, Ratiocinative and Inductive (1858), 117. 
The most obvious and easy things in mathematics are not those that come logically at the beginning; they are things that, from the point of view of logical deduction, come somewhere in the middle. Just as the easiest bodies to see are those that are neither very near nor very far… In Introduction to Mathematical Philosophy (1920), 2. The most ordinary things are to philosophy a source of insoluble puzzles. In order to explain our perceptions it constructs the concept of matter and then finds matter quite useless either for itself having or for causing perceptions in a mind. With infinite ingenuity it constructs a concept of space or time and then finds it absolutely impossible that there be objects in this space or that processes occur during this time ... The source of this kind of logic lies in excessive confidence in the so-called laws of thought. 'On Statistical Mechanics' (1904), in Theoretical Physics and Philosophical Problems (1974), 164-5. The most striking characteristic of the written language of algebra and of the higher forms of the calculus is the sharpness of definition, by which we are enabled to reason upon the symbols by the mere laws of verbal logic, discharging our minds entirely of the meaning of the symbols, until we have reached a stage of the process where we desire to interpret our results. The ability to attend to the symbols, and to perform the verbal, visible changes in the position of them permitted by the logical rules of the science, without allowing the mind to be perplexed with the meaning of the symbols until the result is reached which you wish to interpret, is a fundamental part of what is called analytical power. Many students find themselves perplexed by a perpetual attempt to interpret not only the result, but each step of the process. They thus lose much of the benefit of the labor-saving machinery of the calculus and are, indeed, frequently incapacitated for using it. 
In 'Uses of Mathesis', Bibliotheca Sacra (Jul 1875), 32, 505. The name is not the thing named but is of different logical type, higher than that of the thing named. In Angels Fear: Towards an Epistemology of the Sacred (1979, 1987), 209. The New Logic—It would be nice if it worked. Ergo, it will work. In A Mencken Chrestomathy (1949, 1956), 615. The only hope [of science] ... is in genuine induction. Aphorism 14. In Francis Bacon and Basil Montagu, The Works of Francis Bacon (1831), Vol. 14, 32. The only hope of science is genuine induction. In Maturin Murray Ballou, Edge-Tools of Speech (1899), 440. The peculiar taste both in pure and in mixed nature of those relations about which it is conversant, from its simple and definite phraseology, and from the severe logic so admirably displayed in the concatenation of its innumerable theorems, are indeed immense, and well entitled to separate and ample illustration. In Philosophy of the Human Mind (1816), Vol. 2, Chap. 2, Sec. 3, 157. The philosopher of science is not much interested in the thought processes which lead to scientific discoveries; he looks for a logical analysis of the completed theory, including the relationships establishing its validity. That is, he is not interested in the context of discovery, but in the context of justification. 'The Philosophical Significance of the Theory of Relativity' (1938). Collected in P.A. Schilpp (ed.), Albert Einstein: Philosopher-Scientist (1949, 1970), 292. Cited in G. Holton, Thematic Origins of Scientific Thought (1973), 7. The philosopher of science is not much interested in the thought processes which lead to scientific discoveries; he looks for a logical analysis of the completed theory, including the relationships establishing its validity. That is, he is not interested in the context of discovery, but in the context of justification. In 'The Philosophical Significance of the Theory of Relativity' (1949), collected in P.A.
Schilpp (ed), Albert Einstein: Philosopher-Scientist (1969), 292. As quoted and cited in Stanley Goldberg, Understanding Relativity: Origin and Impact of a Scientific Revolution (1984, 2013), 306. The principles of logic and mathematics are true universally simply because we never allow them to be anything else. And the reason for this is that we cannot abandon them without contradicting ourselves, without sinning against the rules which govern the use of language, and so making our utterances self-stultifying. In other words, the truths of logic and mathematics are analytic propositions or tautologies. Language, Truth and Logic (1960), 77. The purely formal sciences, logic and mathematics, deal with such relations which are independent of the definite content, or the substance of the objects, or at least can be. In particular, mathematics involves those relations of objects to each other that involve the concept of size, measure, number. In Theorie der Complexen Zahlensysteme, (1867), 1. Translated by Webmaster using Google Translate from the original German, “Die rein formalen Wissenschaften, Logik und Mathematik, haben solche Relationen zu behandeln, welche unabhängig von dem bestimmten Inhalte, der Substanz der Objecte sind oder es wenigstens sein können.” The purely formal Sciences, logic and mathematics, deal with those relations which are, or can be, independent of the particular content or the substance of objects. To mathematics in particular fall those relations between objects which involve the concepts of magnitude, of measure and of number. In Theorie der Complexen Zahlensysteme (1867), 1. As translated in Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath’s Quotation-book (1914), 4. From the original German, “Die rein formalen Wissenschaften, Logik und Mathematik, haben solche Relationen zu behandeln, welche unabhängig von dem bestimmten Inhalte, der Substanz der Objecte sind oder es wenigstens sein können. 
Der Mathematik fallen ins Besondere diejenigen Beziehungen der Objecte zu einander zu, die den Begriff der Grösse, des Maasses, der Zahl involviren.” The sciences are taught in following order: morality, arithmetic, accounts, agriculture, geometry, longimetry, astronomy, geomancy, economics, the art of government, physic, logic, natural philosophy, abstract mathematics, divinity, and history. From Ain-i-Akbery (c.1590). As translated from the original Persian, by Francis Gladwin in 'Akbar’s Conduct and Administrative Rules', 'Regulations For Teaching in the Public Schools', Ayeen Akbery: Or, The Institutes of the Emperor Akber (1783), Vol. 1, 290. Note: Akbar (Akber) was a great ruler; he was an enlightened statesman. He instituted a great system for general education. The scientific method is a potentiation of common sense, exercised with a specially firm determination not to persist in error if any exertion of hand or mind can deliver us from it. Like other exploratory processes, it can be resolved into a dialogue between fact and fancy, the actual and the possible; between what could be true and what is in fact the case. The purpose of scientific enquiry is not to compile an inventory of factual information, nor to build up a totalitarian world picture of Natural Laws in which every event that is not compulsory is forbidden. We should think of it rather as a logically articulated structure of justifiable beliefs about nature. It begins as a story about a Possible World—a story which we invent and criticise and modify as we go along, so that it ends by being, as nearly as we can make it, a story about real life. Induction and Intuition in Scientific Thought (1969), 59. The scientist values research by the size of its contribution to that huge, logically articulated structure of ideas which is already, though not yet half built, the most glorious accomplishment of mankind. In The Art of the Soluble (1967), 126.
Also 'Two Conceptions of Science', collected in The Strange Case of the Spotted Mice and Other Classic Essays on Science (1996), 70. The scientist, if he is to be more than a plodding gatherer of bits of information, needs to exercise an active imagination. The scientists of the past whom we now recognize as great are those who were gifted with transcendental imaginative powers, and the part played by the imaginative faculty in his daily life is at least as important for the scientist as it is for the worker in any other field—much more important than for most. A good scientist thinks logically and accurately when conditions call for logical and accurate thinking—but so does any other good worker when he has a sufficient number of well-founded facts to serve as the basis for the accurate, logical induction of generalizations and the subsequent deduction of consequences. ‘Imagination in Science’, Tomorrow (Dec 1943), 38-9. Quoted In Barbara Marinacci (ed.), Linus Pauling In His Own Words: Selected Writings, Speeches, and Interviews (1995), 82. The self-fulfilling prophecy is, in the beginning, a false definition of the situation evoking a new behavior which makes the originally false conception come true. The specious validity of the self-fulfilling prophecy perpetuates a reign of error. For the prophet will cite the actual course of events as proof that he was right from the very beginning. … Such are the perversities of social In article, 'The Self-Fulfilling Prophecy', The Antioch Review (Summer 1948), 8, No. 2, 195-196. Included as Chap. 7 of Social Theory and Social Structure (1949), 181-195. Note: Merton coined the expression “self-fulfilling prophecy.” The sense for style … is an aesthetic sense, based on admiration for the direct attainment of a foreseen end, simply and without waste.
Style in art, style in literature, style in science, style in logic, style in practical execution have fundamentally the same aesthetic qualities, namely, attainment and restraint. The love of a subject in itself and for itself, where it is not the sleepy pleasure of pacing a mental quarter-deck, is the love of style as manifested in that study. Here we are brought back to the position from which we started, the utility of education. Style, in its finest sense, is the last acquirement of the educated mind; it is also the most useful. It pervades the whole being. The administrator with a sense for style hates waste; the engineer with a sense for style economises his material; the artisan with a sense for style prefers good work. Style is the ultimate morality of the mind. In 'The Aims of Education', The Aims of Education and Other Essays (1929), 23.
Transactions Online
Koichi SASAKI, Masaru SHIMIZU, Yasuo WATANABE, "Coordinate Transformation by Nearest Neighbor Interpolation for ISAR Fixed Scene Imaging" in IEICE TRANSACTIONS on Electronics, vol. E84-C, no. 12, pp. 1905-1909, December 2001.
Abstract: The reflection signal in the inverse synthetic aperture radar is measured in the polar coordinate defined by the object rotation angle and the frequency. The reconstruction of fixed scene images requires the coordinate transformation of the polar format data into the rectangular spatial frequency domain, which is then processed by the inverse Fourier transform. In this paper a fast and flexible method of coordinate transformation based on the nearest neighbor interpolation utilizing the Delauney triangulation is at first presented. Then, the induced errors in the transformed rectangular spatial frequency data and the resultant fixed scene images are investigated by simulation under the uniform plane wave transmit-receive mode over the swept frequency 120-160 GHz, and the results which demonstrate the validity of the current coordinate transformation are presented.
URL: https://global.ieice.org/en_transactions/electronics/10.1587/e84-c_12_1905/_p
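The core step the abstract describes — resampling polar-format data onto a rectangular spatial-frequency grid by nearest-neighbor assignment — can be illustrated with a toy sketch. This is not the paper's implementation (which uses a Delaunay triangulation for speed; a brute-force search is used here for clarity), and all names and the sample data are invented:

```python
import math

def nearest_neighbor_regrid(samples, grid_x, grid_y):
    """Resample scattered (x, y, value) samples onto a rectangular grid,
    assigning each grid node the value of its nearest sample (brute force)."""
    out = []
    for gy in grid_y:
        row = []
        for gx in grid_x:
            nearest = min(samples, key=lambda s: (s[0] - gx) ** 2 + (s[1] - gy) ** 2)
            row.append(nearest[2])
        out.append(row)
    return out

# Polar-format data: measurements indexed by rotation angle and frequency
# (the frequency plays the role of the radius in the spatial-frequency plane).
samples = []
for i in range(8):                      # 8 rotation angles
    theta = i * math.pi / 16
    for k in (1.0, 1.5, 2.0):           # 3 swept "frequencies" (radii)
        x, y = k * math.cos(theta), k * math.sin(theta)
        samples.append((x, y, k))       # toy value: the radius itself

grid = [j * 0.5 for j in range(5)]      # rectangular spatial-frequency grid
image_plane = nearest_neighbor_regrid(samples, grid, grid)
```

The regridded rectangular array is what would then be passed to an inverse Fourier transform in the actual imaging pipeline.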
Worksheet Function to Test if Range Is Sorted
Did you ever need a worksheet function to determine if a range is sorted? Neither have I. But I’m all about answering questions that haven’t been asked. The SUMPRODUCT part compares every cell in the range to the one below it. If any cell is greater than the one below it, it returns TRUE and the double unary converts that to a 1. In this example, the SUMPRODUCT will return 1 – all cells are less than the one below them except A10, which is greater than A11 (blank). That brings us to the right side of the equation. If A11 is less than A10 (as it is in this example), the expression will evaluate to TRUE. Again the double unary coerces the Boolean into a 1. If A11 happened to be greater, it would return FALSE, or zero. We don’t really care if A11 is sorted, but we use the fact of whether it is to compare to the SUMPRODUCT result. If A11 is sorted, SUMPRODUCT will return zero for an otherwise sorted list. If not, SUMPRODUCT will return 1. If anything else is not sorted, SUMPRODUCT will return a larger number and the whole expression will be FALSE. Up in the formula bar, I use Control+= (or F9) to evaluate portions of the formula =SUMPRODUCT(--({FALSE;FALSE;FALSE;FALSE;FALSE;FALSE;FALSE;FALSE;FALSE;TRUE}))=--(A11<A10)
14 thoughts on “Worksheet Function to Test if Range Is Sorted”
1. Any reason not to use the more obvious AND solution? Yeah, it’s an array formula but it’s soooo much more obvious. {grin} Not to mention that it will work even with data on the last row of the worksheet (yeah, like that’s ever happened to me). Suppose the data are in B2:B6. Then the *array*(1) formulas =AND(B2:B5<=B3:B6) indicates an ascending order. =AND(B2:B5>=B3:B6) indicates a descending order. =OR(AND(B2:B5<=B3:B6),AND(B2:B5>=B3:B6)) indicates either an ascending or a descending order. Remove the = to get strict order. And with the named formulas below, the result adjusts to a changing data range.
(1) For those who might be new to array formulas: to complete an array formula, use the CTRL+SHIFT+ENTER key combination and not just the ENTER or TAB key. If done correctly, Excel will show the formula enclosed in curly brackets { and }.
2. Handy!
3. Define
Lst ='1'!$A$1:INDEX('1'!$A:$A,COUNTA('1'!$A:$A)-1)
LstOff ='1'!$A$2:INDEX('1'!$A:$A,COUNTA('1'!$A:$A))
Sort =AND(AND(Lst<=LstOff)AND(Lst>=LstOff))
Works for both Asc and Desc
4. Sort=AND(AND(Lst<=LstOff)<>AND(Lst>=LstOff))
5. Thanks for this! We were working with a large spreadsheet and wanted an easy way to conditionally format the column that was being sorted – and we had to do it without VBA or macros due to security settings. This was exactly what we needed.
6. This is an awesome formula (and also works for text!). However, it doesn’t work when there is a duplicate number (or text) in the data. Is there a modification that might solve that? Thanks!
7. This seems to work if there are duplicates.
8. That does it, thanks!! Although now I’m seeing that it won’t work if there are blanks? Any ideas on that?
9. This will work with blanks I think.
10. That latest one doesn’t seem to be working for blanks. I’m thinking this would require creating a temporary list in the array function that excludes blanks, no matter how many (for, say, rows 1:10), and then compare that to the same logic applied to, say, rows 2:11?
11. When I say a temporary list, I mean creating those two lists “in the formula” without having to use a helper column, if that’s possible. Thanks again!
12. It works for blanks unless the blank is the first or the last cell. I didn’t test the extremes, I guess. This seems to work for any blanks
13. Nope. That won’t work if the first two are blank.
14. This might work for numbers, but not text
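The descent-counting idea behind the SUMPRODUCT trick, and the AND-based variant from the comments, are easy to state outside Excel. A minimal Python sketch (the helper names are ours, not from the post):

```python
def is_sorted_ascending(values):
    """Count 'descents' the way the SUMPRODUCT trick does: compare each
    entry with the next one and count places where the order is violated."""
    descents = sum(1 for a, b in zip(values, values[1:]) if a > b)
    return descents == 0

def is_sorted_either_way(values):
    """Mirror the AND-based array formulas: ascending or descending
    (non-strict; drop the equality for strict order)."""
    ascending = all(a <= b for a, b in zip(values, values[1:]))
    descending = all(a >= b for a, b in zip(values, values[1:]))
    return ascending or descending
```

As with the worksheet formulas, blanks and duplicates are exactly where the edge cases live: here a duplicate is handled by the non-strict comparison, while Excel's treatment of a blank as zero has no direct analogue.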
Cosmological Models of the Early Universe Using an Asymmetric Scalar Higgs Doublet with Potential Interactions Core Concepts This paper explores a class of cosmological models based on an asymmetric scalar Higgs doublet with potential interactions between its components, highlighting their potential to explain phenomena like the early universe's inflationary expansion and the formation of supermassive black holes. For interaction constant γ > 0.49, the models exhibit a finite future culminating in a Big Rip. For γ = 0.6, the Big Rip occurs at approximately t∞ ≈ 54.0. For γ = 1, the Big Rip occurs at approximately t∞ ≈ 27.3. In a model with γ = 0.49, a rebound point occurs at tb ≈ -11.25. How could these cosmological models be tested against observational data from the early universe, such as the cosmic microwave background radiation? Testing these cosmological models, specifically the Asymmetric Scalar Doublet (ASD) models with potential interaction (ASD(P)) against observational data, particularly the Cosmic Microwave Background (CMB) radiation, requires a multi-pronged approach: Predictions for Cosmological Parameters: ASD(P) models need to be evolved to calculate their predictions for key cosmological parameters like the spectral index (ns), its running (dns/dlnk), the tensor-to-scalar ratio (r), and the amplitude of scalar perturbations (As). These parameters are tightly constrained by CMB observations, especially from Planck. Discrepancies with observed values could rule out certain parameter spaces within the ASD(P) framework. Imprints on the CMB Power Spectrum: The "almost Euclidean cycle" phase, characterized by a nearly constant scale factor and low Hubble parameter, could leave distinct imprints on the CMB power spectrum. This phase might suppress the growth of perturbations at certain scales, leading to deviations from the standard ΛCDM model predictions. 
Analyzing these deviations and comparing them with high-precision CMB data from Planck and future missions like LiteBIRD and CMB-S4 is crucial. Generation of Primordial Gravitational Waves: The inflationary expansion and contraction phases in ASD(P) models could generate primordial gravitational waves. The amplitude and spectral shape of these waves depend on the model parameters. Detecting these waves, either directly through future space-based interferometers like LISA or indirectly through their imprint on the CMB B-mode polarization, would provide strong evidence for inflationary scenarios and constrain ASD(P) models. Formation of Supermassive Black Holes: The authors propose ASD(P) models as an alternative mechanism for the formation of supermassive black holes in the early universe. These models need to be further developed to make specific predictions about the mass distribution, clustering, and early evolution of these black holes. Comparing these predictions with observations of high-redshift quasars and the demographics of supermassive black holes can test the viability of this scenario. Connection to Particle Physics: ASD(P) models introduce new scalar fields and interactions. It's essential to explore their possible connections to particle physics beyond the Standard Model. For instance, investigating if these scalar fields could be incorporated into supersymmetric or string theory frameworks and if they could leave observable signatures in collider experiments like the LHC could provide further avenues for testing. Could alternative theories of gravity, such as modified gravity, provide a different explanation for the observed features of the early universe without requiring the introduction of scalar fields or modifications to the standard model of particle physics? 
Yes, alternative theories of gravity, collectively known as Modified Gravity, offer potential explanations for the observed features of the early universe without directly invoking new scalar fields or modifying the Standard Model of particle physics. Here are some examples: f(R) Gravity: This class of theories generalizes Einstein's General Relativity by replacing the Ricci scalar (R) in the Einstein-Hilbert action with a function of R, f(R). Appropriate choices of f(R) can lead to accelerated expansion in the early universe, mimicking inflation, without requiring a scalar field. Scalar-Tensor Theories: These theories involve a scalar field non-minimally coupled to gravity, unlike the minimally coupled scalar fields in ASD(P) models. This non-minimal coupling can lead to a time-varying effective gravitational constant, potentially explaining the observed accelerated expansion without directly modifying the matter sector. Higher-Dimensional Theories: Theories like Kaluza-Klein theory and braneworld scenarios propose that our universe is embedded in a higher-dimensional spacetime. The dynamics of extra dimensions can manifest as modifications to gravity in our 4-dimensional universe, potentially explaining early universe phenomena without introducing new fields in the 4D picture. Loop Quantum Cosmology: This approach applies principles of loop quantum gravity, a background-independent quantization of General Relativity, to cosmology. It suggests that the Big Bang singularity is replaced by a quantum bounce, potentially providing an alternative explanation for the origin of the universe's expansion. However, it's crucial to note that modified gravity theories also face challenges: Theoretical Consistency: Constructing theoretically consistent and well-motivated modified gravity theories is non-trivial. Many models suffer from instabilities, ghost degrees of freedom, or fine-tuning problems. 
Observational Constraints: Modified gravity theories need to pass the same stringent observational tests as standard cosmology, including CMB, supernovae data, and large-scale structure observations. Many models struggle to simultaneously fit all these datasets. If the universe did indeed go through a period of near-static existence in an "almost Euclidean cycle," what implications would this have for our understanding of the arrow of time and the origin of the universe's expansion? An "almost Euclidean cycle," a period of near-static existence in the early universe, would have profound implications for our understanding of the arrow of time and the origin of the universe's expansion: Arrow of Time: The arrow of time, the observed directionality from past to future, is often linked to the universe's expansion. A near-static phase challenges this connection. During this phase, the universe's entropy wouldn't necessarily increase, potentially leading to a temporary halt or even a reversal of the arrow of time. Understanding how the arrow of time could re-emerge from such a phase and reconnect with the subsequent expansion is a significant challenge. Origin of Expansion: The standard inflationary paradigm posits a rapid expansion driven by a scalar field's potential energy. An "almost Euclidean cycle" suggests a different picture. The universe might have existed in a quasi-stable state before transitioning to the current expansion phase. This transition could be triggered by quantum fluctuations, instabilities in the scalar field configuration, or other mechanisms yet to be understood. Cyclic Cosmology: The existence of an "almost Euclidean cycle" lends credence to cyclic cosmological models, where the universe undergoes periods of expansion and contraction. This phase could represent a transition point between these cycles, raising questions about the nature of the contracting phase and the mechanisms driving the bounce to a new expansion. 
Initial Conditions: The initial conditions of the universe, often considered fine-tuned for the observed universe, might need reevaluation. A near-static phase could erase or modify some of the initial imprints, potentially alleviating the fine-tuning problems but also requiring new explanations for the observed homogeneity and isotropy of the universe. Observational Signatures: Detecting specific observational signatures of this "almost Euclidean cycle" is crucial. These signatures could manifest as deviations from standard cosmology predictions in the CMB, the abundance of light elements, or the large-scale structure of the universe. Finding these signatures would revolutionize our understanding of the early universe.
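As a concrete reference point for the f(R) option discussed above, the standard textbook form of the action (not taken from the paper under discussion) replaces the Ricci scalar in the Einstein-Hilbert term with a function of it:

```latex
S \;=\; \frac{1}{2\kappa}\int d^4x\,\sqrt{-g}\,f(R) \;+\; S_m\!\left[g_{\mu\nu},\psi_m\right],
\qquad \kappa = 8\pi G .
```

General Relativity is recovered for f(R) = R, while a choice such as f(R) = R + αR² (the Starobinsky form) produces an early inflationary phase without introducing a separate scalar inflaton field — the sense in which modified gravity can mimic the role the ASD(P) scalar doublet plays.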
I have made a start on this already. Really what I am doing here is just getting my toes wet. What I want to do is:
• Algebra hierarchy -- build up monoids, groups, fields etc. and develop the basic theory of these structures; also give concrete implementations of things like integers and reals (isomorphisms with efficient representations would be nice too!)
• Algebraic algorithms -- produce algorithms that can be used to help me do mathematics (e.g. factoring, transformations, solving classes of equations)
• Wiki -- Dependently typed programming in the large has already been done; they came up with FTA and FTC -- this is a constructive proof that it works!
There are some projects already working on this kind of thing; help make this list bigger! Ignoring these would be so damned stupid I am not even going to think about it, but I want to point out that unless we make the most use of these projects as is possible, then anything we create will be an anti-social library that nobody can use except me -- that is not desirable. I think the only way to get this done is by collaborating with other people in a formally checked wiki. These projects are all 'beta' that don't let everyone get involved, so that's not very useful. Let's write it, perhaps using Coq as a subprocess. The directory structure of the Coq libraries would reflect that of the wiki. Edits will be accepted if they are checked and correct. I think once you have a spinal cord implemented, people will be able to add on parts quite easily. I see no reason why someone with a bit of time wouldn't pick up Disquisitiones Arithmeticae or whatever and fold it into the wiki. With a bit of work getting everything set up correctly I think we could make a useful (basic) algebra system.
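For a flavour of what the bottom of such an algebra hierarchy looks like in a dependently typed language, here is a minimal sketch in Lean (the post proposes Coq; the structure is directly analogous, and the names here are invented for illustration):

```lean
-- A monoid: a carrier type, an associative operation, and a two-sided unit.
-- The laws are fields of the structure, so every instance carries its proofs.
structure MyMonoid (α : Type) where
  op      : α → α → α
  e       : α
  assoc   : ∀ a b c : α, op (op a b) c = op a (op b c)
  e_left  : ∀ a : α, op e a = a
  e_right : ∀ a : α, op a e = a

-- A concrete implementation: the natural numbers under addition.
def natAddMonoid : MyMonoid Nat where
  op      := Nat.add
  e       := 0
  assoc   := Nat.add_assoc
  e_left  := Nat.zero_add
  e_right := Nat.add_zero
```

Groups, rings and fields then extend this pattern by adding operations and laws, which is exactly the "spinal cord" onto which wiki contributors could graft further theory.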
FullMatrix ()
FullMatrix (const size_type m, const size_type n)
void reinit (const size_type m, const size_type n)
void reinit (Mat A)
void clear ()
void set (const size_type i, const size_type j, const PetscScalar value)
void set (const std::vector< size_type > &indices, const FullMatrix< PetscScalar > &full_matrix, const bool elide_zero_values=false)
void set (const std::vector< size_type > &row_indices, const std::vector< size_type > &col_indices, const FullMatrix< PetscScalar > &full_matrix, const bool elide_zero_values=false)
void set (const size_type row, const std::vector< size_type > &col_indices, const std::vector< PetscScalar > &values, const bool elide_zero_values=false)
void set (const size_type row, const size_type n_cols, const size_type *col_indices, const PetscScalar *values, const bool elide_zero_values=false)
void add (const size_type i, const size_type j, const PetscScalar value)
void add (const std::vector< size_type > &indices, const FullMatrix< PetscScalar > &full_matrix, const bool elide_zero_values=true)
void add (const std::vector< size_type > &row_indices, const std::vector< size_type > &col_indices, const FullMatrix< PetscScalar > &full_matrix, const bool elide_zero_values=true)
void add (const size_type row, const std::vector< size_type > &col_indices, const std::vector< PetscScalar > &values, const bool elide_zero_values=true)
void add (const size_type row, const size_type n_cols, const size_type *col_indices, const PetscScalar *values, const bool elide_zero_values=true, const bool col_indices_are_sorted=false)
MatrixBase & add (const PetscScalar factor, const MatrixBase &other)
void clear_row (const size_type row, const PetscScalar new_diag_value=0)
void clear_rows (const ArrayView< const size_type > &rows, const PetscScalar new_diag_value=0)
void clear_rows_columns (const std::vector< size_type > &row_and_column_indices, const PetscScalar new_diag_value=0)
void compress (const VectorOperation::values operation)
PetscScalar operator() (const size_type i, const size_type j) const
PetscScalar el (const size_type i, const size_type j) const
PetscScalar diag_element (const size_type i) const
size_type m () const
size_type n () const
size_type local_size () const
std::pair< size_type, size_type > local_range () const
bool in_local_range (const size_type index) const
size_type local_domain_size () const
std::pair< size_type, size_type > local_domain () const
MPI_Comm get_mpi_communicator () const
std::uint64_t n_nonzero_elements () const
size_type row_length (const size_type row) const
PetscReal l1_norm () const
PetscReal linfty_norm () const
PetscReal frobenius_norm () const
PetscScalar matrix_norm_square (const VectorBase &v) const
PetscScalar matrix_scalar_product (const VectorBase &u, const VectorBase &v) const
PetscScalar trace () const
MatrixBase & operator*= (const PetscScalar factor)
MatrixBase & operator/= (const PetscScalar factor)
void vmult (VectorBase &dst, const VectorBase &src) const
void Tvmult (VectorBase &dst, const VectorBase &src) const
void vmult_add (VectorBase &dst, const VectorBase &src) const
void Tvmult_add (VectorBase &dst, const VectorBase &src) const
PetscScalar residual (VectorBase &dst, const VectorBase &x, const VectorBase &b) const
const_iterator begin () const
const_iterator begin (const size_type r) const
const_iterator end () const
const_iterator end (const size_type r) const
operator Mat () const
Mat & petsc_matrix ()
void transpose ()
PetscBool is_symmetric (const double tolerance=1.e-12)
PetscBool is_hermitian (const double tolerance=1.e-12)
void write_ascii (const PetscViewerFormat format=PETSC_VIEWER_DEFAULT)
void print (std::ostream &out, const bool alternative_output=false) const
std::size_t memory_consumption () const
template<class Archive > void serialize (Archive &ar, const unsigned int version)
Classes derived from Subscriptor provide a facility to subscribe to this object. This is mostly used by the SmartPointer class.
void subscribe (std::atomic< bool > *const validity, const std::string &identifier="") const
void unsubscribe (std::atomic< bool > *const validity, const std::string &identifier="") const
unsigned int n_subscriptions () const
template<typename StreamType > void list_subscribers (StreamType &stream) const
void list_subscribers () const
Implementation of a sequential dense matrix class based on PETSc. All the functionality is actually in the base class, except for the calls to generate a sequential dense matrix. This is possible since PETSc only works on an abstract matrix type and internally distributes to functions that do the actual work depending on the actual matrix type (much like using virtual functions). Only the functions creating a matrix of specific type differ, and are implemented in this particular class. Definition at line 48 of file petsc_full_matrix.h.
void PETScWrappers::MatrixBase::set (const std::vector< size_type > &indices, const FullMatrix< PetscScalar > &full_matrix, const bool elide_zero_values = false) [inherited]
Set all elements given in a FullMatrix<double> into the sparse matrix locations given by indices. In other words, this function writes the elements in full_matrix into the calling matrix, using the local-to-global indexing specified by indices for both the rows and the columns of the matrix. This function assumes a quadratic sparse matrix and a quadratic full_matrix, the usual situation in FE calculations. If the present object (from a derived class of this one) happens to be a sparse matrix, then this function adds some new entries to the matrix if they didn't exist before, very much in contrast to the SparseMatrix class which throws an error if the entry does not exist. The optional parameter elide_zero_values can be used to specify whether zero values should be inserted anyway or they should be filtered away.
The default value is false, i.e., even zero values are inserted.
void PETScWrappers::MatrixBase::add (const std::vector< size_type > &indices, const FullMatrix< PetscScalar > &full_matrix, const bool elide_zero_values = true) [inherited]
Add all elements given in a FullMatrix<double> into sparse matrix locations given by indices. In other words, this function adds the elements in full_matrix to the respective entries in the calling matrix, using the local-to-global indexing specified by indices for both the rows and the columns of the matrix. This function assumes a quadratic sparse matrix and a quadratic full_matrix, the usual situation in FE calculations. If the present object (from a derived class of this one) happens to be a sparse matrix, then this function adds some new entries to the matrix if they didn't exist before, very much in contrast to the SparseMatrix class which throws an error if the entry does not exist. The optional parameter elide_zero_values can be used to specify whether zero values should be added anyway or these should be filtered away and only non-zero data is added. The default value is true, i.e., zero values won't be added into the matrix.
PetscScalar PETScWrappers::MatrixBase::matrix_norm_square (const VectorBase &v) const [inherited]
Return the square of the norm of the vector \(v\) with respect to the norm induced by this matrix, i.e. \(\left(v,Mv\right)\). This is useful, e.g. in the finite element context, where the \(L_2\) norm of a function equals the matrix norm with respect to the mass matrix of the vector representing the nodal values of the finite element function. Obviously, the matrix needs to be quadratic for this operation. The implementation of this function is not as efficient as the one in the MatrixBase class used in deal.II (i.e. the original one, not the PETSc wrapper class) since PETSc doesn't support this operation and needs a temporary vector.
Note that if the current object represents a parallel distributed matrix (of type PETScWrappers::MPI::SparseMatrix), then the given vector has to be a distributed vector as well. Conversely, if the matrix is not distributed, then neither may the vector be. Definition at line 456 of file petsc_matrix_base.cc.
void PETScWrappers::MatrixBase::mmult (MatrixBase &C, const MatrixBase &B, const VectorBase &V) const [protected, inherited]
Base function to perform the matrix-matrix multiplication \(C = AB\), or, if a vector \(V\) whose size is compatible with B is given, \(C = A \text{diag}(V) B\), where \(\text{diag}(V)\) defines a diagonal matrix with the vector entries. This function assumes that the calling matrix \(A\) and \(B\) have compatible sizes. The size of \(C\) will be set within this function. The content as well as the sparsity pattern of the matrix \(C\) will be reset by this function, so make sure that the sparsity pattern is not used somewhere else in your program. This is an expensive operation, so think twice before you use this function. Definition at line 644 of file petsc_matrix_base.cc.
void PETScWrappers::MatrixBase::Tmmult (MatrixBase &C, const MatrixBase &B, const VectorBase &V) const [protected, inherited]
Base function to perform the matrix-matrix multiplication with the transpose of this, i.e., \(C = A^T B\), or, if an optional vector \(V\) whose size is compatible with \(B\) is given, \(C = A^T \text{diag}(V) B\), where \(\text{diag}(V)\) defines a diagonal matrix with the vector entries. This function assumes that the calling matrix \(A\) and \(B\) have compatible sizes. The size of \(C\) will be set within this function. The content as well as the sparsity pattern of the matrix \(C\) will be changed by this function, so make sure that the sparsity pattern is not used somewhere else in your program. This is an expensive operation, so think twice before you use this function. Definition at line 652 of file petsc_matrix_base.cc.
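To make the matrix_norm_square semantics concrete: the function returns the inner product \(\left(v, Mv\right)\). A plain-Python stand-in is sketched below; it is illustrative only — not the deal.II/PETSc API — and ignores all parallel-distribution concerns:

```python
def matrix_norm_square(M, v):
    """Return (v, M v): the squared norm of v in the inner product induced
    by the square matrix M (here given as a list of row lists)."""
    Mv = [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
    return sum(v[i] * Mv[i] for i in range(len(v)))

# With M the identity this reduces to the ordinary squared Euclidean norm.
identity = [[1.0, 0.0], [0.0, 1.0]]
assert matrix_norm_square(identity, [3.0, 4.0]) == 25.0
```

In the finite element setting mentioned above, taking M to be the mass matrix makes this quantity the squared \(L_2\) norm of the function whose nodal values are stored in v.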
Ratios and Proportion Practice Questions: Questions Section & Answers Ratios and Proportions Ratios and Proportion Practice Questions Ratio and proportion hold a special place in competitive exams. The questions from this topic is a very common occurrence. It is important to have enough practice for this questions to answer them in the exam. Here below are the ratios and proportion practice questions. Browse more Topics under Ratios And Proportions Ratio and Proportion Practice Questions Part 1: Basic ratio and proportion questions. Directions: In this section, the questions asked are the basic ratio and proportion questions that can be asked in the exam. 1. Divide Rs. 1870 in three parts such that half of the first part, one-third of the second part and one-sixth of the third part are equal. A. 270, 840, 1160       B. 341, 243, 245      C. 400, 800, 670     D. None of the above 2. A and B are the two alloys of copper and brass prepared by mixing metals in the proportion of 7:2 and 7:11 respectively. If the equal quantities of two alloys are melted to form a third alloy called C, then the proportion of copper and brass in C will be, A. 5:9      B. 5:7       C. 7:5        D. 9:5 3. The incomes of X and Y are in the ratio of 3:2 and their expenditures are in the ratio of 5:3. If each of them saves Rs. 1000, then, A’s income can be, A. Rs. 3000     B. Rs. 4000      C. Rs. 9000       D. Rs. 6000 4. The students in three batches in school is in the ratio of 2:3:5. If 20 students in each batch are increased than the ratio changes to 4:5:7. The total number of students in the three before the increase was, A. 100       B. 10      C. 90        D. 150 5. Divide the amount of Rs. 500 between P, Q, R, and S such that P and Q together get the thrice as much as R and S together. Q gets four times of what R geta and R gets 1.5 times as much as S. Now the value that Q gets will be A. 75       B. 125       C. 150        D. 300 6. Rs. 
2250 is divided among three friends Ajay, Vijay, and Raj in such a way that 1/6th of Ajay's share, 1/4th of Vijay's share and 2/5th of Raj's share are equal. Find Ajay's share.
A. Rs. 1080    B. Rs. 720    C. Rs. 450    D. Rs. 1240

7. After an increase of 7 in both the numerator and the denominator, a fraction becomes 3/4. What was the original fraction?
A. 5/12    B. 7/9    C. 2/5    D. 3/8

8. Divide Rs. 680 among P, Q, and R such that P gets 2/3 of what Q gets and Q gets 1/4 of what R gets. Find the share of R.
A. Rs. 480    B. Rs. 360    C. Rs. 420    D. Rs. 300

Answers: 1. D. None of the above  2. C. 7:5  3. D. Rs. 6000  4. A. 100  5. D. 300  6. A. Rs. 1080  7. C. 2/5  8. A. Rs. 480

Part 2: Proportion Practice Questions

Directions: For this section, questions related to proportions are asked.

1. If a/(b + c) = b/(c + a) = c/(a + b), then each fraction will be equal to:
A. (a + b + c)²    B. ½    C. ¼    D. 0

2. If a:b = c:d, then the value of (a² + b²)/(c² + d²) is:
A. ½    B. (a + b)/(c + d)    C. (a – b)/(c – d)    D. ab/cd

3. If 6x² + 6y² = 13xy, what is the ratio of x to y?
A. 1:4    B. 4:5    C. 3:2    D. 1:2

4. If a, b, c, d are in continued proportion, then (a – d)/(b – c) ≥ x. What is the value of x?
A. 2    B. 1    C. 0    D. 3

5. If P varies as R, and Q varies as R, then which of the following is false?
A. (P + Q) varies as R    B. (P – Q) varies as 1/R    C. √PQ varies as R    D. PQ varies as R²

6. If a and b are positive integers, then √2 always lies between:
A. (a + b)/(a – b) and ab    B. a/b and (a + 2b)/(a + b)    C. a and b    D. ab/(a + b) and (a – b)/ab

Answers: 1. B. ½  2. D. ab/cd  3. C. 3:2  4. D. 3  5. B. (P – Q) varies as 1/R  6. B. a/b and (a + 2b)/(a + b)

Part 3: Miscellaneous ratio and proportion questions

Directions: In this section, various types of ratio and proportion questions are given.

1.
If 4 examiners can examine a certain number of answer books in 8 days by working 5 hours a day, for how many hours a day would 2 examiners have to work in order to examine twice the number of answer books in 20 days?
A. 8    B. 6    C. 7½    D. 9

2. Three friends rent a farm for Rs. 7000 per year. A puts 110 cows in the farm for 3 months, B puts 110 cows for 6 months and C puts 440 cows for 3 months. Find the percentage of the expenditure that A should pay.
A. 20%    B. 16.66%    C. 14.28%    D. 11.01%

3. At constant temperature, the pressure of a definite mass of gas is inversely proportional to its volume. If the pressure is reduced by 20%, find the corresponding change in the volume.
A. +25%    B. -25%    C. +16.66%    D. -16.66%

4. A group of people row a certain course up the river in 84 minutes; they can row the same course downstream in 9 minutes less than they can row it in still water. How long would they take to row down with the river?
A. 45 or 23 minutes    B. 60 minutes    C. 19 minutes    D. 63 or 12 minutes

5. If 30 men working 7 hours a day can do a piece of work in 18 days, in how many days will 21 men working 8 hours a day do the same work?
A. 30 days    B. 22.5 days    C. 24 days    D. 45 days

6. If the ratio of the sines of the angles of a triangle is 1:1:√2, then the ratio of the square of the greatest side to the sum of the squares of the other two sides is:
A. 3:4    B. 2:1    C. 1:1    D. 1:2

7. Milk and water are mixed in the ratio of 5:1. On adding 5 litres of water, the ratio of milk to water becomes 5:2. The quantity of milk in the mixture is:
A. 16 litres    B. 25 litres    C. 24 litres    D. 22.75 litres

Answers: 1. A. 8  2. C. 14.28%  3. A. +25%  4. D. 63 or 12 minutes  5. B. 22.5 days  6. C. 1:1  7. B. 25 litres

hope says: could someone please explain the following questions and answers?

1. Raju and Sanjay had 35% and 45% more rupees than Ajay respectively.
What is the ratio of Raju's and Sanjay's money?
A. 7:9    B. 27:29    C. 37:39    D. 27:39
The correct answer is C.

2. Two men earn yearly salaries in the ratio 10:13. If their spending is in the ratio of 4:5 and the man spending the lesser of the two saves Rs. 6000 while the other one saves Rs. 8000, then find the salary of the person who is higher paid.
A. Rs. 12000    B. Rs. 14000    C. Rs. 13000    D. Rs. 11000
The correct answer is C.

3. If the ratio of the ages of Priya and Sunanda is 6:5 at present, and fifteen years from now the ratio will change to 9:8, then find Priya's current age.
A. 22 years    B. 30 years    C. 34 years    D. 38 years
The correct answer is B.

4. P, Q, and R played cricket. P's runs are to Q's runs, and Q's runs are to R's runs, as 3:2. All of them together scored a total of 342 runs. How many runs did P make?
A. 140    B. 154    C. 168    D. 162
The correct answer is D.
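A few of the answer-key entries above can be sanity-checked with exact arithmetic; a quick sketch (the function names are mine, not part of the worksheet):

```python
# Quick numeric checks for two of the worked answers above.
from fractions import Fraction

def batch_total():
    """Part 1, Q4: batches in ratio 2:3:5 become 4:5:7 after adding 20 each.
    Solve (2x + 20)/(3x + 20) = 4/5, i.e. 10x + 100 = 12x + 80, then
    return the original total 2x + 3x + 5x = 10x."""
    x = Fraction(100 - 80, 12 - 10)
    return 10 * x

def days_needed():
    """Part 3, Q5: total man-hours is constant:
    30 men * 7 h/day * 18 days = 21 men * 8 h/day * d days."""
    return Fraction(30 * 7 * 18, 21 * 8)

print(batch_total())          # -> 100
print(float(days_needed()))   # -> 22.5
```

Both match the stated answers (A. 100 students and B. 22.5 days).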
Two Crossover-Sample Means

Application: A 2×2 cross-over design contains two sequences (treatment orderings) and two time periods (occasions). One sequence receives treatment A followed by treatment B; the other sequence receives B and then A. This procedure is used to test the hypotheses of equality, H0: μ[2] – μ[1] = 0 versus H1: μ[2] – μ[1] ≠ 0.

1. Enter
a) the value of α, the probability of type I error
b) the value of β, the probability of type II error
c) the value of the allowable difference
d) the value of the population variance
2. Click the button "Calculate" to obtain the result: the sample size of each group, n.

Formula (*): n = (z[1–α/2] + z[1–β])² σ² / (2 (μ[2] – μ[1])²), rounded up to the nearest integer.

α: The probability of type I error (significance level) is the probability of rejecting a true null hypothesis.

β: The probability of type II error (1 – power of the test) is the probability of failing to reject a false null hypothesis.

μ[2] – μ[1]: The allowable difference is the true mean difference between a test drug (μ[2]) and a placebo control or active control agent (μ[1]).

Example 1: Suppose a given change in low-density lipoproteins (LDL) is considered the clinically meaningful difference. By using (*), and assuming that the standard deviation is 10% (i.e., the population variance is 0.01), the sample size of each group required to achieve 80% power (β = 0.2) at α = 0.05 for correctly detecting a difference of μ[2] – μ[1] = 0.05 is obtained by normal approximation as n = 16.

Reference: Chow, Shao and Wang, Sample Size Calculations in Clinical Research, Taylor & Francis, NY (2003), pages 64–65.
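A sketch of the calculator's computation in standard-library Python. The formula here is my transcription of the normal-approximation sample-size formula for a 2×2 crossover equality test (Chow, Shao and Wang); note that the stated n = 16 is reproduced when the detectable difference is taken as 0.05 (a 5% change) on the same scale as the 10% standard deviation:

```python
from math import ceil
from statistics import NormalDist

def crossover_n(alpha, beta, sigma, diff):
    """Per-group sample size for a 2x2 crossover test of equality:
    n = (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / (2 * diff^2),
    rounded up to the next integer."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    num = (z(1 - alpha / 2) + z(1 - beta)) ** 2 * sigma ** 2
    return ceil(num / (2 * diff ** 2))

# Example 1: sigma = 0.1 (variance 0.01), alpha = 0.05, 80% power, diff = 0.05
print(crossover_n(0.05, 0.20, 0.1, 0.05))  # -> 16
```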
Square-0 - Online 3D Puzzle - Grubiks The Square-0 (a.k.a. SQ0) is a cube-shaped twisty puzzle. It is a lower-order version of the famous shape-shifting puzzle Square-1. It was invented in 2015 by YouTuber DGCubes (Daniel Goodman) for a school project where students were given the task to invent something and then present it to the class in Spanish. To make the prototype, Goodman bought a Square-1 puzzle and simply "bandaged" each corner piece with an adjacent edge piece. We say "simply" because turning a Square-1 into a Square-0 is not hard to do, but we do think the original idea is GENIUS! Apparently we're also not alone on that since ShengShou decided to mass-produce the puzzle and nowadays you can buy it directly under the name "SengSo SQ0". It might not be very obvious at first glance, but the Square-0 can actually be thought of as a bandaged Tower Cube (2x2x3) shape-mod. The top and bottom faces each have 4 identical corner pieces, and the puzzle itself is made of 3 layers. The only difference between the mechanism of the Square-0 and the one found in the Tower Cube (2x2x3) is that the former only has two parts in its middle layer. This makes the Square-0 even easier to solve than the Tower Cube (2x2x3), which is already considered an easy puzzle to solve. Our computer simulation shows that the Square-0 only has around 80k possible combinations. The Square-0 is not an official WCA puzzle. Its predecessor, the Square-1, is the only shape-shifting official WCA puzzle. At the time of writing, we couldn't find any unofficial record for the fastest solve of this puzzle.
Complementary Angle | Definition & Meaning

Properties and Types of Complementary Angles

Complementary angles have the following properties:

• If the sum of two angles is equal to 90 degrees, they are said to be complementary.
• They may or may not be adjacent.
• Three or more angles cannot be complementary, even if their sum is 90 degrees.
• When two angles are complementary, each angle is called the "complement" of the other.
• The two acute angles of a right-angled triangle are complementary.

There are two types of complementary angles: adjacent complementary angles and non-adjacent complementary angles. Two angles whose sum is 90° and which share a common vertex and arm are known as adjacent complementary angles, as shown in figure 3. In figure 3, ∠DAB is 18.12° and ∠CAD is 72.24°, and the sum of the angles is 90°. If two complementary angles are not adjacent to each other, they are non-adjacent complementary angles, as illustrated in figure 4. In figure 4, both angles are non-adjacent, and their sum is equal to 90°.

Complement of an Angle

Each of two complementary angles is referred to as the "complement" of the other, and since the sum of two complementary angles equals 90 degrees, an angle's complement may be calculated by subtracting it from 90 degrees: the complement of x° is (90 – x)°. For example, the complement of 57° is found by subtracting it from 90°: 90° – 57° = 33°. Therefore, 33° is the complement of a 57° angle.

Proof of the Complementary Angles Theorem

The theorem states that two angles which are complementary to the same angle are equal to each other. Consider the diagram illustrated in figure 2. Assume that ∠CBD is complementary to both ∠ABC and ∠DBE. By the definition of complementary angles, ∠CBD + ∠ABC = 90° and ∠CBD + ∠DBE = 90°. It follows that ∠CBD + ∠ABC = ∠CBD + ∠DBE, and subtracting ∠CBD from both sides gives ∠ABC = ∠DBE; hence the theorem is proved.
Difference Between Supplementary and Complementary Angles

Two angles are called supplementary if their sum is 180°, while two angles are complements of each other if their sum is 90°. The supplement of an angle x° is (180 – x)°, whereas the complement of an angle x° is (90 – x)°. Two supplementary angles can be joined to form a straight angle, whereas two complementary angles can be joined to form a right angle.

Complementary Angles and Their Importance in Mathematics

Complementary angles are one of the most important ideas in trigonometry, a significant area of mathematics with a wide range of applications. Trigonometry is essentially the study of the relationship between the sides and angles of the right-angled triangle. Since complementary angles are angles whose sum is 90°, they appear in fields as varied as astronomical research; architects, surveyors, astronauts, physicists, engineers, and even crime scene investigators use complementary angles in a variety of professions.

Significance of Right-Angled Triangles

More specifically, right-angled triangles, which have one 90° internal angle, are the subject of trigonometry. We may use trigonometry to determine any missing or unknown side lengths or angles in such a triangle. The three sides of a right-angled triangle are, in general, of different lengths. The side opposite the right angle is the hypotenuse; of the two sides forming the right angle, the vertical side is called the perpendicular and the horizontal side on which it stands is the base, as shown in figure 1. In figure 1, the angle between the line segments h and f is 45° and the angle between g and h is 45°; the sum of the two angles is 90°, so each angle is the complement of the other.
The trigonometric ratios relating an acute angle of a right-angled triangle to its sides are given below:

$\sin(\theta) = \dfrac{\text{Perpendicular}}{\text{Hypotenuse}}$

$\cos(\theta) = \dfrac{\text{Base}}{\text{Hypotenuse}}$

$\tan(\theta) = \dfrac{\text{Perpendicular}}{\text{Base}}$

$\csc(\theta) = \dfrac{\text{Hypotenuse}}{\text{Perpendicular}}$

$\sec(\theta) = \dfrac{\text{Hypotenuse}}{\text{Base}}$

$\cot(\theta) = \dfrac{\text{Base}}{\text{Perpendicular}}$

Finding Angles Given That They Are Complementary

Example 1

Calculate the values of two complementary angles A and B if ∠A = (2x – 18)° and ∠B = (5x – 52)°.

We know that the sum of two complementary angles is 90°:
∠A + ∠B = 90°
(2x – 18)° + (5x – 52)° = 90°
7x – 70° = 90°
7x = 160°
x = 160°/7 ≈ 22.857°
∠A = (2 × 22.857 – 18)° = 27.714°
∠B = (5 × 22.857 – 52)° = 62.286°
Hence, ∠A = 27.714° and ∠B = 62.286°.

Example 2

Find the value of x in figure 5.

As illustrated in the figure, the angles x and 41.36° are complementary, so their sum is 90°:
x + 41.36° = 90°
x = 90° – 41.36° = 48.64°
Therefore, the value of the angle x is 48.64°.

Example 3

Find x if the angles illustrated in figure 6 are complementary.

Since the sum of two complementary angles is 90°:
$\dfrac{x}{2} + \dfrac{x}{3} = 90°$
$\dfrac{5x}{6} = 90°$
x = 90° × $\dfrac{6}{5}$ = 108°
Therefore, the value of x is 108°.

Example 4

Find the values of angles A and B such that ∠A = (x – 25)° and ∠B = (2x − 25)°, if A and B are complements of each other.

Since ∠A and ∠B are complementary, their sum is 90°:
∠A + ∠B = 90°
(x – 25)° + (2x – 25)° = 90°
3x – 50° = 90°
3x = 140°
x ≈ 46.67°
Thus, ∠A = 46.67 – 25 = 21.67° and ∠B = 2 × 46.67 – 25 = 68.33°. Therefore, ∠A and ∠B are 21.67° and 68.33°, respectively.

All images/mathematical drawings were created with GeoGebra.
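The complement computations in these examples reduce to a single subtraction; a minimal sketch (the function name and the input check are my own):

```python
def complement(angle_deg):
    """Return the complement of an acute angle: 90 - angle (degrees)."""
    if not 0 < angle_deg < 90:
        raise ValueError("only acute angles have a complement")
    return 90 - angle_deg

# The worked value from the text: the complement of 57 degrees is 33 degrees.
print(complement(57))  # -> 33

# Example 1: angles (2x - 18) and (5x - 52) are complementary, so 7x = 160.
x = 160 / 7
a, b = 2 * x - 18, 5 * x - 52
print(round(a, 3), round(b, 3))  # -> 27.714 62.286
```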
NKS 2004 Abstracts.nb Where Are the Good CA Rules for the Density Classification Task? Pedro Paulo Balbi de Oliveira Mackenzie University A widely studied computational problem in the context of cellular automata (CAs) is the so-called Density Classification Task (DCT, for short). In its standard formulation it states that a binary, one-dimensional CA has to converge to a final configuration of all cells in state 1 when the initial configuration has more 1s than 0s, and to a configuration of all 0s whenever the initial configuration has more 0s than 1s. Although the DCT was proposed in 1978 [Gacs, Kurdyumov and Levin, 1978], it was not until 1995 that the solution of the problem, as formulated, was proven not to be possible [Land and Belew, 1995]. Before the original proof was derived, a few research groups were trying to find a rule that could solve the DCT. With that possibility precluded, however, the search shifted towards looking for the rule that could solve the DCT as nearly perfectly as possible. Using different search techniques, mostly evolutionary computation-based methods, various groups have put their methods to the test, and over time good rules have been found. For historical reasons, the search methods employed have been scanning the huge radius-3, 2-state CA space, the best rule found so far being due to Juillé and Pollack [1998], with a correct classification score of about 86%. The fact is that, despite the empirical and analytical efforts carried out during the last years, the best imperfect rule for the DCT remains unknown. Our concern herein is with ways that might help a search for better imperfect rules for the DCT than those currently known. This is done by trying to answer the following questions: could good, known DCT rules be used as an indication of the most promising regions of the search space? And could conservative rules in the search space provide an indication of regions to be avoided during the search?
These questions are addressed by relying on analyses of good, known rules from the radius-3 space and on a search carried out in the radius-2 space.

[Capcarrère and Sipper, 2001] M.S. Capcarrère and M. Sipper. "Necessary conditions for density classification by cellular automata." Physical Review E, 64(3):036113/1–036113/4, 2001.

[Fuks, 2000] H. Fuks. "A class of cellular automata equivalent to deterministic particle systems." In: S. Feng, A.T. Lawniczak and S.R.S. Varadhan, eds. Hydrodynamic Limits and Related Topics, Amer. Math. Soc., Providence, RI, USA, 57–69, 2000.

[Gacs, Kurdyumov and Levin, 1978] P. Gacs, G.L. Kurdyumov and L.A. Levin. Problemy Peredachi Informatsii, 14:92–98, 1978.

[Juillé and Pollack, 1998] H. Juillé and J.B. Pollack. "Coevolving the 'ideal' trainer: Application to the discovery of cellular automata rules." In: J.R. Koza, W. Banzhaf, K. Chellapilla, M. Dorigo, D.B. Fogel, M.H. Garzon, D.E. Goldberg, H. Iba and R.L. Riolo (eds.). Genetic Programming 1998: Proceedings of the Third Annual Conference, San Francisco, CA: Morgan Kaufmann, 1998.

[Land and Belew, 1995] M.W.S. Land and R.K. Belew. "No two-state CA for density classification exists." Physical Review Letters, 74(25):5148–51, 1995.

Created by Mathematica (April 20, 2004)
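To make the task concrete, here is a sketch of the classic GKL rule from the 1978 reference above, a well-known baseline for the DCT. The helper names are mine, and this only illustrates the DCT setup, not any of the evolved rules discussed in the abstract:

```python
def gkl_step(config):
    """One synchronous update of the GKL rule on a circular configuration.
    A cell in state 0 takes the majority vote of itself and its first and
    third neighbors to the left; a cell in state 1 uses its first and
    third neighbors to the right."""
    n = len(config)
    nxt = []
    for i, s in enumerate(config):
        if s == 0:
            votes = (s, config[(i - 1) % n], config[(i - 3) % n])
        else:
            votes = (s, config[(i + 1) % n], config[(i + 3) % n])
        nxt.append(1 if sum(votes) >= 2 else 0)
    return nxt

def classify(config, max_steps=200):
    """Iterate until a homogeneous fixed point is reached (or give up)."""
    for _ in range(max_steps):
        if sum(config) in (0, len(config)):
            return config[0]
        config = gkl_step(config)
    return None  # did not settle within max_steps

# The homogeneous configurations are fixed points, as the task requires.
print(classify([0] * 21))  # -> 0
print(classify([1] * 21))  # -> 1
```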
Boyle's Law Worksheet

Robert Boyle observed the relationship between the pressure and volume of a fixed amount of gas: when temperature is held constant, the pressure is inversely proportional to the volume. For example, a piston of gas at a certain pressure and volume will have half the volume when the pressure is doubled. In symbols:

P1 V1 = P2 V2

Useful pressure conversions: 1 atm = 760.0 mm Hg = 101.3 kPa.

Solve the following problems, assuming constant temperature:

1. Solve the Boyle's law equation for each unknown quantity.
2. A gas occupies 12.3 liters at a pressure of 40.0 mm Hg. What is the volume when the pressure changes?
3. If 22.5 L of nitrogen at 748 mm Hg are compressed, what is the new volume at the new pressure?

This worksheet set guides students through the following topics: what Boyle's law is, what factors it involves, and how to explain it. You can do the exercises online or download the worksheet as a PDF. Boyle's law is an important concept in basic physics, and this quiz and worksheet will help test your understanding of it.
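Such worksheet problems are solved by rearranging P1V1 = P2V2; a minimal sketch (the final pressure of 60.0 mm Hg is my own illustrative value, since the problem statements in the source are truncated):

```python
def boyle_v2(p1, v1, p2):
    """Boyle's law at constant temperature: P1*V1 = P2*V2, solved for V2.
    Pressures must share one unit and volumes share another."""
    return p1 * v1 / p2

# A gas occupying 12.3 L at 40.0 mm Hg, compressed to an assumed 60.0 mm Hg:
print(boyle_v2(40.0, 12.3, 60.0))  # -> 8.2 (litres)
```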
09-28-11 - Algorithm - Next Index with Lower Value

You are given an array of integers A[]. For each i, find the next entry j (j > i) such that the value is lower (A[j] < A[i]). Fill out B[i] = j for all i. For array size N this can be done in O(N). Here's how; I'll call this algorithm "stack of fences".

Walk the array A[] from start to finish in one pass. At i, if the next entry (A[i+1]) is lower than the current (A[i]), then you have the ordering you want immediately and you just assign B[i] = i+1. If not, then you have a "fence": a value A[i] which is seeking a lower value. You don't go looking for it immediately; instead you just set the current fence_value to A[i] and move on via i++.

At each position you visit while you have a fence, you check whether the current A[i] < fence_value. If so, you set B[fence_pos] = i; you have found the successor to that fence. If you have a fence and find another value which needs to be a fence (because it's lower than its successor), you push the previous fence on a stack and set the current one as the active fence. Then when you find a value that satisfies the new fence, you pop the fence stack and also check that fence to see if it was satisfied as well. This stack can be stored in place in the B[] array, because B[] is not yet filled out at positions that are fences.

The pseudocode is:

fence_val = fence_pos = -1; // -1 means "none"
for(int i=1;i<size;i++)
{
    int i_prev = i-1;
    int prev = A[i_prev];
    int cur = A[i];
    if ( cur > prev )
    {
        // make new fence and push stack
        B[i_prev] = fence_pos;
        fence_pos = i_prev;
        fence_val = prev;
    }
    else
    {
        // descending, cur is good :
        B[i_prev] = i;
        while( cur < fence_val )
        {
            int prev_fence = B[fence_pos];
            B[fence_pos] = i;
            fence_pos = prev_fence;
            if ( fence_pos == -1 ) fence_val = -1;
            else fence_val = A[fence_pos];
        }
    }
}

This is useful in string matching, as we will see forthwith.

5 comments:

b0b0b0b said...

I'm not sure this is O(N). What if the input is an array starting with even integers from 2..M and ending with odd integers from 1..M-1?
It's pretty trivial to prove: every iteration writes to B[i] at least once; each B[i] can be written to at most twice (once as part of the fence stack and once with its final correct value); and there are N elements of B[]. Therefore the number of operations is >= N and <= 2N. The full 2N time is taken by

I wish I could find the version of this algorithm that works for windowed ranges, not just monotonic conditions.

Thanks for all your posts, they've been very helpful. I implemented this algorithm, and discovered that it was incomplete. I'm sure you handled this in your local implementation years ago, but I wanted to document it for others who find this helpful as well.

The problem happens when the stack is not empty at the end of the loop. In this case, the stack holds all the values "i" that don't yet have any "j" such that "j > i && A[j] < A[i]". There are no more "j" values to consider, so this condition will never become true. In your notation, "B[i] = j", so we want the invariant on exit to be that "B[i] > i && A[B[i]] < A[i]". Since the stack is stored in the B[] array in the order encountered, and since i increments, these entries for B[i] do satisfy "B[i] > i". The fact that they are still on the stack means "!(B[i] > i && A[B[i]] < A[i])". Since "B[i] > i", this proves "A[B[i]] >= A[i]". If the "A[i]" array has no duplicates, as in the suffix array use case, this becomes a strict inequality: "A[B[i]] > A[i]".

Again, in the suffix array case, this means the array that was supposed to only hold pointers to longest *past* matches also has some pointers to *future* matches. There are two fixes I can think of. The most obvious is, after exiting the loop, to pop the stack until it is empty, setting B[i] to "none" as you go. Another solution is to have a sentinel token A[end] at the end of the list that satisfies A[end] < A[i] for all i != end. If "end" is the same as "none", these two solutions are effectively equivalent.
The first solution is more obvious, the second solution uses less code and avoids special case handling. Yep, you're totally right, I failed to mention that. Both your solutions are good. Lots of stuff in suffix trees & suffix arrays works neatly if you have a sentinel token at the end, unfortunately we don't get a byte that's > 255 ;) So in practice the code gets a lot uglier with lots of special case handling for the end-of-string case that would've been handled very neatly with a sentinel. I use the first option of manually bubbling back a null entry at the end. It's in the String Match Test code that I released, in MakeNextLowerPosArray.
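The same result, including the end-of-array fix discussed in the comments (unmatched entries are explicitly left at "none"), can be written as a conventional monotonic stack. This sketch is mine, not the released MakeNextLowerPosArray code:

```python
def next_lower_pos(A, none=-1):
    """For each i, B[i] = smallest j > i with A[j] < A[i], else `none`.
    Each index is pushed and popped at most once, so this runs in O(N)."""
    B = [none] * len(A)
    stack = []  # indices whose next-lower position is not yet known
    for j, v in enumerate(A):
        # Strict comparison: equal values do not satisfy A[j] < A[i].
        while stack and A[stack[-1]] > v:
            B[stack.pop()] = j
        stack.append(j)
    return B  # indices left on the stack keep B[i] = none

print(next_lower_pos([3, 1, 4, 1, 5, 9, 2, 6]))
# -> [1, -1, 3, -1, 6, 6, -1, -1]
```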
LINEST function This article describes the formula syntax and usage of the LINEST function in Microsoft Excel. The LINEST function calculates the statistics for a line by using the "least squares" method to calculate a straight line that best fits your data, and then returns an array that describes the line. You can also combine LINEST with other functions to calculate the statistics for other types of models that are linear in the unknown parameters, including polynomial, logarithmic, exponential, and power series. Because this function returns an array of values, it must be entered as an array formula. Instructions follow the examples in this article. The equation for the line is: y = mx + b y = m1x1 + m2x2 + ... + b if there are multiple ranges of x-values, where the dependent y-values are a function of the independent x-values. The m-values are coefficients corresponding to each x-value, and b is a constant value. Note that y, x, and m can be vectors. The array that the LINEST function returns is {mn,mn-1,...,m1,b}. LINEST can also return additional regression statistics. LINEST(known_y's, [known_x's], [const], [stats]) The LINEST function syntax has the following arguments: • known_y's Required. The set of y-values that you already know in the relationship y = mx + b. □ If the range of known_y's is in a single column, each column of known_x's is interpreted as a separate variable. □ If the range of known_y's is contained in a single row, each row of known_x's is interpreted as a separate variable. • known_x's Optional. A set of x-values that you may already know in the relationship y = mx + b. □ The range of known_x's can include one or more sets of variables. If only one variable is used, known_y's and known_x's can be ranges of any shape, as long as they have equal dimensions. If more than one variable is used, known_y's must be a vector (that is, a range with a height of one row or a width of one column). 
□ If known_x's is omitted, it is assumed to be the array {1,2,3,...} that is the same size as known_y's.

• const Optional. A logical value specifying whether to force the constant b to equal 0.

□ If const is TRUE or omitted, b is calculated normally.

□ If const is FALSE, b is set equal to 0 and the m-values are adjusted to fit y = mx.

• stats Optional. A logical value specifying whether to return additional regression statistics.

□ If stats is TRUE, LINEST returns the additional regression statistics; as a result, the returned array is {mn,mn-1,...,m1,b;sen,sen-1,...,se1,seb;r^2,sey;F,df;ssreg,ssresid}.

□ If stats is FALSE or omitted, LINEST returns only the m-coefficients and the constant b.

The additional regression statistics are as follows.

se1,se2,...,sen: The standard error values for the coefficients m1,m2,...,mn.

seb: The standard error value for the constant b (seb = #N/A when const is FALSE).

r^2: The coefficient of determination. Compares estimated and actual y-values, and ranges in value from 0 to 1. If it is 1, there is a perfect correlation in the sample — there is no difference between the estimated y-value and the actual y-value. At the other extreme, if the coefficient of determination is 0, the regression equation is not helpful in predicting a y-value. For information about how r^2 is calculated, see "Remarks," later in this topic.

sey: The standard error for the y estimate.

F: The F statistic, or the F-observed value. Use the F statistic to determine whether the observed relationship between the dependent and independent variables occurs by chance.

df: The degrees of freedom. Use the degrees of freedom to help you find F-critical values in a statistical table. Compare the values you find in the table to the F statistic returned by LINEST to determine a confidence level for the model. For information about how df is calculated, see "Remarks," later in this topic. Example 4 shows use of F and df.
ssreg The regression sum of squares.

ssresid The residual sum of squares. For information about how ssreg and ssresid are calculated, see "Remarks," later in this topic.

The following illustration shows the order in which the additional regression statistics are returned.

• You can describe any straight line with the slope and the y-intercept:

Slope (m): To find the slope of a line, often written as m, take two points on the line, (x1, y1) and (x2, y2); the slope is equal to (y2 - y1)/(x2 - x1).

Y-intercept (b): The y-intercept of a line, often written as b, is the value of y at the point where the line crosses the y-axis.

The equation of a straight line is y = mx + b. Once you know the values of m and b, you can calculate any point on the line by plugging the y- or x-value into that equation. You can also use the TREND function.

• When you have only one independent x-variable, you can obtain the slope and y-intercept values directly by using the following formulas:

Slope: =INDEX(LINEST(known_y's,known_x's),1)
Y-intercept: =INDEX(LINEST(known_y's,known_x's),2)

• The accuracy of the line calculated by the LINEST function depends on the degree of scatter in your data. The more linear the data, the more accurate the LINEST model. LINEST uses the method of least squares for determining the best fit for the data. When you have only one independent x-variable, the calculations for m and b are based on the following formulas:

m = Σ(x - x̄)(y - ȳ) / Σ(x - x̄)²
b = ȳ - m·x̄

where x̄ and ȳ are sample means; that is, x̄ = AVERAGE(known_x's) and ȳ = AVERAGE(known_y's).

• The line- and curve-fitting functions LINEST and LOGEST can calculate the best straight line or exponential curve that fits your data. However, you have to decide which of the two results best fits your data. You can calculate TREND(known_y's,known_x's) for a straight line, or GROWTH(known_y's,known_x's) for an exponential curve.
These functions, without the new_x's argument, return an array of y-values predicted along that line or curve at your actual data points. You can then compare the predicted values with the actual values. You may want to chart them both for a visual comparison.

• In regression analysis, Excel calculates for each point the squared difference between the y-value estimated for that point and its actual y-value. The sum of these squared differences is called the residual sum of squares, ssresid. Excel then calculates the total sum of squares, sstotal. When the const argument = TRUE or is omitted, the total sum of squares is the sum of the squared differences between the actual y-values and the average of the y-values. When the const argument = FALSE, the total sum of squares is the sum of the squares of the actual y-values (without subtracting the average y-value from each individual y-value). The regression sum of squares, ssreg, can then be found from: ssreg = sstotal - ssresid. The smaller the residual sum of squares is, compared with the total sum of squares, the larger the value of the coefficient of determination, r^2, which is an indicator of how well the equation resulting from the regression analysis explains the relationship among the variables. The value of r^2 equals ssreg/sstotal.

• In some cases, one or more of the X columns (assume that Y's and X's are in columns) may have no additional predictive value in the presence of the other X columns. In other words, eliminating one or more X columns might lead to predicted Y values that are equally accurate. In that case these redundant X columns should be omitted from the regression model. This phenomenon is called "collinearity" because any redundant X column can be expressed as a sum of multiples of the non-redundant X columns. The LINEST function checks for collinearity and removes any redundant X columns from the regression model when it identifies them.
Removed X columns can be recognized in LINEST output as having 0 coefficients in addition to 0 se values. If one or more columns are removed as redundant, df is affected because df depends on the number of X columns actually used for predictive purposes. For details on the computation of df, see Example 4. If df is changed because redundant X columns are removed, values of sey and F are also affected. Collinearity should be relatively rare in practice. However, one case where it is more likely to arise is when some X columns contain only 0 and 1 values as indicators of whether a subject in an experiment is or is not a member of a particular group. If const = TRUE or is omitted, the LINEST function effectively inserts an additional X column of all 1 values to model the intercept. If you have a column with a 1 for each subject if male, or 0 if not, and you also have a column with a 1 for each subject if female, or 0 if not, this latter column is redundant because entries in it can be obtained from subtracting the entry in the “male indicator” column from the entry in the additional column of all 1 values added by the LINEST function. • The value of df is calculated as follows, when no X columns are removed from the model due to collinearity: if there are k columns of known_x’s and const = TRUE or is omitted, df = n – k – 1. If const = FALSE, df = n - k. In both cases, each X column that was removed due to collinearity increases the value of df by 1. • When entering an array constant (such as known_x's) as an argument, use commas to separate values that are contained in the same row and semicolons to separate rows. Separator characters may be different depending on your regional settings. • Note that the y-values predicted by the regression equation may not be valid if they are outside the range of the y-values you used to determine the equation. 
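The one-variable least-squares calculation and the df = n - k - 1 rule described above can be checked with a short script. The following is an illustrative Python sketch, not Excel's implementation; the helper name `linest_simple` and the reuse of the sales data from Example 2 later in this article are choices made for this sketch.

```python
# Least-squares slope and intercept for one x-variable, following the
# formulas m = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2) and
# b = ybar - m * xbar, plus df = n - k - 1 for k = 1 and const = TRUE.
# A plain-Python sketch for checking results, not Excel's LINEST itself.

def linest_simple(ys, xs):
    n = len(ys)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    m = sxy / sxx                 # slope
    b = ybar - m * xbar           # y-intercept
    df = n - 1 - 1                # one x-column, const = TRUE
    return m, b, df

# Sales data from Example 2 in this article: months 1-6.
m, b, df = linest_simple([3100, 4500, 4400, 5400, 7500, 8100],
                         [1, 2, 3, 4, 5, 6])
print(m, b, df)  # 1000.0 2000.0 4
```

With these values, the month-9 forecast m*9 + b = 11,000 matches Example 2's result.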
• The underlying algorithm used in the LINEST function is different from the underlying algorithm used in the SLOPE and INTERCEPT functions. The difference between these algorithms can lead to different results when data is undetermined and collinear. For example, if the data points of the known_y's argument are 0 and the data points of the known_x's argument are 1:

□ LINEST returns a value of 0. The algorithm of the LINEST function is designed to return reasonable results for collinear data, and in this case at least one answer can be found.
□ SLOPE and INTERCEPT return a #DIV/0! error. The algorithm of the SLOPE and INTERCEPT functions is designed to look for only one answer, and in this case there can be more than one answer.

• In addition to using LOGEST to calculate statistics for other regression types, you can use LINEST to calculate a range of other regression types by entering functions of the x and y variables as the x and y series for LINEST. For example, the following formula:

=LINEST(yvalues, xvalues^COLUMN($A:$C))

works when you have a single column of y-values and a single column of x-values to calculate the cubic (polynomial of order 3) approximation of the form:

y = m1*x + m2*x^2 + m3*x^3 + b

You can adjust this formula to calculate other types of regression, but in some cases it requires the adjustment of the output values and other statistics.

• The F-test value that is returned by the LINEST function differs from the F-test value that is returned by the FTEST function. LINEST returns the F statistic, whereas FTEST returns the probability.

Example 1 - Slope and Y-Intercept

Copy the example data in the following table, and paste it in cell A1 of a new Excel worksheet. For formulas to show results, select them, press F2, and then press Enter. If you need to, you can adjust the column widths to see all the data.
Known y Known x
Result (slope) Result (y-intercept)
Formula (array formula in cells A7:B7)

Example 2 - Simple Linear Regression

Copy the example data in the following table, and paste it in cell A1 of a new Excel worksheet. For formulas to show results, select them, press F2, and then press Enter. If you need to, you can adjust the column widths to see all the data.

Month Sales
1 $3,100
2 $4,500
3 $4,400
4 $5,400
5 $7,500
6 $8,100

Formula Result
=SUM(LINEST(B1:B6, A1:A6)*{9,1}) $11,000

Calculates the estimate of the sales in the ninth month, based on sales in months 1 through 6.

Example 3 - Multiple Linear Regression

Copy the example data in the following table, and paste it in cell A1 of a new Excel worksheet. For formulas to show results, select them, press F2, and then press Enter. If you need to, you can adjust the column widths to see all the data.

Floor space (x1) Offices (x2) Entrances (x3) Age (x4) Assessed value (y)
2310 2 2 20 $142,000
2333 2 2 12 $144,000
2356 3 1.5 33 $151,000
2379 3 2 43 $150,000
2402 2 3 53 $139,000
2425 4 2 23 $169,000
2448 2 1.5 99 $126,000
2471 2 2 34 $142,900
2494 3 3 23 $163,000
2517 4 4 55 $169,000
2540 2 3 22 $149,000

Formula (dynamic array formula entered in A19)

Example 4 - Using the F and r^2 Statistics

In the preceding example, the coefficient of determination, or r^2, is 0.99675 (see cell A17 in the output for LINEST), which would indicate a strong relationship between the independent variables and the sale price. You can use the F statistic to determine whether these results, with such a high r^2 value, occurred by chance. Assume for the moment that in fact there is no relationship among the variables, but that you have drawn a rare sample of 11 office buildings that causes the statistical analysis to demonstrate a strong relationship. The term "Alpha" is used for the probability of erroneously concluding that there is a relationship.
The F and df values in output from the LINEST function can be used to assess the likelihood of a higher F value occurring by chance. F can be compared with critical values in published F-distribution tables or the FDIST function in Excel can be used to calculate the probability of a larger F value occurring by chance. The appropriate F distribution has v1 and v2 degrees of freedom. If n is the number of data points and const = TRUE or omitted, then v1 = n – df – 1 and v2 = df. (If const = FALSE, then v1 = n – df and v2 = df.) The FDIST function — with the syntax FDIST(F,v1,v2) — will return the probability of a higher F value occurring by chance. In this example, df = 6 (cell B18) and F = 459.753674 (cell A18). Assuming an Alpha value of 0.05, v1 = 11 – 6 – 1 = 4 and v2 = 6, the critical level of F is 4.53. Since F = 459.753674 is much higher than 4.53, it is extremely unlikely that an F value this high occurred by chance. (With Alpha = 0.05, the hypothesis that there is no relationship between known_y’s and known_x’s is to be rejected when F exceeds the critical level, 4.53.) You can use the FDIST function in Excel to obtain the probability that an F value this high occurred by chance. For example, FDIST(459.753674, 4, 6) = 1.37E-7, an extremely small probability. You can conclude, either by finding the critical level of F in a table or by using the FDIST function, that the regression equation is useful in predicting the assessed value of office buildings in this area. Remember that it is critical to use the correct values of v1 and v2 that were computed in the preceding paragraph. Example 5 - Calculating the t-Statistics Another hypothesis test will determine whether each slope coefficient is useful in estimating the assessed value of an office building in Example 3. For example, to test the age coefficient for statistical significance, divide -234.24 (age slope coefficient) by 13.268 (the estimated standard error of age coefficients in cell A15). 
The following is the t-observed value: t = m4 ÷ se4 = -234.24 ÷ 13.268 = -17.7 If the absolute value of t is sufficiently high, it can be concluded that the slope coefficient is useful in estimating the assessed value of an office building in Example 3. The following table shows the absolute values of the 4 t-observed values. If you consult a table in a statistics manual, you will find that t-critical, two tailed, with 6 degrees of freedom and Alpha = 0.05 is 2.447. This critical value can also be found by using the TINV function in Excel. TINV(0.05,6) = 2.447. Because the absolute value of t (17.7) is greater than 2.447, age is an important variable when estimating the assessed value of an office building. Each of the other independent variables can be tested for statistical significance in a similar manner. The following are the t-observed values for each of the independent variables. Variable t-observed value Floor space 5.1 Number of offices 31.3 Number of entrances 4.8 Age 17.7 These values all have an absolute value greater than 2.447; therefore, all the variables used in the regression equation are useful in predicting the assessed value of office buildings in this area.
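The t-test in Example 5 reduces to one division and a comparison against a critical value. The following is a minimal Python sketch of that comparison; the function name `is_significant` is hypothetical, and the critical value 2.447 (TINV(0.05,6)) is taken from the article rather than computed here.

```python
# Example 5's significance test: t_observed = coefficient / standard_error,
# then compare |t_observed| with the two-tailed critical value for
# df = 6 and Alpha = 0.05. Illustrative sketch only.

T_CRITICAL = 2.447  # TINV(0.05, 6), per the article; not computed here

def is_significant(coefficient, standard_error, t_critical=T_CRITICAL):
    t_observed = coefficient / standard_error
    return abs(t_observed) > t_critical, t_observed

# The age coefficient from Example 5: m4 = -234.24, se4 = 13.268.
significant, t = is_significant(-234.24, 13.268)
print(round(abs(t), 1), significant)  # 17.7 True
```

The same call with each of the other coefficient/standard-error pairs reproduces the rest of Example 5's table.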
Linear pair axiom: If a ray stands on a line, then the sum of the two adjacent angles so formed is 180 degrees

Two adjacent angles are said to form a linear pair of angles if their non-common arms are two opposite rays. The linear pair axiom states that if a ray stands on a line, then the sum of the two adjacent angles so formed is 180 degrees.

Introduce children to linear pairs of angles

Estimated Time
30 minutes

Prerequisites/Instructions, prior preparations, if any
Prior knowledge of points, lines, and angles

Materials/Resources needed
Download this geogebra file from this link.

Process (How to do the activity)
• Prior hands-on activity
• Start by coinciding point C with point B
• What is the angle formed by the line?
• Move point C above and slowly rotate it around point O
• How many angles do you notice?
• Name the angles formed: what are their measures?
• Do the two angles together form a 180° angle?
• Do the two angles form a linear pair?
• Record the values of the two angles for various positions of point C

Sl No. ∠BOA ∠BOC ∠COA ∠BOC + ∠COA Do the angles form a linear pair?

• Evaluation at the end of the activity
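The record-keeping step of the activity can also be checked numerically: once ∠BOC is known, the axiom fixes ∠COA. A small illustrative Python helper (the function name is my own, not part of the lesson plan):

```python
# Linear pair axiom as arithmetic: the two adjacent angles formed when a
# ray stands on a line always sum to 180 degrees, so each angle
# determines its partner. Illustrative helper for checking the table.

def linear_pair_partner(angle_deg):
    if not 0 < angle_deg < 180:
        raise ValueError("angle must be strictly between 0 and 180 degrees")
    return 180 - angle_deg

# Sample positions of ray OC, as in the activity's table.
for boc in (30, 90, 120):
    coa = linear_pair_partner(boc)
    print(boc, coa, boc + coa)  # the sum is always 180
```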
Logarithms Quiz: Challenge Your Exponentiation Skills

Logarithms Basics Questions and Answers

Test your knowledge and mastery of logarithmic concepts with our engaging Logarithms Quiz. This quiz is designed for students, educators, and anyone interested in enhancing their understanding of logarithms. It covers a wide range of topics, from the basics of logarithmic notation to more complex problems involving the properties and rules of logarithms. Questions range from solving simple logarithmic equations to applying logarithmic properties like the product, quotient, and power rules. Each question is designed to test your ability to apply logarithmic principles in various mathematical contexts. After completing the quiz, you'll receive feedback to help you identify areas for improvement. Take the Logarithms Quiz now and see how well you really understand the world of logarithms!

• 1. log₃x = 2, x = ?

Correct Answer: C. 9

Logarithms are essentially the inverse of exponents. When you see log₃x = 2, it's asking the question: "To what power (exponent) must we raise the base (3) to get the result (x)?"

Converting to Exponential Form: To solve, it's often helpful to rewrite the logarithmic equation in its equivalent exponential form. The general pattern is: logₐb = c <=> aᶜ = b

Applying this to our equation: log₃x = 2 <=> 3² = x

Solving: Now it's a simple calculation: 3² = 9, so x = 9.

Answer: 9

• 2. logₓ32 = 5, x = ?

Correct Answer: D. 2

This equation is a bit different because the unknown (x) is the base of the logarithm.

Converting to Exponential Form: Again, let's rewrite in exponential form: logₓ32 = 5 <=> x⁵ = 32

Solving: To find x, we need to think: "What number, when raised to the power of 5, equals 32?" The answer is 2, since 2⁵ = 32.

Answer: 2

• 3. logₓ125 = 3, x = ?

Correct Answer: B. 5

Understanding: Similar to the previous question, the unknown is the base. We need to find the number that, when raised to the power of 3, equals 125.
Converting to Exponential Form: logₓ125 = 3 <=> x³ = 125

Solving: The cube root of 125 is 5 (5 × 5 × 5 = 125), so x = 5.

Answer: 5

• 4. log₅x = 3, x = ?

Correct Answer: B. 125

This is the more common form, where the unknown is the result. What do we get when we raise 5 to the power of 3?

Converting to Exponential Form: log₅x = 3 <=> 5³ = x

Solving: 5³ = 5 × 5 × 5 = 125, so x = 125.

Answer: 125

• 5. logᵧ512 = 3, y = ?

Correct Answer: A. 8

The unknown is the base. We need to find the number that, when raised to the power of 3, results in 512.

Converting to Exponential Form: logᵧ512 = 3 <=> y³ = 512

Solving: The cube root of 512 is 8 (8 × 8 × 8 = 512), so y = 8.

Answer: 8

• 6. What is the value of log₁₀ 100?

Correct Answer: C. 2

Logarithms are the inverse operation of exponentiation. In other words, we need to find the exponent x such that 10ˣ = 100. We know that 10² = 100, so the value of the logarithm is 2. Therefore, log₁₀ 100 = 2.

• 7. Which of the following is the logarithmic form of 1000 = 10³?

Correct Answer: A. log₁₀ 1000 = 3

To convert an exponential equation to logarithmic form, we use the following rule: bʸ = x can be written as log_b x = y. In the equation 1000 = 10³, the base b is 10, the exponent y is 3, and the result x is 1000. Thus, the logarithmic form is log₁₀ 1000 = 3. This means "10 raised to the power of 3 equals 1000."

• 8. If logₐ x = 3, what is the value of x?

Correct Answer: A. x = a³

Logarithms are the inverse of exponentiation. To find the value of x, we rewrite the logarithmic equation in exponential form: a³ = x. Therefore, x = a³.

• 9. Which of the following is the logarithmic property of logₐ(xy)?

Correct Answer: A. logₐ x + logₐ y

The logarithmic property of logₐ(xy) is based on the product rule, which states: logₐ(xy) = logₐ x + logₐ y. This means that the logarithm of a product is equal to the sum of the logarithms of the factors.
So if you have a product inside the logarithm (logₐ(xy)), you can break it down into the sum of two separate logarithms, logₐ x and logₐ y.

• 10. What is the value of log₁₀ 0.01?

Correct Answer: A. -2

The logarithmic expression log₁₀ 0.01 asks, "To what power must 10 be raised to result in 0.01?" In other words, we need to find the exponent x such that 10ˣ = 0.01. We know that 10⁻² = 0.01, so the value of log₁₀ 0.01 is -2. Therefore, the correct answer is -2.
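Each quiz answer above is just the statement log_b(x) = y rewritten as b**y == x, so all of them can be verified numerically. An illustrative Python check (not part of the quiz itself):

```python
import math

# Verifying the quiz answers: every log_b(x) = y above is equivalent
# to b**y == x. Purely an illustrative check of the answer key.

assert math.isclose(math.log(9, 3), 2)       # Q1: log_3 x = 2  ->  x = 9
assert 2 ** 5 == 32                          # Q2: log_x 32 = 5 ->  x = 2
assert 5 ** 3 == 125                         # Q3 and Q4: base 5, exponent 3
assert 8 ** 3 == 512                         # Q5: log_y 512 = 3 -> y = 8
assert math.isclose(math.log10(100), 2)      # Q6
assert math.isclose(math.log10(1000), 3)     # Q7: 1000 = 10^3
assert math.isclose(math.log10(0.01), -2)    # Q10

# Q9, the product rule: log_a(xy) = log_a(x) + log_a(y)
assert math.isclose(math.log(6, 2), math.log(2, 2) + math.log(3, 2))

print("all quiz answers check out")
```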
Examine whether you can construct ∆DEF such that EF = 7.2 cm, m∠E = 110° and m∠F = 80°. Justify your answer. (Class 7 Maths NCERT Solutions, Chapter 10, Exercise 10.4, Question 3)

For the given triangle we can find the third angle by using the angle sum property of a triangle. If the angle sum property is satisfied, then it is possible to construct ∆DEF such that EF = 7.2 cm, m∠E = 110°, and m∠F = 80°; if not, then we cannot construct the triangle.

By the angle sum property of a triangle:
∠E + ∠F + ∠D = 180°
110° + 80° + ∠D = 180°
So, ∠D = -10°

An angle of -10° is not possible as it is negative, thus we cannot construct triangle ∆DEF.

We have examined whether ∆DEF can be constructed such that EF = 7.2 cm, m∠E = 110°, and m∠F = 80°. We get -10° as the third angle, which is not possible, thus we cannot construct triangle ∆DEF.
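The angle-sum check used above generalizes: a triangle with two given angles is constructible only if the third angle, 180° minus their sum, is positive. An illustrative Python sketch of that reasoning (the helper name is my own, not from the textbook):

```python
# Feasibility check via the angle sum property: given two angles of a
# triangle, the third is 180 - (A + B); the triangle is constructible
# only if that third angle is positive. Illustrative sketch.

def third_angle(e_deg, f_deg):
    d = 180 - (e_deg + f_deg)
    return d, d > 0

d, constructible = third_angle(110, 80)   # the angles from this question
print(d, constructible)  # -10 False: triangle DEF cannot be constructed
```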
Texas Go Math Grade 6 Lesson 4.1 Answer Key Multiplying Decimals

Refer to our Texas Go Math Grade 6 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 6 Lesson 4.1 Answer Key Multiplying Decimals.

Texas Go Math Grade 6 Lesson 4.1 Explore Activity Answer Key

Use decimal grids or area models to find each product.

(A) 0.3 × 0.5
0.3 × 0.5 represents 0.3 of 0.5. Use a decimal grid. Shade 5 rows of the grid to represent 0.5. Shade 0.3 of each 0.1 that is already shaded to represent 0.3 of _____________ . _____________ square(s) are double-shaded. This represents ________ hundredth(s), or 0.15. 0.3 × 0.5 = _____________

(B) 3.2 × 2.1 ______
Use an area model. Each row contains 3 wholes + 2 tenths. Each column contains _________ whole(s) + ________ tenth(s). The entire area model represents ________ whole(s) + ________ tenth(s) + ________ hundredth(s). 3.2 × 2.1 = ________

Question 1.
Analyze Relationships How are the products 2.1 × 3.2 and 21 × 32 alike? How are they different?
The products of 2.1 × 3.2 and 21 × 32 are numbers with the same digits, but the result of 2.1 × 3.2 has two decimal places while the result of 21 × 32 is a whole number; that is the difference.

Go Math Lesson 4.1 6th Grade Answer Key Question 2.
Communicate Mathematical Ideas How can you use estimation to check that you have placed the decimal point correctly in your product?
We can check it by multiplying the whole numbers nearest to the given decimals. The result should be close to the exact result.

Your Turn

Question 3.
Here, the first factor has one decimal place, as does the second factor.
So, the product will have two decimal places. We have the following:
So, the result is 192.78

Question 4.
In both factors there are two decimal places, so the product will have four decimal places. We have the following:
So, the product is 4.4896

Question 5.
In both factors there are two decimal places, so the product will have four decimal places. We have the following:
So, the product is 48.4092

Go Math Grade 6 Lesson 4.1 Answer Key Question 6.
In both factors there are two decimal places, so the product will have four decimal places. We have the following:
So, the product is 95.0223

Question 7.
Rico bicycles at an average speed of 15.5 miles per hour. What distance will Rico bicycle in 2.5 hours? ___________ miles
In order to find what distance Rico will bicycle in 2.5 hours, we multiply 15.5 by 2.5. The first factor has one decimal place and the second has one decimal place, so the product will have two decimal places. We have the following:
The conclusion is that Rico will bicycle 38.75 miles in 2.5 hours.

Question 8.
Use estimation to show that your answer to 7 is reasonable.
We can multiply 15 by 2 and get 30. After that, we can multiply 16 by 3 and get 48. We can add 30 and 48 and we get 78. After this, we can divide 78 by 2 and get 39. So, the answer is reasonable because 39 is close to 38.75.
Multiply 15 by 2 and 16 by 3 and sum the products. Divide that sum by 2.

Texas Go Math Grade 6 Lesson 4.1 Guided Practice Answer Key

Question 1.
Use the grid to multiply 0.4 × 0.7
0.4 × 0.7 = _______________
0.4 × 0.7 represents 0.4 of 0.7. We will use a decimal grid and shade 7 rows of the grid to represent 0.7. Now, we will shade 0.4 of each 0.1 that is already shaded to represent 0.4 of 1. So, now we have 28 squares which are double-shaded. This represents 28 hundredths, or 0.28. So, we have the following:
0.4 × 0.7 = 0.28

Multiplying Decimals Grade 6 Lesson 4.1 Question 2.
Draw an area model to multiply 1.1 × 2.4
1.1 × 2.4 _______________
We will use an area model. Here, each row contains 1 whole + 1 tenth. Also, each column contains 2 wholes + 4 tenths. So, the entire area model represents: 2 wholes + 6 tenths + 4 hundredths. The conclusion is that:
1.1 × 2.4 = 2.64

Question 3.
0.18 × 0.06 = _______________
In both factors there are two decimal places, so the product will have four decimal places. We have the following:
So, the product is 0.0108

Question 4.
35.15 × 3.7 = _______________
Here, the first factor has two decimal places, but the second has one decimal place, so the product will have three decimal places.
So, the result is 130.055

Question 5.
0.96 × 0.12 = _______________
In both factors there are two decimal places, so the product will have four decimal places. We have the following:
So, the product is 0.1152

Go Math Lesson 4.1 Answer Key Multiplying Decimals 6th Grade Question 6.
62.19 × 32.5 = _______________
Here, the first factor has two decimal places and the second has one, so the product will have three decimal places. We have the following:
So, the product is 2,021.175

Question 7.
3.4 × 4.37 = _______________
Here, the first factor has one decimal place, but the second has two decimal places, so the product will have three decimal places. We have the following:
So, the product is 14.858

Question 8.
3.762 × 0.66 = _______________
Here, the first factor has three decimal places, but the second has two, so the product will have five decimal places.
So, the product is 2.48292

Question 9.
Chan Hee bought 3.4 pounds of coffee that cost $6.95 per pound. How much did he spend on coffee? $ ___________________
In order to calculate how much money Chan Hee spent on coffee, we need to multiply 3.4 by 6.95. The first factor has one decimal place, but the second factor has two decimal places.
So, the product will have three decimal places.
So, Chan Hee spent $23.630 on coffee.

Question 10.
Adita earns $9.40 per hour working at an animal shelter. How much money will she earn for 18.5 hours of work? $_______________
In order to calculate how much money Adita will earn for 18.5 hours, we need to multiply 9.40 by 18.5. In the first factor there are two decimal places but in the second there is one decimal place. So, the product will have three decimal places:
So, Adita will earn $173.900 for 18.5 hours of work.

Catherin tracked her gas purchases for one month.

Go Math Lesson 4.1 6th Grade Multiplying Decimals Question 11.
How much did Catherin spend on gas in week 2? $ ___________________________
In order to calculate how much Catherin spent on gas in week 2, we have to multiply 11.5 by 2.54. We can notice that the result will have three decimal places:
Catherin spent $29.210 on gas in week 2.

Question 12.
How much more did she spend in week 4 than in week 1? $ ____________________________
First, we have to calculate how much Catherin spent on gas in week 1. We have to multiply 10.4 by 2.65. Here, the first factor has one decimal place but the second one has two, so the product will have three decimal places.
The conclusion is that Catherin spent $27.560 on gas in week 1. Now, we will calculate how much Catherin spent on gas in week 4. We need to multiply 10.6 by 2.70. We can notice that the result will have three decimal places:
We can see that Catherin spent $28.620 on gas in week 4. Finally, we will subtract 27.560 from 28.620 in order to find how much more Catherin spent on gas in week 4 than in week 1:
28.620 - 27.560 = 1.06
Catherin spent $1.06 more on gas in week 4 than in week 1.

Essential Question Check-In

Question 13.
How can you check the answer to a decimal multiplication problem?
We can check our answer to a decimal multiplication problem using a grid or by drawing an area model.

Make a reasonable estimate for each situation.
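The decimal-place rule used throughout this lesson (a factor with p decimal places times a factor with q decimal places gives a product with p + q decimal places) can be demonstrated with exact decimal arithmetic. An illustrative Python sketch using the standard-library `decimal` module, not part of the Go Math materials:

```python
from decimal import Decimal

# Decimal (unlike float) keeps the exact number of decimal places, so the
# p + q rule from the lesson is visible directly in the printed results.
# Illustrative sketch; the helper name is my own.

def multiply_decimals(a, b):
    return Decimal(a) * Decimal(b)

print(multiply_decimals("0.3", "0.5"))   # 0.15  (1 + 1 = 2 places, Explore Activity A)
print(multiply_decimals("3.2", "2.1"))   # 6.72  (Explore Activity B)
print(multiply_decimals("15.5", "2.5"))  # 38.75 (Rico's distance, Question 7)
```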
Question 14.
A gallon of water weighs 8.354 pounds. Simon uses 11.81 gallons of water while taking a shower. About how many pounds of water did Simon use?
In order to find how many pounds of water Simon used, we need to multiply 8.354 by 11.81. The first factor has three decimal places but the second one has two, so the result will have five decimal places.
Simon used 98.66074 pounds of water.

Question 15.
A snail moves at a speed of 2.394 inches per minute. If the snail keeps moving at this rate, about how many inches will it travel in 7.489 minutes?
In order to calculate how many inches the snail will travel in 7.489 minutes, we need to multiply 2.394 by 7.489. Both factors have three decimal places, so the product will have six decimal places.
The snail will travel 17.928666 inches if it keeps moving at this rate.

6th Grade Go Math Multiplying Decimals Lesson 4.1 Question 16.
Tricia's garden is 9.87 meters long and 1.09 meters wide. What is the area of her garden?
In order to calculate the area of Tricia's garden, we need to multiply 9.87 by 1.09. Both factors have two decimal places, so the product will have four decimal places.
So, the area of Tricia's garden is 10.7583 square meters.

Kaylynn and Amanda both work at the same store. The table shows how much each person earns, and the number of hours each person works in a week.

Question 17.
Estimate how much Kaylynn earns in a week.
We will multiply 9 (because 9 is closest to 8.75) by 37. So, we have the following:
9 × 37 = 333
So, Kaylynn earns about $333 per week.

Question 18.
Estimate how much Amanda earns in a week.
We will multiply 10 (because 10 is closest to 10.25) by 31. So, we have the following:
10 × 31 = 310
So, Amanda earns about $310 per week.

Question 19.
Calculate the exact difference between Kaylynn and Amanda's weekly salaries.
We will multiply 8.75 by 37.5 in order to calculate Kaylynn's weekly salary. The first factor has two decimal places but the second one has one decimal place, so the product will have three decimal places.
So, Kaylynn earns $328.125 per week. Now, we will calculate how much Amanda earns per week. So, we will multiply 10.25 by 30.5. Here, the first factor has two decimal places but the second one has one decimal place, so the product will have three decimal places.
So, Amanda earns $312.625 per week. Now, we will calculate the exact difference between Kaylynn and Amanda's weekly salaries by subtracting 312.625 from 328.125:
328.125 - 312.625 = 15.5
So, the exact difference between their salaries is $15.50.

Question 20.
Victoria's printer can print 8.804 pages in one minute. If Victoria prints pages for 0.903 minutes, about how many pages will she have?
In order to calculate how many pages Victoria will have, we need to multiply 8.804 by 0.903. Both factors have three decimal places, so their product will have six decimal places.
So, Victoria will have 7.950012 pages.

A taxi charges a flat fee of $4.00 plus $2.25 per mile.

Question 21.
How much will it cost to travel 8.7 miles? ___________________
In order to calculate how much it will cost to travel 8.7 miles, we need to multiply 8.7 by 2.25 and add the $4.00 flat fee to that product. The first factor has one decimal place and the second one has two, so the product will have three decimal places.
To 19.575 we will add 4.00 and get: 19.575 + 4.00 = 23.575
So, it will cost $23.575 to travel 8.7 miles.

Question 22.
Multistep How much will the taxi driver earn if he takes one passenger 4.8 miles and another passenger 7.3 miles? Explain your process.
If the taxi driver takes one passenger 4.8 miles, we will calculate how much he will earn in this case. We will first multiply 4.8 by 2.25 and add the $4.00 flat fee to that product. The first factor has one decimal place but the second has two, so the product will have three decimal places.
To 10.800 we will add 4.00 and get: 10.800 + 4.00 = 14.800
So, it will cost $14.800 if the taxi driver takes this passenger 4.8 miles. Now, we will calculate how much he will earn if he takes another passenger 7.3 miles.
We will first multiply 7.3 by 2.25 and add the $4.00 flat fee to that product. The first factor has one decimal place and the second has two, so the product will have three decimal places: 7.3 × 2.25 = 16.425. To 16.425 we add 4.00 and get: 16.425 + 4.00 = 20.425. So, it will cost $20.425 if the taxi driver takes this passenger 7.3 miles. If he takes both passengers, he will earn: 14.800 + 20.425 = 35.225. So, the taxi driver will earn $35.225.

Kay goes for several bike rides one week. The table shows her speed and the number of hours spent per ride.

Question 23. How many miles did Kay bike on Thursday? In order to calculate how many miles Kay biked on Thursday, we need to multiply 10.75 by 1.9. The first factor has two decimal places and the second has one, so the product will have three decimal places. The conclusion is that Kay biked 20.425 miles on Thursday.

Multiplying Decimals for 6th Grade Go Math Lesson 4.1

Question 24. On which day did Kay bike a whole number of miles? We can notice that on Friday Kay biked a whole number of miles. Indeed, to calculate it, we need to multiply 8.8 by 3.75. The product will have three decimal places, so we have the following: 8.8 × 3.75 = 33.000. We can see that Kay biked 33 miles on Friday.

Question 25. What is the difference in miles between Kay’s longest bike ride and her shortest bike ride? Kay’s longest bike ride was on Monday. Indeed, we need to multiply 8.2 by 4.25 in order to calculate the length of this bike ride. We can notice that the product will have three decimal places. So, Kay biked 34.850 miles on Monday. Her shortest bike ride was on Thursday; we already calculated it in Question 23. According to it, Kay biked 20.425 miles on Thursday. Now we will calculate the difference in miles between Kay’s longest and her shortest bike ride by subtracting the Thursday miles from the Monday miles: 34.850 – 20.425 = 14.425. So, the required difference is 14.425 miles.

Question 26.
Check for Reasonableness Kay estimates that Wednesday’s ride was about 3 miles longer than Tuesday’s ride. Is her estimate reasonable? Explain. Yes, her estimate is reasonable. We will first estimate Kay’s bike ride on Tuesday. In order to estimate it, we will multiply 10 by 3 and get: 10 × 3 = 30. Now, we will estimate Kay’s bike ride on Wednesday. In order to estimate it, we will multiply 11 by 3 and get: 11 × 3 = 33. The difference between the estimates is 33 – 30 = 3 miles, so Kay’s estimate is reasonable.

H.O.T. Higher Order Thinking

Question 27. Explain the Error To estimate the product 3.48 × 7.33, Marisa multiplied 4 × 8 to get 32. Explain how she can make a closer estimate. She can make a closer estimate by multiplying 3 by 7, because 3 is the closest whole number to 3.48 and 7 is the closest whole number to 7.33: 3 × 7 = 21.

Question 28. Represent Real-World Problems A jeweler buys gold jewelry and resells the gold to a refinery. The jeweler buys gold for $1,235.55 per ounce and then resells it for $1,376.44 per ounce. How much profit does the jeweler make from buying and reselling 73.5 ounces of gold? We will first calculate how much the jeweler pays for 73.5 ounces of gold by multiplying 1,235.55 by 73.5. The first factor has two decimal places and the second has one, so the product will have three decimal places. So, the jeweler will pay $90,812.925. Now, we will calculate how much he will get if he resells 73.5 ounces by multiplying 1,376.44 by 73.5. The first factor has two decimal places and the second has one, so the product will have three decimal places. So, he will get $101,168.340 if he resells the gold. Now, we will calculate how much profit the jeweler will make from buying and reselling by subtracting 90,812.925 from 101,168.340: 101,168.340 – 90,812.925 = 10,355.415. So, his profit will be $10,355.415.

Question 29. Problem Solving To find the weight of the gold in a 22 karat gold object, multiply the object’s weight by 0.916.
To find the weight of gold in a 14 karat gold object, multiply the object’s weight by 0.585. Which contains more gold, a 22 karat gold object or a 14 karat gold object that each weigh 73.5 ounces? How much more gold does it contain? First we will calculate the gold content of the 22 karat object by multiplying the object’s weight, which is 73.5 ounces, by 0.916. The product will have four decimal places: 73.5 × 0.916 = 67.3260. So, a 22 karat gold object contains 67.326 ounces of gold. Now we will calculate the gold content of the 14 karat gold object by multiplying the object’s weight, which is 73.5 ounces, by 0.585. The product will have four decimal places: 73.5 × 0.585 = 42.9975. So, a 14 karat gold object contains 42.9975 ounces of gold. We can see that the 22 karat gold object contains more gold. Now we will calculate how much more gold it contains by subtracting 42.9975 from 67.3260: 67.3260 – 42.9975 = 24.3285. So, a 22 karat gold object contains 24.3285 ounces more gold than a 14 karat gold object of equal weight. a 22 karat object; 24.3285
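The decimal-place rule used throughout these answers (the product has as many decimal places as its two factors combined) can be double-checked with Python's decimal module. This sketch is only an illustration and is not part of the Go Math materials.

```python
from decimal import Decimal

# Re-check a few of the products above with exact decimal arithmetic.
# Decimal keeps the combined number of decimal places, matching the rule.
print(Decimal("8.354") * Decimal("11.81"))                 # 98.66074 (Question 14)
print(Decimal("9.87") * Decimal("1.09"))                   # 10.7583  (Question 16)
print(Decimal("2.25") * Decimal("8.7") + Decimal("4.00"))  # 23.575   (Question 21)
```

Using strings rather than floats (`Decimal("8.354")`, not `Decimal(8.354)`) is what keeps the arithmetic exact.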
How Amplitude Experiment uses sequential testing for statistical inference

This article will help you: • Familiarize yourself with the statistical testing method used by Amplitude Experiment

Amplitude Experiment uses a sequential testing method of statistical inference. Sequential testing has several advantages over T-tests, another widely-used method, chief among them being that you don’t need to know how many observations you’ll need to achieve significance before you start the experiment. Why is this important? With sequential testing, results are valid whenever you view them. That means you can decide to terminate an experiment early based on observations made to that point, and that the number of observations you’ll need to make an informed decision is, on average, much lower than the number you’d need when using a T-test or similar procedures. You can experiment more quickly, incorporating your new learnings into your product and escalating the pace of your experimentation program. This article will explain the basics of sequential testing, how it fits into Amplitude Experiment, and how you can make it work for you.

Hypothesis testing in Amplitude Experiment

When you run an A/B test, Experiment conducts a hypothesis test using a randomized control trial, in which users are randomly assigned to either a treatment variant or the control. The control represents your product as it currently is, while each treatment includes a set of potential changes to your current baseline product. With a predetermined metric, Experiment compares the performance of these two populations using a test statistic. In a hypothesis test, you’re looking for performance differences between the control and your treatment variants. Amplitude Experiment tests the null hypothesis H0, where H0 states there’s no difference between the treatment’s mean and the control’s mean.
For example, if you’re interested in measuring the conversion rate of a treatment variant, the null hypothesis posits that the conversion rates of your treatment variants and your control are the same. The alternative hypothesis states that there is a difference between the treatment and control. Experiment’s statistical model uses sequential testing to look for any difference between treatments and control. There are a number of different sequential testing options. Amplitude Experiment uses a family of sequential tests called the mixture sequential probability ratio test (mSPRT). The weight function, H, is the mixing distribution. So we get the following mixture of likelihood ratios against the null hypothesis that θ = θ0:

Λn(H) = ∫ ∏i=1..n [ fθ(Xi) / fθ0(Xi) ] dH(θ)

Currently, Amplitude only supports a comparison of arithmetic means between the treatment and control variants for uniques, average totals, and sum of property.

NOTE: Read more about sequential testing in this help center article on frequently asked questions, including how sequential testing compares to the T-test.

Have more questions? Check out the Amplitude Community.
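For intuition, here is a small Python sketch of a mixture likelihood ratio of the kind mSPRT uses. It assumes, hypothetically, normally distributed treatment–control differences with known variance `sigma2` and a Normal(0, `tau2`) mixing distribution H, a choice for which the mixture integral has a closed form; it is not Amplitude's actual implementation.

```python
import math

def msprt_lr(diffs, sigma2=1.0, tau2=1.0):
    """Mixture likelihood ratio against H0: mean difference = 0,
    for Normal(theta, sigma2) data mixed over theta ~ Normal(0, tau2)."""
    n = len(diffs)
    s = sum(diffs)                      # running sum of observed differences
    denom = sigma2 + n * tau2
    return math.sqrt(sigma2 / denom) * math.exp(tau2 * s * s / (2 * sigma2 * denom))

# Monitor after every observation; reject H0 once the ratio reaches 1/alpha.
alpha = 0.05
observed = [2.5, 2.5, 2.5, 2.5, 2.5]    # made-up treatment-control differences
for i in range(1, len(observed) + 1):
    if msprt_lr(observed[:i]) >= 1 / alpha:
        print(f"significant after {i} observations")
        break
```

Because the threshold 1/alpha is valid at every sample size simultaneously, the analyst may stop the first time it is crossed, which is the "results are valid whenever you view them" property the article describes.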
The number of maximal left-compressed intersecting families

A family of sets is intersecting if every pair of its sets intersect; the basic example is a star, the family of all sets containing some fixed element. A hands-on proof of the Erdős–Ko–Rado theorem uses a tool called compression. A family is left-compressed if it is unchanged by every compression, that is, by every operation that replaces an element of a set by a smaller element not already present.

There is a strong stability result for large intersecting families. The Hilton–Milner family consists of all sets that contain a fixed element and meet a fixed set not containing it, together with that fixed set itself.

As part of an alternative proof of the Hilton–Milner theorem, Peter Borg partially answered the following question. Borg used the fact that this is true for

Maximum hitting for

I completed the classification; the counts of such families are now in the OEIS. In the rest of this post I’ll explain how I obtained these numbers.

We want to count maximal left-compressed intersecting families of

Here’s one concrete algorithm.

1. Generate a list of all sets from each complementary pair for which the choice between the set and its complement is not forced.
2. Put all of the forced sets into the family.
3. Let a be the first set in the list of options: either take a, or leave a and take its complement instead; in each case discard the options this choice forces.
4. Repeat recursively on each of the two lists generated in the previous step. Stop on each branch whenever the list of remaining options is empty.

The following is a fairly direct translation of this algorithm into Haskell that makes no attempt to store the families generated and just counts the number of possibilities. A source file with the necessary imports and the choose function is attached to the end of this post.
r = 5

simpleOptions = [a | a <- choose r [1..(2*r-1)], not $ a `simpleLeftOf` (simpleComplement a)]

simpleLeftOf xs ys = all id $ zipWith (<=) xs ys

simpleComplement a = [1..(2*r)] \\ a

simpleCount [] = 1
simpleCount (a:as) = simpleCount take + simpleCount leave
  where
    -- take a
    -- all pairs with b < a or b^c < a are forced
    -- second case never happens as b^c has 2r but a doesn't
    take = [b | b <- as, not $ b `simpleLeftOf` a]
    -- leave a, and so take a^c
    -- all pairs with b < a^c or b^c < a^c (equivalently, a < b) are forced
    c = simpleComplement a
    leave = [b | b <- as, not (b `simpleLeftOf` c || a `simpleLeftOf` b)]

This will compute the number of maximal left-compressed intersecting families for r = 5.

The dream is to pack all of the elements of our list into a single machine word and perform each comparison in a small number of instructions. For example, we could encode an element of

Edward Crane suggested that as the lists are so short and the elements are so small we can afford to be quite a lot more wasteful in our representation: we can write each element of our set in unary! The rest of this section should be considered joint work with him.

The first iteration of the idea is to write each element

Unfortunately this representation uses 72 bits in total, so it won’t fit into a 64-bit machine word. Observing that we never use

Identify each element of

The value for

The Haskell source is here. There are a few more performance tricks to do with the exact bit representation of the sets, which I’m happy to discuss if anything is unclear.
Unit of measurement - Wikiwand A unit of measurement, or unit of measure, is a definite magnitude of a quantity, defined and adopted by convention or by law, that is used as a standard for measurement of the same kind of quantity.^[1] Any other quantity of that kind can be expressed as a multiple of the unit of measurement.^[2] The former Weights and Measures office in Seven Sisters, London Units of measurement, Palazzo della Ragione, Padua For example, a length is a physical quantity. The metre (symbol m) is a unit of length that represents a definite predetermined length. For instance, when referencing "10 metres" (or 10 m), what is actually meant is 10 times the definite predetermined length called "metre". The definition, agreement, and practical use of units of measurement have played a crucial role in human endeavour from early ages up to the present. A multitude of systems of units used to be very common. Now there is a global standard, the International System of Units (SI), the modern form of the metric system. In trade, weights and measures are often a subject of governmental regulation, to ensure fairness and transparency. The International Bureau of Weights and Measures (BIPM) is tasked with ensuring worldwide uniformity of measurements and their traceability to the International System of Units (SI). Metrology is the science of developing nationally and internationally accepted units of measurement. In physics and metrology, units are standards for measurement of physical quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method. A standard system of units facilitates this.
Scientific systems of units are a refinement of the concept of weights and measures historically developed for commercial purposes.^[3] Science, medicine, and engineering often use larger and smaller units of measurement than those used in everyday life. The judicious selection of the units of measurement can aid researchers in problem solving (see, for example, dimensional analysis). In the social sciences, there are no standard units of measurement. A unit of measurement is a standardized quantity of a physical property, used as a factor to express occurring quantities of that property. Units of measurement were among the earliest tools invented by humans. Primitive societies needed rudimentary measures for many tasks: constructing dwellings of an appropriate size and shape, fashioning clothing, or bartering food or raw materials. The earliest known uniform systems of measurement seem to have all been created sometime in the 4th and 3rd millennia BC among the ancient peoples of Mesopotamia, Egypt and the Indus Valley, and perhaps also Elam in Persia as well. Weights and measures are mentioned in the Bible (Leviticus 19:35–36). It is a commandment to be honest and have fair measures. In the Magna Carta of 1215 (The Great Charter) with the seal of King John, put before him by the Barons of England, King John agreed in Clause 35 "There shall be one measure of wine throughout our whole realm, and one measure of ale and one measure of corn—namely, the London quart;—and one width of dyed and russet and hauberk cloths—namely, two ells below the selvage..." As of the 21st century, the International System is predominantly used in the world. There exist other unit systems which are used in many places such as the United States Customary System and the Imperial System. 
The United States is the only industrialized country that has not yet at least mostly converted to the metric system.^[4] The systematic effort to develop a universally acceptable system of units dates back to 1790 when the French National Assembly charged the French Academy of Sciences to come up with such a unit system. This system was the precursor to the metric system which was quickly developed in France but did not take on universal acceptance until 1875 when The Metric Convention Treaty was signed by 17 nations. After this treaty was signed, a General Conference of Weights and Measures (CGPM) was established. The CGPM produced the current SI, which was adopted in 1954 at the 10th Conference of Weights and Measures. Currently, the United States is a dual-system society which uses both the SI and the US Customary system.^[5]^[6] The use of a single unit of measurement for some quantity has obvious drawbacks. For example, it is impractical to use the same unit for the distance between two cities and the length of a needle. Thus, historically they would develop independently. One way to make large numbers or small fractions easier to read, is to use unit prefixes. At some point in time though, the need to relate the two units might arise, and consequently the need to choose one unit as defining the other or vice versa. For example, an inch could be defined in terms of a barleycorn. A system of measurement is a collection of units of measurement and rules relating them to each other. As science progressed, a need arose to relate the measurement systems of different quantities, like length and weight and volume. The effort of attempting to relate different traditional systems between each other exposed many inconsistencies, and brought about the development of new units and systems. Systems of units vary from country to country.
Some of the different systems include the centimetre–gram–second, foot–pound–second, metre–kilogram–second systems, and the International System of Units, SI. Among the different systems of units used in the world, the most widely used and internationally accepted one is SI. The base SI units are the second, metre, kilogram, ampere, kelvin, mole and candela; all other SI units are derived from these base units.^[7]^[8]^:132 Systems of measurement in modern use include the metric system, the imperial system, and United States customary units. Traditional systems Historically many of the systems of measurement which had been in use were to some extent based on the dimensions of the human body. Such units, which may be called anthropic units, include the cubit, based on the length of the forearm; the pace, based on the length of a stride; and the foot and hand.^[9]^:25 As a result, units of measure could vary not only from location to location but from person to person. Units not based on the human body could be based on agriculture, as is the case with the furlong and the acre, both based on the amount of land able to be worked by a team of oxen. Legal control of weights and measures To reduce the incidence of retail fraud, many national statutes have standard definitions of weights and measures that may be used (hence "statute measure"), and these are verified by legal officers. Informal comparison to familiar concepts In informal settings, a quantity may be described as multiples of that of a familiar entity, which can be easier to contextualize than a value in a formal unit system. For instance, a publication may describe an area in a foreign country as a number of multiples of the area of a region local to the readership. The propensity for certain concepts to be used frequently can give rise to loosely defined "systems" of units.^[13]^[14] For most quantities a unit is necessary to communicate values of that physical quantity.
For example, conveying to someone a particular length without using some sort of unit is impossible, because a length cannot be described without a reference used to make sense of the value given. But not all quantities require a unit of their own. Using physical laws, units of quantities can be expressed as combinations of units of other quantities. Thus only a small set of units is required. These units are taken as the base units and the other units are derived units. Thus base units are the units of the quantities which are independent of other quantities and they are the units of length, mass, time, electric current, temperature, luminous intensity and the amount of substance. Derived units are the units of the quantities which are derived from the base quantities and some of the derived units are the units of speed, work, acceleration, energy, pressure etc.^[7] Different systems of units are based on different choices of a set of related units including fundamental and derived units. Following ISO 80000-1,^[15] any value or magnitude of a physical quantity is expressed as a comparison to a unit of that quantity. The value of a physical quantity Z is expressed as the product of a numerical value {Z} (a pure number) and a unit [Z]: ${\displaystyle Z=\{Z\}\times [Z]}$ For example, let ${\displaystyle Z}$ be "2 metres"; then, ${\displaystyle \{Z\}=2}$ is the numerical value and ${\displaystyle [Z]=\mathrm {metre} }$ is the unit. Conversely, the numerical value expressed in an arbitrary unit can be obtained as: ${\displaystyle \{Z\}=Z/[Z]}$ The multiplication sign is usually left out, just as it is left out between variables in the scientific notation of formulas. The convention used to express quantities is referred to as quantity calculus. In formulas, the unit [Z] can be treated as if it were a specific magnitude of a kind of physical quantity: see Dimensional analysis for more on this treatment.
Units can only be added or subtracted if they are the same type; however units can always be multiplied or divided, as George Gamow used to explain. Let ${\displaystyle Z}$ be "2 metres" and ${\displaystyle W}$ "3 seconds", then ${\displaystyle 2\,\mathrm {metres} \times 3\,\mathrm {seconds} =\{Z\}\{W\}\times [Z][W]=6\,\mathrm {metres} \times \mathrm {seconds} }$. There are certain rules that apply to units: • Only like terms may be added. When a unit is divided by itself, the division yields a unitless one. When two different units are multiplied or divided, the result is a new unit, referred to by the combination of the units. For instance, in SI, the unit of speed is metre per second (m/s). See dimensional analysis. A unit can be multiplied by itself, creating a unit with an exponent (e.g. m^2/s^2). Put simply, units obey the laws of indices. (See Exponentiation.) • Some units have special names, however these should be treated like their equivalents. For example, one newton (N) is equivalent to 1 kg⋅m/s^2. Thus a quantity may have several unit designations, for example: the unit for surface tension can be referred to as either N/m (newton per metre) or kg/s^2 (kilogram per second squared). Conversion of units is the conversion of the unit of measurement in which a quantity is expressed, typically through a multiplicative conversion factor that changes the unit without changing the quantity. This is also often loosely taken to include replacement of a quantity with a corresponding quantity that describes the same physical property. Unit conversion is often easier within a metric system such as the SI than in others, due to the system's coherence and its metric prefixes that act as power-of-10 multipliers.
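The rules above (only like terms may be added; units multiply and divide by combining) can be illustrated with a toy Python model of quantity calculus. The class and the three-unit basis (metre, kilogram, second) are assumptions of this sketch, not anything defined by SI.

```python
class Quantity:
    """A numerical value {Z} paired with a unit [Z], stored as
    exponents of three base units: (metre, kilogram, second)."""
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims

    def __mul__(self, other):   # units combine: exponents add
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):   # only like terms may be added
        if self.dims != other.dims:
            raise TypeError("cannot add unlike units")
        return Quantity(self.value + other.value, self.dims)

Z = Quantity(2.0, (1, 0, 0))    # 2 metres
W = Quantity(3.0, (0, 0, 1))    # 3 seconds
P = Z * W                       # 6 metre-seconds, as in Gamow's example
print(P.value, P.dims)          # 6.0 (1, 0, 1)
```

Trying `Z + W` raises a TypeError, mirroring the rule that a length cannot be added to a time.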
One example of the importance of agreed units is the failure of the NASA Mars Climate Orbiter, which was accidentally destroyed on a mission to Mars in September 1999 (instead of entering orbit) due to miscommunications about the value of forces: different computer programs used different units of measurement (newton versus pound force). Considerable amounts of effort, time, and money were wasted. On 15 April 1999, Korean Air cargo flight 6316 from Shanghai to Seoul was lost due to the crew confusing tower instructions (in metres) and altimeter readings (in feet). Three crew and five people on the ground were killed. Thirty-seven were injured.^[18]^[19] In 1983, a Boeing 767 (which thanks to its pilot's gliding skills landed safely and became known as the Gimli Glider) ran out of fuel in mid-flight because of two mistakes in figuring the fuel supply of Air Canada's first aircraft to use metric measurements.^[20] This accident was the result of both confusion due to the simultaneous use of metric and Imperial measures and confusion of mass and volume measures. When planning his journey across the Atlantic Ocean in the 1480s, Columbus mistakenly assumed that the mile referred to in the Arabic estimate of 56+2/3 miles for the size of a degree was the same as the actually much shorter Italian mile of 1,480 metres. His estimate for the size of the degree and for the circumference of the Earth was therefore about 25% too small.^[21]^:1^:17
Stewart's Theorem | Brilliant Math & Science Wiki

In geometry, Stewart's theorem yields a relation between the side lengths and a cevian length of a triangle. It can be proved from the law of cosines as well as by the famous Pythagorean theorem. Its name is in honor of the Scottish mathematician Matthew Stewart, who published the theorem in 1746 when he was believed to be a candidate to replace Colin Maclaurin as Professor of Mathematics at the University of Edinburgh. In \(\triangle ABC\), point \(D\) is a point on \(BC\) and \(AB=c, AC=b, BD=u, DC=v, AD=t.\) Stewart's theorem states that in this triangle, the following equation holds: \[b^2u + c^2v = a(t^2 + uv),\] where \(a = BC = u + v.\)

Proof by the Law of Cosines

By the law of cosines, we have \[\begin{aligned} b^2&=v^2+t^2-2vt\cos \theta &\qquad (1)\\ c^2&=u^2+t^2+2ut\cos \theta. &\qquad (2) \end{aligned}\] Now multiply (1) by \(u\) and multiply (2) by \(v\) to eliminate \(\cos \theta \): \[\begin{aligned} b^2u&=uv^2+ut^2-2uvt\cos \theta &\qquad (3)\\ c^2v&=u^2v+vt^2+2uvt\cos \theta. &\qquad (4) \end{aligned}\] Taking \((3)+(4)\) gives \[\begin{aligned} b^2u+c^2v&=uv(u+v)+t^2(u+v)\\ \Rightarrow t^2&=\frac{b^2u+c^2v}{u+v} -uv.\ _\square \end{aligned}\] Stewart's theorem can sometimes be rewritten as \(b^2u+c^2v=(u+v)(uv+t^2)\).

Proof by the Pythagorean Theorem

The proof below assumes \( \angle B \) and \( \angle C \) are both acute and \( u < v \), as in the figure above. Then we have \[\begin{aligned} t^2 &= h^2 + x^2 \\ b^2 &= h^2 + (v-x)^2 \Rightarrow b^2u = h^2u + uv^2 - 2uvx + ux^2 \\ c^2 &= h^2 + (u+x)^2 \Rightarrow c^2v = h^2v + u^2v + 2uvx + vx^2, \end{aligned}\] which implies \[\begin{aligned} b^2u + c^2v &= h^2u + h^2v + uv^2 + u^2v - 2uvx + 2uvx + ux^2 + vx^2 \\ &= (u + v)(h^2 + uv + x^2) \\ &= (u + v)(t^2 + uv) \\ &= a \cdot (t^2 + uv). \ _\square \end{aligned}\]

Special case where \( \Delta ABC \) is Isosceles

In the case where \( \Delta ABC \) is isosceles (see figure above) with \(b = c\), Stewart's theorem has a more simplified form: \[ a \cdot (t^2 + uv) = b^2u + c^2v = b^2u + b^2v = b^2 (u + v) = ab^2 \ \Rightarrow\ b^2 = t^2 + uv.
\]

This theorem is quite useful in calculating the lengths of standard cevians such as the median and the angle bisector.

In triangle \(ABC\), \( \angle B = 90^\circ, BE = 3, \) and \(BD = 4 \). If \( AE = ED = DC \), then the value of \(AC\) can be expressed as \( a \sqrt{b} \), where \(a\) and \(b\) are positive integers and \(b\) is square-free. Find \( a+b \).

Triangle \(ABC\) with its centroid at \(G\) has side lengths \(AB=15, BC=18, AC=25\). \(D\) is the midpoint of \(BC\). The length of \(GD\) can be expressed as \( \frac{a\sqrt{d}}{b} \), where \(a\) and \(b\) are coprime positive integers and \(d\) is a square-free positive integer. Find \( a + b + d + 1 \).

In \(\triangle ABC\), \(AC=BC\), point \(D\) is on \(BC\) such that \(CD=3\times BD\), and \(E\) is the midpoint of \(AD\) such that \(CE=\sqrt 7\) and \(BE=3\). If the area of \(\triangle ABC\) is \(m\sqrt n\), where \(m\) and \(n\) are positive integers and \(n\) is square-free, find \(m+n\).

Let \(ABCD\) be a square, and let \(E\) and \(F\) be points on \(AB\) and \(BC,\) respectively. The line through \(E\) parallel to \(BC\) and the line through \(F\) parallel to \(AB\) divide \(ABCD\) into two squares and two non-square rectangles. The sum of the areas of the two squares is \(\frac{9}{10}\) of the area of square \(ABCD.\) Find \(\frac{AE}{EB}+\frac{EB}{AE}.\)

A cyclic quadrilateral \(ABCD\) is constructed within a circle such that \(AB = 3, BC = 6,\) and \(\triangle ACD\) is equilateral, as shown to the right. If \(E\) is the intersection point of both diagonals of \(ABCD\), what is the length of \(ED,\) the blue line segment in the diagram?
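Stewart's relation \(b^2u+c^2v=(u+v)(uv+t^2)\) is easy to sanity-check numerically. The Python sketch below solves it for the cevian length \(t\); the 3-4-5 triangle and its median are an illustrative example, not one of the problems above.

```python
import math

def cevian_length(b, c, u, v):
    """Length t of cevian AD, given AC = b, AB = c, BD = u, DC = v (a = u + v)."""
    a = u + v
    return math.sqrt((b * b * u + c * c * v) / a - u * v)

# Median to the hypotenuse of a 3-4-5 right triangle (u = v = 2.5):
# it should equal half the hypotenuse.
print(cevian_length(b=4.0, c=3.0, u=2.5, v=2.5))  # 2.5
```

This matches the classical fact that the median to the hypotenuse of a right triangle is half the hypotenuse.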
Analog To Digital Conversion – Practical Considerations - Electronics-Lab.com

Kamran Jalilinia

In the previous articles about A/D conversion, we considered the concepts of ‘sampling’, ‘quantization’, and ‘binary encoding’ as building blocks of a PCM system. Now, we review those concepts from a practical perspective. To perform the analog-to-digital conversion, the first stage is ‘sampling’. There are different regular sampling methods: • Ideal Sampling: In ideal sampling, the analog signal is sampled instantaneously by sampling impulses with near-zero durations. This is an ideal method that is usually considered in the theoretical model of sampling and cannot be easily implemented. We have already considered it. • Flat-top sampling: In this sampling technique, the analog signal is sampled instantaneously by sampling pulses with finite durations and the top of the samples remains constant by using a circuit. This is the most commonly applied sampling method. In practical ADCs, the quantization and the binary encoding procedures take a finite amount of time, which is called conversion time. This is the time required by the ADC to perform a complete conversion process. For ultimate accuracy, then, it is important that the measured input waveform does not change and hence, the sampled signal amplitude has to be held constant during the conversion interval. To do this, specialized sub-circuits called Sample-and-hold (S/H) are used which can be logically represented in Figure 1. Figure 1: A Sample & Hold symbolic circuit The electronic switch may be a relay, a bipolar transistor, a FET, or a MOSFET controlled by a gating signal. The capacitor holds the sampled measurement of the analog signal for at most ‘T[c]’ seconds while a quantized sample is available as an n-bit binary code at the output of the analog-to-digital converter.
Obviously, conversion time (T[c]) is less than the sampling period (T). In order to create an S/H, a pair of high-impedance buffers are normally used, along with an electronic switch element and a capacitor to hold the charge. A more realistic sample-and-hold circuit is shown in Figure 2. Figure 2: A Sample & Hold realistic circuit In this mechanism, the voltage-follower configuration is used as a building block which is a special case of an op-amp circuit. As shown in Figure 3, in a voltage-follower configuration, all of the output voltage is fed back to the inverting (-) input of the op-amp by a straight connection. The input signal is also inserted into the noninverting terminal of the op-amp. Figure 3: An op-amp voltage-follower configuration In this configuration, the straight feedback loop has a voltage gain of 1. Then, the overall closed-loop voltage gain of a noninverting amplifier is 1 as explained in Equation 1. Equation 1: The overall voltage gain of the voltage-follower The most important features of the voltage-follower configuration are its very high input impedance and its very low output impedance. These features make it a nearly ideal buffer amplifier for interfacing high-impedance sources and low-impedance loads. In the Sample-and-hold (S/H) mechanism in Figure 2, the MOSFET element (called Q) acts as a simple switch controlled by the gate terminal. When Q is turned on, it provides a low-impedance path to store the analog voltage sample across the capacitor C. When Q is off, the capacitor C does not have a complete path to discharge through and, hence, keeps the sampled voltage. Figure 4 shows the waveforms of the flat-top sampling method. The name comes from the shape of the final waveform. Figure 4: Flat-top sampling waveforms (a) analog signal; (b) sampling pulses; (c) sampled output A sample-and-hold (S/H) system is available on a single monolithic chipset in the market, with the storage capacitor added externally.
After converting the analog signal from continuous form to a discrete-time signal (with a mechanism like the S/H circuit in Figure 2), it is possible to convert it to digital form by an ADC method. There are several types of ADCs used in different applications. One of the best-known circuit configurations for A/D systems is the Flash conversion method, which is generally used for high-speed tasks such as video applications, in which sampling rates of hundreds of megahertz are common today.

The Flash ADC utilizes comparators in its mechanism. Essentially, an analog comparator has two input voltages V[1] and V[2], and one output voltage V[o]. It can be implemented by an open-loop op-amp circuit as shown in Figure 5.

Figure 5: An op-amp comparator

Often, one input (V[2]) is a constant reference voltage (V[ref]) and the other is a time-varying signal (V[1]). As Equation 2 explains, the input signal of the comparator (V[i]) is the difference between the two signals V[1] and V[ref].

Equation 2: Calculation of the input signal of the comparator

The ideal comparator has the voltage transfer characteristic shown in Figure 6.

Figure 6: Transfer characteristic of an ideal comparator

Clearly, the input is compared with the reference and the output takes one of two states, V[Low] or V[High]: the output is a constant voltage V[o] = V[Low] if V[i] < 0 and a different constant voltage V[o] = V[High] if V[i] > 0. Equation 3 expresses this.

Equation 3: The output voltage of an ideal comparator

The block diagram of a simple 2-bit flash ADC is shown in Figure 7. The circuitry of a Flash ADC consists of a precision resistor ladder network connected to a set of analog comparators, and a binary priority encoder. The priority encoder is a combinational logic device (built from logic gates) that produces a binary number on its output representing the highest-valued active input.
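The ideal comparator of Equation 3 is easy to model in software. The sketch below is our illustration (the voltage levels are arbitrary placeholders): the output snaps to V[High] or V[Low] depending on the sign of V[i] = V[1] − V[ref].

```python
def comparator(v1, v_ref, v_high=5.0, v_low=0.0):
    # Ideal comparator (Equation 3): output V_High when the noninverting
    # input exceeds the reference on the inverting input, else V_Low.
    return v_high if (v1 - v_ref) > 0 else v_low

print(comparator(2.0, 1.0))  # input above reference -> V_High
print(comparator(0.5, 1.0))  # input below reference -> V_Low
```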
Figure 7: A 2-bit Flash A/D converter

The input signal (V[in]) is passed to a number of analog comparators in parallel, i.e., a bank of operational amplifiers. A reference voltage source (V[ref]) supplies the resistive voltage-divider network, and a threshold voltage (V[th]) for each comparator is set by this network. Effectively, there is one comparator for each level. The input signal serves as one of the inputs to each of the comparators and is connected to the noninverting terminal of the op-amps. The second input for each comparator, connected to the inverting terminal of the op-amps, is a threshold voltage (V[th]), different for each comparator.

The input voltage signal to be digitized is applied to all of the comparators simultaneously. Hence, the input voltage is compared to all threshold voltages simultaneously. The analog comparators indicate whether the input voltage is above or below the threshold at each level. If the input signal level (V[in]) on the positive input of a comparator is greater than the level on its negative input, the output will be High; otherwise, the comparator outputs a Low voltage. When a given signal is applied, the comparators towards the bottom of the string, for which V[in] is greater than their references, will produce V[High]; conversely, the comparators towards the top, for which V[in] is below their references, will produce V[Low].

The set of comparator outputs is fed into a priority encoder that turns this simple sequence (C[1] C[2] C[3]) into a normal binary word (B[1] B[0]). The final 2-bit codes are the digital equivalent of the original analog input signal. The bits at the output of the coding network can then be entered into a flip-flop register for storage.

For an n-bit converter, we usually need 2^n resistors with equal resistances. The number of bits the flash ADC produces per binary word (n) determines the number of comparators used.
The number of comparators needed for n-bit A/D conversion is calculated by Equation 4.

Equation 4: Calculation of the number of comparators for n-bit A/D

For example, a 4-bit flash ADC would require 2^4 – 1 = 15 comparators to implement. The structure of the circuit in Figure 7 can be extended to a higher number of bits. The threshold voltage (V[th]) for each comparator can be found by Equation 5.

Equation 5: Calculation of threshold voltages

where k is the index number of the threshold voltage. Referring to the circuit in Figure 7, for the present case of a two-bit A/D converter (n = 2), the threshold voltages of the three comparators will be V[ref]/4, V[ref]/2, and 3V[ref]/4. The output status of the various comparators depends upon the input signal V[in]. For instance, when the input level V[in] lies between V[ref]/4 and V[ref]/2, the C[1] output is High whereas the C[2] and C[3] outputs are both Low. The final results of the priority encoder are summarized in Table 1.

Table 1: The 2-bit Flash ADC Truth table

In practice, the op-amps should have good stability against temperature changes and supply voltage variations. As the main advantage, flash ADCs are the fastest converters because the internal comparisons are performed in parallel, at the same time. As the main disadvantage, the hardware complexity increases rapidly for large word lengths. Firstly, the resistors in the voltage-divider chain have to be matched and manufactured with high precision for accuracy. Secondly, the number of comparators grows fast as the number of bits is increased: to build a flash ADC circuit with 8 bits, we need 255 (= 2^8 – 1) comparators! The power dissipation of the circuit will also be considerable. Therefore, flash converters are limited in the number of bits they can accommodate, generally from 4-bit up to 10-bit digital output word lengths.

• The voltage-follower is a closed-loop, noninverting op-amp with a voltage gain of 1.
• In this configuration, the op-amp will not draw current from the signal source and will not load it down, because of its very high input resistance.
• The storage time of the capacitor is called the A/D conversion time because it is during this time that the ADC converts the sample voltage to a binary code.
• The primary benefit of an S/H amplifier is that it stores the analog voltage during the sampling interval. With the voltage on the capacitor held constant during the conversion, quantizing will be accurate.
• Parallel ADC or Flash converter – uses many comparators connected in parallel, each with a different reference voltage. The outputs of these comparators are then fed into a priority encoder, which provides a binary output based on which comparator outputs are high and which are low.
• The voltages tapped from the terminals of the resistors, which establish threshold voltages for each allowed quantization level, are compared with the input voltage.
• The threshold voltages used by the comparators are in general V[ref]/2^n, 2V[ref]/2^n, 3V[ref]/2^n, 4V[ref]/2^n, and so on. Here, V[ref] is the maximum amplitude of the analog signal that the A/D converter can digitize, and n is the number of bits in the digitized output.
• Flash-type (or parallel) ADCs are the fastest due to their short conversion time and can therefore be used for high sampling rates.
• In this scheme, an 8-bit flash A/D converter requires 255 comparators. The cost of long-word-length flash converters escalates as the circuit complexity increases and the number of analog comparators rises as 2^n – 1.
• On the other hand, the larger n is, the more complex the priority encoder becomes. So, it is difficult and expensive to build the circuit for large word lengths; thus, flash converters have limited word length (10 bits or less).
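Putting Equations 4 and 5 and the priority encoder together, the whole n-bit flash converter can be modeled in a few lines. This is our own sketch, not a vendor design; the priority encoding is done by counting the High comparator outputs, which is equivalent for the thermometer code a flash comparator bank produces.

```python
def flash_adc(v_in, v_ref, n_bits):
    """Model an n-bit flash ADC: 2**n - 1 comparators with thresholds
    k*v_ref/2**n for k = 1 .. 2**n - 1 (Equation 5), followed by a
    priority encoder that outputs the quantization level in binary."""
    levels = 2 ** n_bits
    thresholds = [k * v_ref / levels for k in range(1, levels)]  # 2**n - 1 thresholds
    comparator_outputs = [v_in > th for th in thresholds]        # thermometer code
    # Priority encoder: the number of High outputs is the quantization level.
    code = sum(comparator_outputs)
    return format(code, '0%db' % n_bits)

# 2-bit converter with v_ref = 4 V: thresholds at 1 V, 2 V, 3 V (Figure 7).
print(flash_adc(1.5, 4.0, 2))  # V_in between v_ref/4 and v_ref/2: only C1 is High
```

Running the 2-bit example for inputs in each of the four bands reproduces the rows of Table 1.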
Sparsity and the Bayesian perspective

A&A 552, A133 (2013)
A&A Volume 552, April 2013, Article Number A133, 5 pages
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201321257
Published online 16 April 2013

^1 Laboratoire AIM, UMR CEA-CNRS-Paris 7, Irfu, Service d'Astrophysique, CEA Saclay, 91191 Gif-sur-Yvette Cedex, France; e-mail: jstarck@cea.fr
^2 Department of Statistics, Stanford University, Stanford CA, 94305, USA
^3 GREYC CNRS-ENSICAEN-Université de Caen, 6 Bd du Maréchal Juin, 14050 Caen Cedex, France
^4 Laboratoire d'Astrophysique, École Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland

Received: 7 February 2013; Accepted: 15 February 2013

Sparsity has recently been introduced in cosmology for weak-lensing and cosmic microwave background (CMB) data analysis for different applications such as denoising, component separation, or inpainting (i.e., filling the missing data or the mask). Although it gives very nice numerical results, CMB sparse inpainting has been severely criticized by top researchers in cosmology using arguments derived from a Bayesian perspective. In an attempt to understand their point of view, we realize that interpreting a regularization penalty term as a prior in a Bayesian framework can lead to erroneous conclusions. This paper is by no means against the Bayesian approach, which has proven to be very useful for many applications, but warns against a Bayesian-only interpretation in data analysis, which can be misleading in some cases.

Key words: methods: statistical / cosmic background radiation / methods: data analysis

© ESO, 2013

1. Introduction

Bayesian methodology is extremely popular in cosmology. It proposes a very elegant framework for dealing with uncertainties and for using our knowledge in the form of priors in order to solve a given inverse problem (Hobson et al. 2010; Mielczarek et al. 2009).
The huge success of Bayesian methods in cosmology is well illustrated in Trotta (2008) with a figure showing the number of papers with the word "Bayesian" in the title as a function of publication year. Bayesian techniques have been used for many applications such as model selection (Kilbinger et al. 2010; Trotta 2012), primordial power spectrum analysis (Kawasaki & Sekiguchi 2010), galactic survey design (Watkinson et al. 2012), or cosmological parameter estimation (March et al. 2011). Bayesian methods are now commonly used at almost every step of the cosmic microwave background (CMB) experiment pipelines: for point source removal (Argüeso et al. 2011; Carvalho et al. 2012), noise level estimation (Wehus et al. 2012), component separation (Dickinson et al. 2009), cosmological parameter estimation (Efstathiou et al. 2010), non-Gaussianity studies (Elsner & Wandelt 2010; Feeney et al. 2012), or inpainting (Bucher & Louis 2012; Kim et al. 2012).

Sparsity has recently been proposed for CMB data analysis for component separation (Bobin et al. 2013) and inpainting (Abrial et al. 2007; Abrial et al. 2008; Starck et al. 2013). The sparsity-based inpainting approach has been successfully used for two different CMB studies: CMB weak-lensing on Planck simulated data (Perotto et al. 2010; Plaszczynski et al. 2012), and the analysis of the integrated Sachs-Wolfe effect (ISW) on WMAP data (Dupé et al. 2011; Rassat et al. 2012). In both cases, the authors showed using Monte-Carlo simulations that the statistics derived from the inpainted maps can be trusted at a high confidence level, and that sparsity-based inpainting can indeed provide an easy and effective solution to the problem of large Galactic masks. However, even if these simulations have shown that sparsity-based inpainting does not destroy CMB weak-lensing, ISW signals, or large-scale anomalies in the CMB, the CMB community remains very reluctant to use this concept.
This has led to very animated discussions in conferences. During these discussions it has emerged that cosmologists often resort to a Bayesian interpretation of sparsity: they reduce sparse regularization to the maximum a posteriori (MAP) estimator under the assumption that the solution follows a Laplacian distribution. From this perspective, several strong arguments against the use of sparsity for CMB analysis were raised:

1. Sparsity consists in assuming an anisotropic and a non-Gaussian prior, which is unsuitable for the CMB, which is Gaussian and isotropic.
2. Sparsity violates rotational invariance.
3. The ℓ[1] norm that is used for sparse inpainting arose purely out of expediency because under certain circumstances it reproduces the results of the ℓ[0] pseudo-norm (which arises naturally in the context of strict, as opposed to weak, sparsity) without requiring combinatorial optimization.
4. There is no mathematical proof that sparse regularization preserves/recovers the original statistics.

The above arguments result from a Bayesian point of view of the sparsity concept. In this paper we explain in detail why the above arguments are not rigorously valid and why sparsity is not in contradiction with a Bayesian interpretation.

2. Sparse regularization of inverse problems

2.1. Inverse problem regularization

Many data processing problems can be formalized as a linear inverse problem,

y = Ax + ε,   (1)

where y ∈ F^m is a vector of noisy measurements (real with F = R, or complex with F = C), ε is an m-dimensional vector of additive noise, x is the perfect n-dimensional unobserved vector of interest, and A: F^n → F^m is a linear operator. For example, the inpainting problem corresponds to the case where we want to recover some missing data, in which case A is a binary matrix with fewer rows than columns (m < n) that contains only one value equal to 1 per row, with all other values equal to 0. Finding x when the values of y and A are known is a linear inverse problem.
When it does not have a unique and stable solution, it is ill-posed and a regularization is necessary to reduce the space of candidate solutions. A very popular regularization in astronomy is the well-known Bayesian maximum entropy method (MEM), which is based on the principle that we want to select the simplest solution that fits the data. Sparsity has recently emerged as a very powerful approach for regularization (Starck et al. 2010).

2.2. Strict and weak sparsity

A signal x, considered as a vector in F^n, is said to be sparse if most of its entries are equal to zero. If only k of the n entries are non-zero, where k < n, then the signal is said to be k-sparse. Generally, signals are not sparse in direct space, but can be sparsified by transforming them to another domain. Think for instance of a purely sinusoidal signal, which is 1-sparse in the Fourier domain while it is clearly dense in the original one. In the so-called sparsity synthesis model, a signal can be represented as the linear expansion

x = Φα = Σ_{i=1}^{t} φ[i] α[i],   (2)

where α are the synthesis coefficients of x, and Φ = (φ[1],...,φ[t]) is the dictionary whose columns are the t elementary waveforms φ[i], also called atoms. In the language of linear algebra, the dictionary Φ is an n × t matrix whose columns are the atoms, supposed here to be normalized to a unit ℓ[2]-norm, i.e., ∀i ∈ [1,t], ‖φ[i]‖[2]^2 = Σ_{u=1}^{n} |φ[i][u]|^2 = 1.

A signal can be decomposed in many dictionaries, but the best one is the one with the sparsest (most economical) representation of the signal. In practice, it is convenient to use dictionaries with fast implicit transforms (such as the Fourier transform, wavelet transforms, etc.) which allow us to directly obtain the coefficients and reconstruct the signal from these coefficients using fast algorithms running in linear or almost linear time (unlike matrix-vector multiplications). The Fourier, wavelet, and discrete cosine transforms are among the most popular dictionaries.
Most natural signals, however, are not exactly sparse but rather concentrated near a small set. Such signals are termed compressible, or weakly sparse, in the sense that the sorted magnitudes |α[(1)]| ≥ |α[(2)]| ≥ ... ≥ |α[(t)]| of the sequence of coefficients α decay quickly according to a power law, i.e., |α[(i)]| ≤ C i^(−r), where C is a constant. The larger r is, the faster the amplitudes of the coefficients decay, and the more compressible the signal is. In turn, the non-linear ℓ[2] approximation error of α (and x) from its M largest entries in magnitude also decreases quickly. One can think, for instance, of the wavelet coefficients of a smooth signal away from isotropic singularities, or the curvelet coefficients of a piecewise regular image away from smooth contours. A comprehensive account of sparsity of signals and images can be found in Starck et al. (2010).

2.3. Sparse regularization for inverse problems

In the following, for a vector z we denote ‖z‖[p]^p = Σ[i] |z[i]|^p for p ≥ 0. In particular, for p ≥ 1, this is the pth power of the ℓ[p] norm, and for p = 0, we get the ℓ[0] pseudo-norm, which counts the number of non-zero entries in z. The ℓ[0] regularized problem amounts to minimizing

α̃ ∈ argmin[α] ‖y − AΦα‖[2]^2 + λ‖α‖[0],   (3)

where λ is a regularization parameter. A solution x̃ is reconstructed as x̃ = Φα̃. Clearly, the goal of Eq. (3) is to minimize the number of non-zero coefficients describing the sought-after signal while ensuring that the forward model is faithful to the observations.

Fig. 1 Left: amplitude (absolute value) of the spherical harmonic coefficients versus their index, when the coefficients are sorted from the largest amplitude to the smallest. Right: same plot with the y-axis in log.

Solving Eq. (3) is however known to be NP-hard. The ℓ[1] norm has been proposed as a tight convex relaxation of Eq. (3), leading to the minimization problem

α̃ ∈ argmin[α] ‖y − AΦα‖[2]^2 + λ‖α‖[1],   (4)

where λ is again a regularization parameter, different from that of Eq. (3).
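Problem (4) can be minimized with simple iterative schemes. As an illustration (ours, not the paper's own code), the sketch below implements iterative soft-thresholding (ISTA) for the composite operator M = AΦ; the toy check uses M = I, for which the minimizer has the closed form soft_threshold(y, λ/2).

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, M, lam, n_iter=200):
    """Iterative soft-thresholding for min_a ||y - M a||_2^2 + lam*||a||_1,
    with M = A Phi in the notation of Eq. (4)."""
    # Step size 1/L, with L the Lipschitz constant of the gradient 2 M^T (M a - y).
    L = 2.0 * np.linalg.norm(M, 2) ** 2
    a = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * M.T @ (M @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy check with M = I: the minimizer is soft_threshold(y, lam/2) in closed form.
y = np.array([3.0, 0.1, -2.0])
a_hat = ista(y, np.eye(3), lam=1.0)
```

Note how the small coefficient (0.1) is set exactly to zero while the large ones are merely shrunk: this is the mechanism by which the ℓ[1] penalty promotes sparse solutions.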
There has been a tremendous amount of work in which researchers spanning a wide range of disciplines have studied the structural properties of minimizers of Eq. (4) and its equivalence with Eq. (3). Equation (4) is computationally appealing and can be solved efficiently, and it has also been proved that under appropriate circumstances, Eq. (4) produces exactly the same solutions as Eq. (3); see e.g., Donoho (2006b) and the overview in the monograph Starck et al. (2010).

3. Sparsity prior and Bayesian prior

3.1. Bayesian framework

In the Bayesian framework, a prior is imposed on the object of interest through a probability distribution. For instance, assume that the coefficients α are i.i.d. Laplacian with scale parameter τ, i.e., the density P[α](α) ∝ e^(−τ‖α‖[1]), and that the noise ε is zero-mean white Gaussian with variance σ^2, i.e., the conditional density P[Y|α](y) = (2πσ^2)^(−m/2) e^(−‖y − AΦα‖[2]^2 / (2σ^2)). By traditional Bayesian arguments, the MAP estimator is obtained by maximizing the conditional posterior density P[α|Y](α) ∝ P[Y|α](y) P[α](α), or equivalently by minimizing its anti-log version

min[α] (1/(2σ^2)) ‖y − AΦα‖[2]^2 + τ‖α‖[1].   (5)

This is exactly Eq. (4) with the identification λ = 2σ^2 τ. This resemblance has led Bayesian cosmologists to raise the four criticisms mentioned in the introduction. But as we will discuss shortly, their central argument is not used in the right sense, which can yield misleading conclusions.

3.2. Should ℓ[1] regularization be the MAP?

In Bayesian cosmology, the following shortcut is often made: if a prior is at the basis of an algorithm, then to use this algorithm, the resulting coefficients must be distributed according to this prior. But this is a false logical chain in general, and high-dimensional phenomena completely invalidate it. For instance, Bayesian cosmologists claim that ℓ[1] regularization is equivalent to assuming that the solution is Laplacian and not Gaussian, which would be unsuitable for the case of CMB analysis.
This argument, however, assumes that a MAP estimate follows the distribution of the prior. But it is now well established that MAP solutions substantially deviate from the prior model, and that the disparity between the prior and the effective distribution obtained from the true MAP estimate is a permanent contradiction in Bayesian MAP estimation (Nikolova 2007). Even the supposedly correct ℓ[2] prior would yield an estimate (Wiener, which coincides with the MAP and posterior conditional mean) whose covariance is not that of the prior.

In addition, rigorously speaking, this MAP interpretation of ℓ[1] regularization is not the only possible one. More precisely, it was shown in Gribonval et al. (2011a) and Baraniuk et al. (2010) that solving a penalized least squares regression problem with penalty ψ(α) (e.g., the ℓ[1] norm) should not necessarily be interpreted as assuming a Gibbsian prior C exp(−ψ(α)) and using the MAP estimator. In particular, for any prior P[α], the conditional mean can also be interpreted as a MAP with some prior C exp(−ψ(α)). Conversely, for certain penalties ψ(α), the solution of the penalized least squares problem is indeed the conditional posterior mean, with a certain prior P[α](α) which is generally different from C exp(−ψ(α)). In summary, the MAP interpretation of such penalized least-squares regression can be misleading: using MAP estimation, the solution does not necessarily follow the prior distribution, and an incorrect prior does not necessarily lead to a wrong solution. What we are claiming here are facts that were stated and proved as rigorous theorems in the literature.

3.3.
Compressed sensing: the Bayesian interpretation inadequacy

A beautiful example to illustrate this is the compressed sensing scenario (Donoho 2006a; Candès & Tao 2006), which tells us that a k-sparse or compressible n-dimensional signal x can be recovered, either exactly or to a good approximation, from many fewer random measurements m than the ambient dimension n, provided m is sufficiently larger than the intrinsic dimension of x. Clearly, the underdetermined linear problem y = Ax, where A is drawn from an appropriate random ensemble, with fewer equations than unknowns, can be solved exactly or approximately if the underlying object x is sparse or compressible. This can be achieved by solving a computationally tractable ℓ[1]-regularized convex optimization program.

If the underlying signal is exactly sparse, in a Bayesian framework this would be a completely absurd way to solve the problem, since the Laplacian prior is very different from the actual properties of the original signal (i.e., k coefficients different from zero). In particular, what compressed sensing shows is that prior A can be completely true but utterly impossible to use, for computation time or any other reason, and we can use prior B instead and get the correct results. Therefore, from a Bayesian point of view, it is rather difficult to understand not only that the ℓ[1] norm is adequate, but also that it leads to the exact solution.

3.4. The prior misunderstanding

Taking the chosen dictionary Φ to be the spherical harmonic transform, the coefficients α are now α = {a[l,m]}, l = 0,...,l[max], m = −l,...,l. The standard CMB theory assumes that the values of a[l,m] are realizations of a collection of heteroscedastic complex zero-mean Gaussian random variables with variances C[l]/2, where C[l] is the true power spectrum. Using an ℓ[1] norm is then interpreted in the Bayesian framework as placing a Laplacian prior on each a[l,m], which contradicts the underlying CMB theory.
However, as argued above, from the regularization point of view the ℓ[1] norm merely promotes a solution x whose spherical harmonic coefficients are (nearly) sparse. There is no assumption at all on the properties of any specific a[l,m]. In fact, there is no randomness here, and the a[l,m] values do not even have to be interpreted as realizations of random variables. Therefore, whatever the underlying distribution of a given a[l,m] (if it exists), it need not be interpreted as Laplacian under ℓ[1] regularization. The CMB can be Gaussian or not, isotropic or not, and there will be no contradiction with the principle of using the ℓ[1] norm to solve the reconstruction problem (e.g., inpainting). What matters is that the sorted absolute values of the CMB spherical harmonic coefficients present a fast decay. This is easily verified using a CMB map, data, or simulation, and is well illustrated by Fig. 1, which shows this decay for the nine-year WMAP data set. As we can see, the compressibility assumption is completely valid.

4. Discussion of the Bayesian criticisms

• Sparsity consists in assuming an anisotropic and a non-Gaussian prior, which is unsuitable for the CMB, which is Gaussian and isotropic. We have explained in the previous section that this MAP interpretation of ℓ[1] regularization is misleading, and also that there is no assumption at all on the underlying stochastic process. CMB sparsity-based recovery is a purely data-driven regularization approach: the sorted absolute values of the spherical harmonic coefficients present a fast decay, as seen on real data or simulations, and this motivates the sparse regularization. There is no assumption that the CMB is Gaussian or isotropic, but there is also no assumption that it is non-Gaussian or anisotropic.
In this sense, using ℓ[1]-regularized inpainting to test whether the CMB is indeed Gaussian and isotropic may be better than other methods, including Wiener filtering, which in the Bayesian framework assumes Gaussianity and isotropy. Furthermore, the Wiener estimator also requires knowing the underlying power spectrum (i.e., the theoretical C[l]), which is an even stronger prior.

• Sparsity violates rotational invariance. The criticism here is that linear combinations of independent exponentials are not independent exponentials; therefore, isotropy is necessarily violated unless the a[lm] are Gaussian. But again, our arguments for ℓ[1] regularization are borrowed from approximation theory and harmonic analysis, and this does not contradict the idea that the a[lm] coefficients can be realizations of a sequence of heteroscedastic Gaussian variables. The ℓ[1] norm regularization will be adequate if the sorted coefficient amplitudes follow a fast decay, which is always verified with CMB data. Indeed, as we already mentioned, a set of parameters x[i], where each x[i] is a realization of a Gaussian process of mean 0 and variance V[i], can present a fast decay when we plot the sorted absolute values of x[i]. In the case of CMB spherical harmonic coefficients, verifying this is straightforward.

• The ℓ[1] norm that is used for sparse inpainting arose purely out of expediency because under certain circumstances it reproduces the results of the ℓ[0] pseudo-norm. First, we would like to correct a possible misunderstanding: ℓ[1] regularization can provably recover both strictly and weakly sparse signals, while being stable to noise. In the strictly sparse case, ℓ[1] norm minimization can recover the right solution although the prior is not correct from a Bayesian point of view. In the compressible case, the recovery is up to a good approximation, as good as an oracle that would give the best M-term approximation of the unknown signal (i.e., the ℓ[0] solution).
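The claim that heteroscedastic Gaussian variables can be collectively compressible is easy to check numerically. The sketch below is our own illustration; the power-law variance profile is an arbitrary stand-in for a CMB-like spectrum, not the real C[l]. Each x[i] is individually Gaussian, yet the sorted amplitudes concentrate most of the energy in a few coefficients.

```python
import random

random.seed(0)

# Variances V_i decaying like a power law (an assumed toy profile,
# standing in for a CMB-like power spectrum).
n = 5000
variances = [1.0 / (i + 1) ** 2 for i in range(n)]

# Each x_i is Gaussian with mean 0 and variance V_i: individually
# Gaussian, yet collectively compressible.
x = [random.gauss(0.0, v ** 0.5) for v in variances]

sorted_amp = sorted((abs(v) for v in x), reverse=True)

# Fraction of total energy captured by the 100 largest amplitudes.
energy = sum(a * a for a in sorted_amp)
top_energy = sum(a * a for a in sorted_amp[:100])
print(top_energy / energy)
```

The printed fraction is close to 1: the sorted amplitudes decay fast even though no individual coefficient is Laplacian, which is exactly the point made above.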
What is criticized as an "expedient" prior is basically at the heart of regularization theory; for instance, here ℓ[1] provides strong theoretical recovery guarantees under appropriate conditions. A closer look at the literature on inverse problems shows that these guarantees are possible beyond the compressed sensing scenario.

• There is no mathematical proof that sparse regularization preserves/recovers the original statistics. This is true, but this argument is not specific to ℓ[1]-regularized inpainting. Even worse, as detailed and argued in the previous section, the posterior distribution generally deviates from the prior, and even if one uses the MAP with the correct Gaussian prior, the MAP estimate (Wiener) will not have the covariance of the prior. Another point to mention is that the CMB is never a pure Gaussian random field, even if the standard ΛCDM cosmological model is truly valid. We know that the CMB is at least contaminated by non-Gaussian secondary anisotropies such as weak-lensing or kinetic Sunyaev-Zel'dovich (SZ) effects. Therefore, the original CMB statistics are likely to be better preserved by an inpainting method that does not assume Gaussianity (but nonetheless allows it) than by a method with an explicit Gaussian assumption. Moreover, if the CMB is not Gaussian, then one can clearly anticipate that the Wiener estimate does not preserve the original statistics.

Finally, it appears unfair to criticize sparsity-regularized inverse problems on mathematical grounds. A quick look at the literature shows the vitality of the mathematics community, both pure and applied, and the large number of theoretical guarantees (in deterministic and frequentist statistical settings) that have been devoted to ℓ[1] regularization. In particular, theoretical guarantees for ℓ[1]-based inpainting can be found in King et al. (2013) on the Cartesian grid, and for sparse recovery on the sphere in Rauhut & Ward (2012).

5.
Conclusions

We have shown that the Bayesian cosmologists' criticisms of sparse inpainting are based on a false logical chain, which consists in assuming that if a prior is at the basis of an algorithm, then to use this algorithm, the resulting coefficients must be distributed according to this prior. Compressed sensing theory is a nice counter-example, where it is mathematically proved that a prior other than the true one can lead, under some conditions, to the correct solution. Therefore, we cannot understand how a regularization penalty impacts the solution of an inverse problem just by expressing the prior from which this penalty derives. To understand this, we also need to take into account the operator involved in the inverse problem, and this requires much deeper mathematical developments than a simple Bayesian interpretation. Compressed sensing theory shows that for some operators, beautiful geometrical phenomena allow us to recover the solution of an underdetermined inverse problem perfectly. Similar results were derived for a random sampling on the sphere (Rauhut & Ward 2012).

We do not claim in this paper that sparse inpainting is the best solution for inpainting, but we have shown that the arguments raised against it are incorrect, and that while Bayesian methodology offers a very elegant framework that is extremely useful for many applications, we should be careful not to be monolithic in the way we address a problem. In practice, it may be useful to use several inpainting methods to better understand the CMB statistics, and it is clear that sparsity-based inpainting does not require making any assumption about Gaussianity or isotropy, nor does it need a theoretical C[ℓ] as an input.

The authors thank Benjamin Wandelt, Mike Hobson, Jason McEwen, Domenico Marinucci, Hiranya Peiris, Roberto Trotta, and Tom Loredo for the useful and animated discussions.
This work was supported by the French National Agency for Research (ANR-08-EMER-009-01), the European Research Council grant SparseAstro (ERC-228261), and the Swiss National Science Foundation (SNSF).
{"url":"https://www.aanda.org/articles/aa/full_html/2013/04/aa21257-13/aa21257-13.html","timestamp":"2024-11-14T13:37:12Z","content_type":"text/html","content_length":"138221","record_id":"<urn:uuid:bf8589b6-41d2-4e71-9e0b-bc41d283d67e>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00760.warc.gz"}
SnakeByte[17] The Metropolis Algorithm | Theodore C. Tanner Jr. Frame grab from the movie Metropolis (1927): “Who told you to attack the machines, you fools? Without them you’ll all die!!” ~ Grot, the Guardian of the Heart Machine First, as always, Oh Dear Reader, i hope you are safe. There are many unsafe places in and around the world at the current time. Second, this blog is a SnakeByte[] based on something that i knew about but had no idea it went by this name. Third, relative to this, i must confess, Oh Dear Reader, i have a disease of the bibliomaniac kind. i have an obsession with books and reading. “They” say that belief comes first, followed by admission. There is a Japanese word that translates to having so many books you cannot possibly read them all. This word is tsundoku. From the website (if you click on the word): “Tsundoku dates from the Meiji era, and derives from a combination of tsunde-oku (to let things pile up) and dokusho (to read books). It can also refer to the stacks themselves. Crucially, it doesn’t carry a pejorative connotation, being more akin to bookworm than an irredeemable slob.” Thus, while perusing a math-related book site, i came across a monograph entitled “The Metropolis Algorithm: Theory and Examples” by C. Douglas Howard [1]. i was intrigued, and because it was 5 bucks (side note: i always try to buy used and loved books), i decided to throw it into the virtual shopping buggy. Upon receiving said monograph, i sat down to read it, and i was amazed to find it was closely related to something i was very familiar with from decades ago. This finally brings us to the current topic. The Metropolis Algorithm is a method in computational statistics used to sample from complex probability distributions. It is a type of Markov Chain Monte Carlo (MCMC) algorithm (i had no idea), which relies on Markov Chains to generate a sequence of samples that can approximate a desired distribution, even when direct sampling is complex.
Yes, let me say that again – i had no idea. Go ahead LazyWeb^TM laugh! So let us start with the Metropolis Algorithm and how it relates to Markov Chains. (Caveat Emptor: You will need to dig out those statistics books and a little linear algebra.) Markov Chains Basics A Markov Chain is a mathematical system that transitions from one state to another in a state space. It has the property that the next state depends only on the current state, not the sequence of states preceding it. This is called the Markov property. The algorithm was introduced by Metropolis et al. (1953) in a statistical physics context and was generalized by Hastings (1970). It was considered in the context of image analysis (Geman and Geman, 1984) and data augmentation (Tanner (I’m not related that i know of…) and Wong, 1987). However, its routine use in statistics (especially for Bayesian inference) did not take place until Gelfand and Smith (1990) popularised it. For modern discussions of MCMC, see e.g. Tierney (1994), Smith and Roberts (1993), Gilks et al. (1996), and Roberts and Rosenthal (1998b). Ergo, the name Metropolis-Hastings algorithm. Once again, i had no idea. A Markov Chain can be described by a set of states and the transition probabilities between them. The Goal: Sampling from a Probability Distribution In many applications (e.g., statistical mechanics, Bayesian inference, as mentioned), we are interested in sampling from a complex probability distribution π(x), often known only up to a normalizing constant. Ok Now: The Metropolis Algorithm The Metropolis Algorithm is one of the simplest MCMC algorithms to generate samples from π(x). The key steps of the algorithm are: Start with an initial guess x₀. Proposal Step: From the current state x, propose a new state x′ drawn from a proposal distribution q(x′|x). Acceptance Probability: Calculate the acceptance probability α = min(1, π(x′)q(x|x′) / [π(x)q(x′|x)]). In the case where the proposal distribution is symmetric (i.e., q(x′|x) = q(x|x′)), this simplifies to α = min(1, π(x′)/π(x)). Acceptance or Rejection: Generate a random number u uniform on [0, 1); accept the proposal if u < α, otherwise keep the current state. Repeat the proposal, acceptance, and rejection steps to generate a Markov Chain of samples.
Convergence and Stationary Distribution: Over time, as more samples are generated, the Markov Chain converges to a stationary distribution. The stationary distribution is the target distribution π(x). The Metropolis Algorithm is widely used in various fields such as Bayesian statistics, physics (e.g., in the simulation of physical systems), machine learning, and finance. It is especially useful for high-dimensional problems where direct sampling is computationally expensive or impossible. Key Features of the Metropolis Algorithm: • Simplicity: It’s easy to implement and doesn’t require knowledge of the normalization constant of π(x). • Flexibility: It works with a wide range of proposal distributions, allowing the algorithm to be adapted to different problem contexts. • Efficiency: While it can be computationally demanding, the algorithm can provide high-quality approximations to complex distributions with well-chosen proposals and sufficient iterations. The Metropolis-Hastings Algorithm is a more general version that allows for non-symmetric proposal distributions, expanding the range of problems the algorithm can handle.
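Before the sampler itself, the convergence claim can be seen directly for a tiny chain. This sketch (the 2×2 transition matrix is made up purely for illustration) applies the Markov property repeatedly and watches the state distribution settle onto the stationary distribution:

```python
import numpy as np

# A tiny 2-state Markov chain; row i gives the probabilities of moving
# out of state i. (The matrix is invented for this illustration.)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

dist = np.array([1.0, 0.0])   # start entirely in state 0
for _ in range(100):
    dist = dist @ P           # one step of the chain (Markov property)

# The stationary distribution pi satisfies pi = pi @ P; here pi = (5/6, 1/6).
print(dist)                   # converges to approximately [0.8333, 0.1667]
```

Whatever the starting distribution, repeated application of P drives it to the same fixed point, which is exactly what the Metropolis chain does with π(x) as its stationary distribution.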
Now let us code it up: i am going to assume the underlying distribution is Gaussian with a time-dependent mean μ(t).

```python
import numpy as np
import matplotlib.pyplot as plt

# Time-dependent mean function (example: sinusoidal pattern)
def mu_t(t):
    return 10 * np.sin(0.1 * t)

# Target distribution: Gaussian with time-varying mean mu_t and fixed variance
def target_distribution(x, t):
    mu = mu_t(t)
    sigma = 1.0  # Assume fixed variance for simplicity
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Metropolis Algorithm for time-series sampling
def metropolis_sampling(num_samples, initial_x, proposal_std, time_steps):
    samples = np.zeros(num_samples)
    samples[0] = initial_x

    # Iterate over the time steps
    for t in range(1, num_samples):
        # Propose a new state based on the current state
        x_current = samples[t - 1]
        x_proposed = np.random.normal(x_current, proposal_std)

        # Acceptance probability (Metropolis-Hastings step)
        acceptance_ratio = (target_distribution(x_proposed, time_steps[t]) /
                            target_distribution(x_current, time_steps[t]))
        acceptance_probability = min(1, acceptance_ratio)

        # Accept or reject the proposed sample
        if np.random.rand() < acceptance_probability:
            samples[t] = x_proposed
        else:
            samples[t] = x_current
    return samples

# Parameters
num_samples = 10000   # Total number of samples to generate
initial_x = 0.0       # Initial state
proposal_std = 0.5    # Standard deviation for proposal distribution
time_steps = np.linspace(0, 1000, num_samples)  # Time steps for temporal evolution

# Run the Metropolis Algorithm
samples = metropolis_sampling(num_samples, initial_x, proposal_std, time_steps)

# Plot the time series of samples and the underlying mean function
plt.figure(figsize=(12, 6))

# Plot the samples over time
plt.plot(time_steps, samples, label='Metropolis Samples', alpha=0.7)

# Plot the underlying time-varying mean (true function)
plt.plot(time_steps, mu_t(time_steps), label='True Mean')
plt.legend()
plt.show()
```

Output of Python Script Figure 1.0 Ok, what’s going on here?
For the Target Distribution: The function target_distribution(x, t) models a Gaussian distribution with mean μ(t) and fixed variance. Metropolis Algorithm: The metropolis_sampling function implements the Metropolis algorithm. It iterates over time, generating samples from the time-varying distribution. The acceptance probability is calculated using the target distribution at each time step. Proposal Distribution: A normal distribution centered around the current state with standard deviation proposal_std is used to propose new states. Temporal Evolution: The time steps are generated using np.linspace to simulate temporal evolution, which can be used in time-series analytics. Plot The Results: The results are plotted, showing the samples generated by the Metropolis algorithm as well as the true underlying mean function μ(t). The plot shows the Metropolis samples over time, which should cluster around the time-varying mean μ(t). Now you are probably asking “Hey, is there a more pythonic library way to do this?”. Oh Dear Reader i am glad you asked! Yes There Is A Python Library! AFAIC PyMC started it all. Most probably know it as PyMC3 (formerly known as…). There is a great writeup here: History of PyMC. We are in the golden age of probabilistic programming. ~ Chris Fonnesbeck (creator of PyMC) Let’s convert it using PyMC. Steps to Conversion: 1. Define the probabilistic model using PyMC’s modeling syntax. 2. Specify the Gaussian likelihood with the time-varying mean μ(t). 3. Use PyMC’s built-in Metropolis sampler. 4. Visualize the results similarly to how we did earlier.
```python
import pymc as pm
import numpy as np
import matplotlib.pyplot as plt

# Time-dependent mean function (example: sinusoidal pattern)
def mu_t(t):
    return 10 * np.sin(0.1 * t)

# Set random seed for reproducibility
np.random.seed(42)

# Number of time points and samples
num_samples = 10000
time_steps = np.linspace(0, 1000, num_samples)

# PyMC model definition
with pm.Model() as model:
    # Prior for the time-varying parameter (mean of Gaussian)
    mu_t_values = mu_t(time_steps)

    # Observational model: Normally distributed samples with
    # time-varying mean and fixed variance
    sigma = 1.0  # Fixed variance
    x = pm.Normal('x', mu=mu_t_values, sigma=sigma, shape=num_samples)

    # Use the Metropolis sampler explicitly
    step = pm.Metropolis()

    # Run MCMC sampling with the Metropolis step
    samples_all = pm.sample(num_samples, tune=1000, step=step, chains=5,
                            return_inferencedata=False)

# Extract one chain's worth of samples for plotting
samples = samples_all['x'][0]  # Taking only the first chain

# Plot the time series of samples and the underlying mean function
plt.figure(figsize=(12, 6))

# Plot the samples over time
plt.plot(time_steps, samples, label='PyMC Metropolis Samples', alpha=0.7)

# Plot the underlying time-varying mean (true function)
plt.plot(time_steps, mu_t(time_steps), label='True Mean')
plt.legend()
plt.show()
```

When you execute this code you will see a progress bar, and it will be a while. Go grab your favorite beverage and take a walk….. Output of Python Script Figure 1.1 Key Differences from the Previous Code: PyMC Model Definition: In PyMC, the model is defined using the pm.Model() context. The x variable is defined as a Normal distribution with the time-varying mean μ(t); PyMC handles the sampling bookkeeping automatically with the specified sampler. Metropolis Sampler: PyMC allows us to specify the sampling method. Here, we explicitly use the Metropolis algorithm with pm.Metropolis(). Samples Parameter: We specify shape=num_samples in the pm.Normal() distribution to indicate that we want a series of samples for each time step.
The resulting plot will show the sampled values using the PyMC Metropolis algorithm compared with the true underlying mean, similar to the earlier approach. Now, samples has the same shape as time_steps (in this case, both with 10,000 elements), allowing you to plot the sample values correctly against the time points; otherwise, the x and y axes would not align. NOTE: We used this library at one of our previous health startups with great success. Several optimizations come built in. PyMC’s default sampler is NUTS (the No-U-Turn Sampler). There is no need to manually set the number of leapfrog steps: NUTS automatically determines the optimal number of steps for each iteration, preventing inefficient or divergent sampling. NUTS automatically stops the trajectory when it detects that the particle is about to turn back on itself (i.e., when the trajectory “U-turns”). A U-turn means that continuing to move in the same direction would result in redundant exploration of the space and inefficient sampling. When NUTS detects this, it terminates the trajectory early, preventing unnecessary steps. NUTS also adapts its step size during tuning to keep acceptance rates near their target at convergence. There are several references to this set of algorithms. It is truly a case of both mathematical and computational elegance. Of course you have to know what the name means. They say words have meanings. Then again one cannot know everything. Until Then, #iwishyouwater <- Of all places Alabama getting the memo From Helene 2024 𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X Music To Blog By: View From The Magicians Window, The Psychic Circle [1] The Metropolis Algorithm: Theory and Examples by C. Douglas Howard [2] The Metropolis-Hastings Algorithm: A note by Danielle Navarro [3] Github code for Sample Based Inference by bashhwu Entire Metropolis Movie For Your Viewing Pleasure. (AFAIC The most amazing Sci-Fi movie besides BladeRunner)
{"url":"https://www.tedtanner.org/snakebyte17-the-metropolis-algorithm/","timestamp":"2024-11-13T12:37:58Z","content_type":"text/html","content_length":"82451","record_id":"<urn:uuid:9245abd4-ebeb-4310-83be-fb0121ede96a>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00309.warc.gz"}
In their comment, Yang and Koike (2005, referred to as YK05 henceforth) analyzed two recent versions of empirical formulations for soil thermal conductivity (which are considered to be more accurate) in comparison with the old version used in Eq. (4) of Zhang et al. (2004, referred to as Z04 henceforth). As shown in YK05’s Fig. 2, the soil thermal diffusivities computed by these recent versions of formulations do not always increase monotonically with the soil water content. Based on this, YK05 discussed the possible existence of multiple solutions if the recent formulations are used by the method of Z04 to estimate the daily averaged soil water content from observed diurnal variations of soil temperatures. While we thank YK05 for drawing attention to this problem, we would like to add some comments of our own on the applicability and limitation of the adaptive Kalman filter method of Z04. Applicability of the method YK05 showed that the soil thermal conductivity computed by the empirical formulation in Eq. (4) of Z04 increases too rapidly and becomes unrealistically high as the soil water content approaches the saturation point. Because of this, the soil water content was underestimated for wet soil in Z04. This explains why the estimated soil water content could not reach the peak values that were measured in the shallow soil layer on the heavy rain days (see Figs. 1–3 of Z04). This problem was discussed qualitatively in the last paragraph of section 3 of Z04, which is consistent with the quantitative results in Fig. 1 of YK05. Here, we only need to discuss the applicability of the adaptive Kalman filter method with the improved formulations in Eqs. (5)–(13) of YK05. As explained in the introduction of Z04, the work of Z04 was motivated by the previous study of Xu and Zhou (2003, referred to as XZ03 henceforth).
Although the adaptive Kalman filter method is a significant improvement over the simple linear-regression method of XZ03, both methods depend on the variabilities of the soil heat capacity and thermal conductivity as functions of the soil water content w (or θ, as in YK05). Because the above-mentioned problems can be easily seen for the method of XZ03, they are discussed below in connection with XZ03 first and then Z04. In the method of XZ03, the vertical variation of the soil thermal conductivity D (or λ, as in YK05) is neglected, so D can be combined with the soil heat capacity C into a single parameter, that is, the soil thermal diffusivity k = D/C. In this case, the thermal diffusion equation is simplified into dT/dt = k d²T/dz². With this simplification, as shown in XZ03, the daily averaged soil thermal diffusivity k = D/C can be estimated for a soil layer by linear regression from observed diurnal variations of soil temperatures at three (or two) different depths. Then, the averaged soil water content w is estimated by inverting the function form of k(w). When the improved formulations in Eqs. (5)–(7) or Eqs. (8)–(13) of YK05 are used, there are two obvious problems for the inversion: (a) the inversion becomes inaccurate and ill-posed when the k(w) curve becomes flat (see Figs. 2b,c of YK05) and, thus, k is insensitive to w; and (b) the inversion yields two estimates of w when the estimated value of k is intercepted twice by the k(w) curve on the two sides of the maximum. The latter (b) is the problem mentioned by YK05 that causes multiple solutions. The adaptive Kalman filter method of Z04 considers the random part of the equation error, so the required sensitivities of C and D to w should be significantly lower than those required by the method of XZ03. This means that the adaptive Kalman filter method should be less severely affected by the above-mentioned problem (a) than the method of XZ03.
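Problem (b) can be made concrete numerically. In the sketch below, the quadratic k(w) = 4w(1 − w) is a hypothetical stand-in for a non-monotonic diffusivity curve (it is NOT the actual formulation of YK05); it simply shows how a single regression estimate of k can intercept the curve twice, once on each side of the maximum:

```python
import numpy as np

# Hypothetical non-monotonic diffusivity curve, peaking at w = 0.5.
# A made-up stand-in for the shapes in Fig. 2b,c of YK05.
def k(w):
    return 4.0 * w * (1.0 - w)

k_est = 0.75  # a hypothetical regression estimate of the diffusivity

# Invert k(w) = k_est, i.e. solve -4 w^2 + 4 w - k_est = 0:
roots = np.roots([-4.0, 4.0, -k_est])
ws = sorted(float(r.real) for r in roots)
print(ws)  # two admissible water contents, one on each side of the maximum
```

For this curve the two roots are w = 0.25 and w = 0.75: the same estimated k is consistent with a dry profile and a wet profile, which is exactly the ambiguity described in (b).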
Furthermore, the adaptive Kalman filter method uses the original form of the soil thermal diffusion equation, in which C and D cannot generally be combined into a single parameter such as k = D/C. Because the method estimates the vertical profile (rather than a single value) of the daily averaged soil water content from observed diurnal variations of soil temperatures at different depths, it is unlikely to have the above-mentioned problem (b). To see this, we rewrite the original form of the soil thermal diffusion equation as dT/dt = k d²T/dz² + [(dD/dz)/C] dT/dz. The two terms on the right-hand side of this equation indicate that both k and (dD/dz)/C need to be estimated to match the observed diurnal variations of soil temperatures at different depths. When the improved formulations in Eqs. (5)–(7) or Eqs. (8)–(13) of YK05 are used, the estimated vertical profile of k may correspond to two different profiles of w, denoted by w₁(z) and w₂(z). The estimated (dD/dz)/C, however, can match only one of the two profiles, in general. This implies that the adaptive Kalman filter method can have only one solution. Based on the above discussion, a rigorous proof of the uniqueness of the solution for the adaptive Kalman filter method can be given as follows. Assume that there are two solutions, w₁(z) and w₂(z), satisfying both k[w₁(z)] = k[w₂(z)] and {(dD/dz)/C}[w₁(z)] = {(dD/dz)/C}[w₂(z)]. Note that the first equality can be satisfied, for w₁ ≠ w₂, only if w₁(z) and w₂(z) are on the two sides of the maximum of the k(w) curve in Fig. 2b or 2c of YK05. In this case, dk/dw at w₁ and at w₂ must have opposite signs. Then dw₁/dz and dw₂/dz must also have opposite signs, because (dk/dw)[w₁] dw₁/dz = (dk/dw)[w₂] dw₂/dz, as derived from the vertical derivative of the first equality. Note that C and dD/dw are always positive, and so the left-hand side of the second equality, {(dD/dz)/C}[w₁] = (dD/dw)[w₁] (dw₁/dz)/C[w₁], has the same sign as dw₁/dz. Similarly, the right-hand side of the second equality has the same sign as dw₂/dz. But dw₁/dz and dw₂/dz must have opposite signs, and thus so must the two sides of the second equality. Hence, the two equalities cannot be simultaneously satisfied unless w₁ − w₂ = 0 over the entire depth. Only one solution, w₁(z) or w₂(z), can be truly optimal.
The applicability and reliability of the adaptive Kalman filter method proposed by Z04 depend on the variabilities of the soil heat capacity and thermal conductivity as functions of the soil water content. Clearly, if the soil heat capacity and thermal conductivity, and, thus, the diurnal variations of soil temperatures, were not affected by the soil water content, there would be no way to estimate the daily averaged soil water contents from observed diurnal variations of soil temperatures. Fortunately, the soil heat capacity is a linear function of the soil water content, so the applicability and reliability of the adaptive Kalman filter method are only partially affected by the soil thermal conductivity. Nevertheless, the method may have some difficulties when the recent formulations in Eqs. (5)–(7) or Eqs. (8)–(13) of YK05 are used to replace the old version used in Eq. (4) of Z04. In particular, because these recent formulations show that the soil thermal conductivity should increase much more slowly than indicated by the old formulation as the soil water content approaches the saturation point, the adaptive Kalman filter method may become less reliable or even inapplicable when the soil is very wet. In this case, a reliable estimate of the soil water content may need additional information (from direct observations or a prior estimate). This problem is subject to further investigation, as suggested by YK05. • Xu, Q., and B. Zhou, 2003: Retrieving soil moisture from soil temperature measurements by using linear regression. Adv. Atmos. Sci., 20, 849–858. • Yang, K., and T. Koike, 2005: Comments on “Estimating soil water contents from soil temperature measurements by using an adaptive Kalman filter.” J. Appl. Meteor., 44, 546–550. • Zhang, S. W., C. J. Qiu, and Q. Xu, 2004: Estimating soil water contents from soil temperature measurements by using an adaptive Kalman filter. J. Appl. Meteor., 43, 379–389.
{"url":"https://journals.ametsoc.org/view/journals/apme/44/4/jam2216.1.xml","timestamp":"2024-11-02T12:44:49Z","content_type":"text/html","content_length":"311480","record_id":"<urn:uuid:19a6f494-4499-48e4-ad17-2c69d823e1f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00097.warc.gz"}
Analog Output 4-20mA Circuits One of the most common analog signals used in process control applications is the 4-20 mA signal. There are times when a 4-20 mA signal has to be generated and transmitted by a controller. Here is a picture of an 8 channel 4-20 mA output interface, the VP-8AI. This interface is powered by a PIC18F45K20 micro, has RS485 Modbus RTU protocol, and the outputs use dual MCP4922 DACs, which have 12-bit resolution and an SPI interface. Here is a simplified schematic of one of the outputs from this interface: Here is how the math works:
1. Typically, the DAC will run from 3.3 VDC and the reference will also be 3.3 VDC.
2. To allow for overrange and "headroom", the full scale Vin will be 3 VDC, which corresponds to about 3724 DA counts (3/3.3 × 4096 ≈ 3724).
3. Basic op amp theory states that the 3 VDC Vin will then appear across R1 (let's say the standard 499R resistor is actually 500 Ohms to make everything cleaner).
4. Current flow through R1 will be 3 VDC / 500 Ohms = 0.006 A, or 6 mA.
5. The same 6 mA will flow through R2, giving a voltage across R2 of 0.006 A × 330 Ohms = 1.98 V.
6. Basic op amp theory states that the voltage across R2 will equal the voltage across R3, therefore the current flowing through R3 = 1.98 V / 100 Ohms = 19.8 mA for Aout.
With a little bit of tweaking on the max defined DA counts for 20 mA, the above circuit works exceptionally well. We use this in various products in our industrial line and also in the Widgetlords product offering as follows: The V+ power for this type of circuit is typically 24 VDC to allow for output "drive". Typically, process instrumentation that accepts a 4-20 mA signal will have a "load" resistor of 250 Ohms (or less). At 24 VDC power, and given that the above circuit will have a voltage loss of 2 VDC (give or take), the circuit would be able to "drive" up to 4 "loads" in series. (Each load resistor at 250 Ohms with 20 mA flow will have 5 VDC across the load.) Simple, and clean.
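The resistor math above can be checked in a few lines. This sketch hard-codes the component values quoted in the text (3.3 V reference, 12-bit DAC, R1 = 500 Ω, R2 = 330 Ω, R3 = 100 Ω):

```python
# Walk the transfer function from DAC code to output current,
# using the component values quoted in the text.
VREF, NBITS = 3.3, 12
R1, R2, R3 = 500.0, 330.0, 100.0

def aout_ma(counts):
    vin = VREF * counts / 2**NBITS    # DAC output voltage for a given code
    i_r1 = vin / R1                   # op amp forces Vin across R1
    v_r2 = i_r1 * R2                  # same current develops a voltage on R2
    return v_r2 / R3 * 1000.0         # that voltage across R3 sets Aout, in mA

# Full-scale Vin of 3 V (about 3724 counts) should give ~19.8 mA:
print(round(aout_ma(3724), 2))        # → 19.8
```

Scaling the count range so that full scale lands exactly on 20 mA is the "tweaking" the text mentions.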
{"url":"https://widgetlords.com/blogs/news/analog-output-4-20ma-circuits","timestamp":"2024-11-12T16:37:26Z","content_type":"text/html","content_length":"117688","record_id":"<urn:uuid:aac5f7b8-e083-4854-911c-7cdbdf798da3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00121.warc.gz"}
OpenSSL Curve Names, Algorithms, and Options Information about the OpenSSL curve names and options supported by DSG.
Curve Name: Description
secp112r1: SECG/WTLS curve over a 112-bit prime field
secp112r2: SECG curve over a 112-bit prime field
secp128r1: SECG curve over a 128-bit prime field
secp128r2: SECG curve over a 128-bit prime field
secp160k1: SECG curve over a 160-bit prime field
secp160r1: SECG curve over a 160-bit prime field
secp160r2: SECG/WTLS curve over a 160-bit prime field
secp192k1: SECG curve over a 192-bit prime field
secp224k1: SECG curve over a 224-bit prime field
secp224r1: NIST/SECG curve over a 224-bit prime field
secp256k1: SECG curve over a 256-bit prime field
secp384r1: NIST/SECG curve over a 384-bit prime field
secp521r1: NIST/SECG curve over a 521-bit prime field
prime192v1: NIST/X9.62/SECG curve over a 192-bit prime field
prime192v2: X9.62 curve over a 192-bit prime field
prime192v3: X9.62 curve over a 192-bit prime field
prime239v1: X9.62 curve over a 239-bit prime field
prime239v2: X9.62 curve over a 239-bit prime field
prime239v3: X9.62 curve over a 239-bit prime field
prime256v1: X9.62/SECG curve over a 256-bit prime field
sect113r1: SECG curve over a 113-bit binary field
sect113r2: SECG curve over a 113-bit binary field
sect131r1: SECG/WTLS curve over a 131-bit binary field
sect131r2: SECG curve over a 131-bit binary field
sect163k1: NIST/SECG/WTLS curve over a 163-bit binary field
sect163r1: SECG curve over a 163-bit binary field
sect163r2: NIST/SECG curve over a 163-bit binary field
sect193r1: SECG curve over a 193-bit binary field
sect193r2: SECG curve over a 193-bit binary field
sect233k1: NIST/SECG/WTLS curve over a 233-bit binary field
sect233r1: NIST/SECG/WTLS curve over a 233-bit binary field
sect239k1: SECG curve over a 239-bit binary field
sect283k1: NIST/SECG curve over a 283-bit binary field
sect283r1: NIST/SECG curve over a 283-bit binary field
sect409k1: NIST/SECG curve over a 409-bit binary field
sect409r1: NIST/SECG curve over a 409-bit binary field
sect571k1: NIST/SECG curve over a 571-bit binary field
sect571r1: NIST/SECG curve over a 571-bit binary field
c2pnb163v1: X9.62 curve over a 163-bit binary field
c2pnb163v2: X9.62 curve over a 163-bit binary field
c2pnb163v3: X9.62 curve over a 163-bit binary field
c2pnb176v1: X9.62 curve over a 176-bit binary field
c2tnb191v1: X9.62 curve over a 191-bit binary field
c2tnb191v2: X9.62 curve over a 191-bit binary field
c2tnb191v3: X9.62 curve over a 191-bit binary field
c2pnb208w1: X9.62 curve over a 208-bit binary field
c2tnb239v1: X9.62 curve over a 239-bit binary field
c2tnb239v2: X9.62 curve over a 239-bit binary field
c2tnb239v3: X9.62 curve over a 239-bit binary field
c2pnb272w1: X9.62 curve over a 272-bit binary field
c2pnb304w1: X9.62 curve over a 304-bit binary field
c2tnb359v1: X9.62 curve over a 359-bit binary field
c2pnb368w1: X9.62 curve over a 368-bit binary field
c2tnb431r1: X9.62 curve over a 431-bit binary field
wap-wsg-idm-ecid-wtls1: WTLS curve over a 113-bit binary field
wap-wsg-idm-ecid-wtls3: NIST/SECG/WTLS curve over a 163-bit binary field
wap-wsg-idm-ecid-wtls4: SECG curve over a 113-bit binary field
wap-wsg-idm-ecid-wtls5: X9.62 curve over a 163-bit binary field
wap-wsg-idm-ecid-wtls6: SECG/WTLS curve over a 112-bit prime field
wap-wsg-idm-ecid-wtls7: SECG/WTLS curve over a 160-bit prime field
wap-wsg-idm-ecid-wtls8: WTLS curve over a 112-bit prime field
wap-wsg-idm-ecid-wtls9: WTLS curve over a 160-bit prime field
wap-wsg-idm-ecid-wtls10: NIST/SECG/WTLS curve over a 233-bit binary field
wap-wsg-idm-ecid-wtls11: NIST/SECG/WTLS curve over a 233-bit binary field
wap-wsg-idm-ecid-wtls12: WTLS curve over a 224-bit prime field
Options: Description
OP_ALL: Enables workarounds for various bugs present in other SSL implementations. This option is set by default. It does not necessarily set the same flags as OpenSSL’s SSL_OP_ALL constant.
OP_NO_SSLv2: Prevents an SSLv2 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing SSLv2 as the protocol version.
OP_NO_SSLv3: Prevents an SSLv3 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing SSLv3 as the protocol version.
OP_NO_TLSv1: Prevents a TLSv1 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing TLSv1 as the protocol version.
OP_NO_TLSv1_1: Prevents a TLSv1.1 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing TLSv1.1 as the protocol version. Available only with OpenSSL version 1.0.1+.
OP_NO_TLSv1_2: Prevents a TLSv1.2 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing TLSv1.2 as the protocol version. Available only with OpenSSL version 1.0.1+.
OP_CIPHER_SERVER_PREFERENCE: Use the server’s cipher ordering preference, rather than the client’s. This option has no effect on client sockets and SSLv2 server sockets.
OP_SINGLE_DH_USE: Prevents re-use of the same DH key for distinct SSL sessions. This improves forward secrecy but requires more computational resources. This option only applies to server sockets.
OP_SINGLE_ECDH_USE: Prevents re-use of the same ECDH key for distinct SSL sessions. This improves forward secrecy but requires more computational resources. This option only applies to server sockets.
OP_NO_COMPRESSION: Disable compression on the SSL channel. This is useful if the application protocol supports its own compression scheme. This option is only available with OpenSSL 1.0.0 and later.
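The option flags above correspond to constants in Python's ssl module, so a hardened context can be assembled from them. A minimal sketch (availability of individual flags depends on the underlying OpenSSL build):

```python
import ssl

# Assemble a context that combines several of the option flags above.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)          # negotiate the highest mutual version
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3   # refuse the legacy SSL versions
ctx.options |= ssl.OP_NO_COMPRESSION               # app protocol compresses on its own
ctx.options |= ssl.OP_CIPHER_SERVER_PREFERENCE     # honor the server's cipher ordering
ctx.options |= ssl.OP_SINGLE_DH_USE | ssl.OP_SINGLE_ECDH_USE  # fresh (EC)DH keys

print(bool(ctx.options & ssl.OP_NO_SSLv3))         # → True
```

Because the options field is a bitmask, flags can only be ORed together on a context; checking membership is a bitwise AND, as in the final line.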
{"url":"https://docs.protegrity.com/docs/dsg/dsg_app_openssl/","timestamp":"2024-11-13T11:43:38Z","content_type":"text/html","content_length":"413867","record_id":"<urn:uuid:bb1f20e3-f63a-4e59-832d-aec29db0f3d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00701.warc.gz"}
Convert degrees to radians Definition: A degree (symbol: °) is a unit of angular measurement defined by a full rotation of 360 degrees. Because a full rotation equals 2π radians, one degree is equivalent to π/180 radians. Although a degree is not an SI (International System of Units) unit, it is an accepted unit within the SI brochure. History/origin: The origin of the degree as a unit of rotation and angles is not clear. One of the theories suggests that 360 is readily divisible, has 24 divisors, and is divisible by every number from one to ten, except for seven, making the number 360 a versatile option for use as an angle measure. Current use: The degree is widely used when referencing angular measures. While the degree might be more prevalent in common usage, and many people have a more practical understanding of angles in terms of degrees, the radian is the preferred measurement of angle for most math applications. This is because the radian is based on the number π which is heavily used throughout mathematics, while the degree is largely based on the arbitrary choice of 360 degrees dividing a circle. Definition: A radian (symbol: rad) is the standard unit of angular measure. It is a derived unit (meaning that it is a unit that is derived from one of the seven SI base units) in the International System of Units. An angle's measurement in radians is numerically equal to the length of a corresponding arc of a unit circle. One radian is equal to 180/π (~57.296) degrees. History/origin: Measuring angles in terms of arc length has been used by mathematicians since as early as the year 1400. The concept of the radian specifically however, is credited to Roger Cotes who described the measure in 1714. Although he described the unit, Cotes did not name the radian, and it was not until 1873 that the term "radian" first appeared in print.
Current use: The radian is widely used throughout mathematics as well as in many branches of physics that involve angular measurements. Although the symbol "rad" is the accepted SI symbol, in practice, radians are often written without the symbol since a radian is a ratio of two lengths and is therefore, a dimensionless quantity. As such, when angle measures are written, the lack of a symbol implies that the measurement is in radians, while a ° symbol would be added if the measurement were in degrees.
Degree to Radian Conversion Table
Degree [°] | Radian [rad]
0.01 ° | 0.0001745329 rad
0.1 ° | 0.0017453293 rad
1 ° | 0.0174532925 rad
2 ° | 0.034906585 rad
3 ° | 0.0523598776 rad
5 ° | 0.0872664626 rad
10 ° | 0.1745329252 rad
20 ° | 0.3490658504 rad
50 ° | 0.872664626 rad
100 ° | 1.745329252 rad
1000 ° | 17.4532925199 rad
How to Convert Degree to Radian
1 ° = 0.0174532925 rad
1 rad = 57.2957795131 °
Example: convert 15 ° to rad: 15 ° = 15 × 0.0174532925 rad = 0.2617993878 rad
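The conversions above can be reproduced with Python's standard library, which exposes the π/180 factor through math.radians and its inverse through math.degrees:

```python
import math

# Reproduce the worked example from the table: 15 degrees to radians.
deg = 15.0
rad = math.radians(deg)                  # equivalent to deg * math.pi / 180
print(round(rad, 10))                    # → 0.2617993878

# And the reverse factor: one radian in degrees.
print(round(math.degrees(1.0), 10))      # → 57.2957795131
```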
{"url":"https://coonbox.com/degrees-to-radians.html","timestamp":"2024-11-08T04:52:40Z","content_type":"text/html","content_length":"10232","record_id":"<urn:uuid:fc8dc24a-e92e-429a-bbf6-80a229557829>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00211.warc.gz"}
Relative concentration vs. altitude • Thread starter kasse • Start date

In summary, the ratio of nitrogen to oxygen molecules at sea level in the Earth's atmosphere is 78:21. Under the isothermal barometric formula the ratio increases slightly with altitude, to roughly 4.4:1 at 10 km, because nitrogen is the lighter of the two molecules. This result is qualitatively reasonable given the mass dependence of the barometric formula.

Earth's atmosphere has roughly four molecules of nitrogen for every oxygen molecule at sea level; more precisely, the ratio is 78:21. Assuming a constant temperature at all altitudes (not really very accurate), what is the ratio at an altitude of 10 km? Explain why your result is qualitatively reasonable. I found this formula, describing nanoparticles in air. I guess it can then also be used for gas molecules? That way I get approx. 104:33 Last edited:

Yes, the formula for nanoparticles in air can also be used for gas molecules. It is called the barometric formula, and it describes how the number density of particles of mass m falls off with altitude: n(h) = n(0) exp(-mgh/kT). Using it, we can calculate the ratio of nitrogen to oxygen molecules at different altitudes. At sea level, the ratio of nitrogen to oxygen molecules is 78:21. As we go higher in the atmosphere, the pressure decreases, so there are fewer molecules in a given volume of air; crucially, the fall-off is faster for the heavier molecule. Oxygen (32 u) thins out more quickly than nitrogen (28 u), so the nitrogen-to-oxygen ratio slowly increases with altitude. (A ratio of 104:33, about 3.15:1, moves the wrong way; it suggests a sign slip in the exponent.) At an altitude of 10 km, the pressure is roughly a quarter of its sea-level value and the temperature is around -50 degrees Celsius. Keeping the problem's constant-temperature assumption, the barometric formula gives a ratio of about 4.4:1 at 10 km, slightly larger than the sea-level value of 78:21 ≈ 3.7:1.
This result is qualitatively reasonable: under the isothermal barometric formula each gas follows its own exponential profile, and the lighter nitrogen molecule thins out more slowly with height, so the calculated ratio shifts slightly in nitrogen's favour. (In the real atmosphere, turbulent mixing keeps the composition nearly uniform up to roughly 100 km, which is why the measured ratio stays close to 78:21.) Therefore, our calculated ratio at 10 km is a reasonable estimate based on the barometric formula and the properties of gases in the Earth's atmosphere.

FAQ: Relative concentration vs. altitude

1. What is relative concentration? Relative concentration refers to the amount of a substance present in a given volume or area, compared to its surroundings. It is often used to describe the concentration of gases or particles in the Earth's atmosphere.

2. How does relative concentration change with altitude? The concentration of substances in the atmosphere generally decreases as altitude increases. This is because the higher you go, the less air there is above you, and therefore the fewer gas molecules or particles there are present.

3. What factors affect relative concentration at different altitudes? The main factors that affect relative concentration at different altitudes include atmospheric pressure, temperature, and the source and rate of emission of the substance. Winds and other weather conditions can also play a role in the distribution of substances in the atmosphere.

4. What are some examples of substances with varying relative concentrations at different altitudes? Oxygen and nitrogen are two gases with relatively constant concentrations at all altitudes, while substances like water vapor, ozone, and carbon dioxide have varying concentrations depending on altitude. Particles such as dust, pollen, and pollutants also have different relative concentrations at different altitudes.

5. How do scientists measure relative concentration vs. altitude?
Scientists use a variety of methods to measure relative concentration vs. altitude, including satellite observations, weather balloons, and ground-based instruments such as lidar or air quality monitors. These methods allow for the collection of data on the distribution of substances in the atmosphere and their changes with altitude.
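The back-of-the-envelope calculation discussed in the thread can be sketched as follows; the constants chosen here (T = 288 K, integer molecular masses) are my own assumptions, so the exact figure shifts slightly with the values used:

```python
import math

# Isothermal barometric formula: n(h) = n(0) * exp(-m*g*h / (k*T)).
# Nitrogen (28 u) is lighter than oxygen (32 u), so its density falls
# off more slowly and the N2:O2 ratio creeps upward with altitude.
k = 1.380649e-23   # Boltzmann constant, J/K
u = 1.66054e-27    # atomic mass unit, kg
g = 9.81           # m/s^2
T = 288.0          # K, assumed constant at all altitudes
h = 10_000.0       # m

def density_factor(mass_u):
    return math.exp(-mass_u * u * g * h / (k * T))

ratio_sea = 78 / 21
ratio_10km = ratio_sea * density_factor(28) / density_factor(32)
print(round(ratio_sea, 2))   # 3.71
print(round(ratio_10km, 2))  # 4.38
```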
{"url":"https://www.physicsforums.com/threads/relative-concentration-vs-altitude.288881/","timestamp":"2024-11-07T10:55:26Z","content_type":"text/html","content_length":"80272","record_id":"<urn:uuid:694085e6-6024-4b9e-b973-fcb63a15dc40>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00395.warc.gz"}
A Simple “Conforming” Tetrahedral Mesh of a Cube My adviser told me about this meshing of a cube (or any hexahedron) into 6 different tetrahedrons which is easy to draw. For the sake of exposition, we will consider the cube $(-1,1)^3$. The procedure is as follows: 1. Draw a diagonal from $(-1,-1, -1)$ to $(1,1,1)$. 2. Now, project the diagonal to each of the 6 faces of the cube, which will result in a mesh of the cube. While the procedure is simple enough, the individual tetrahedrons were a bit difficult to visualize. To help with that, I’ve made a small Mathematica script that one can play with: From that, we can easily see the mesh now. So what does the “conforming” part of the title mean? Of course, there is an easier way to tile the cube using only 5 tetrahedrons, but if you put together multiple cubes, you have to be careful about how they are oriented. Using the above meshing, as long as the cubes are not too distorted, one can easily create a conforming tetrahedral mesh by drawing the diagonal of every cube in the same direction. For example, below we have eight hexahedral elements laid out in a cube: three slabs, three columns, and two cubes (one significantly smaller). This whole thing was needed so that I could construct something as anisotropic as the mesh below without resorting to fancy software.
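For readers without Mathematica, the six tetrahedra can also be enumerated programmatically. The sketch below is my own illustration (not the post's Mathematica script): each of the 3! orderings of the axes gives one tetrahedron whose vertices climb from (-1,-1,-1) to (1,1,1) one coordinate at a time, so all six share the main diagonal.

```python
from itertools import permutations

def cube_tetrahedra(lo=-1, hi=1):
    # each permutation of the axes gives one tetrahedron whose vertices
    # walk from (lo,lo,lo) to (hi,hi,hi), flipping one coordinate at a time
    tets = []
    for order in permutations(range(3)):
        verts = [[lo, lo, lo]]
        for axis in order:
            v = list(verts[-1])
            v[axis] = hi
            verts.append(v)
        tets.append(verts)
    return tets

def volume(tet):
    # |det| / 6 of the three edge vectors out of the first vertex
    a, b, c, d = tet
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6

tets = cube_tetrahedra()
print(len(tets))                                # 6
print(round(sum(volume(t) for t in tets), 10))  # 8.0, the volume of (-1,1)^3
```

All six tetrahedra have equal volume 4/3, which is one way to see that the subdivision fills the cube exactly.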
{"url":"https://marshalljiang.com/a-simple-conforming-tetrahedral-mesh-of-a-cube/","timestamp":"2024-11-06T20:06:18Z","content_type":"text/html","content_length":"95711","record_id":"<urn:uuid:66fb4558-9122-4329-bebd-1ce6140e2619>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00466.warc.gz"}
Project Euler problem 220 - Heighway Dragon

This document goes through a Java solution for Project Euler problem 220. If you want the pleasure of solving it yourself and you don't have a solution yet, PLEASE STOP READING UNTIL YOU FIND A SOLUTION. Dragon Curve. The first thing that came to my mind is to DFS-traverse a 50-level tree for 10^12 steps, keeping track of a direction and a coordinate along the way. Roughly estimated, this solution takes a 50-level recursion, which isn't horrible, and 10^12 switch/case calls. Written by a lazy and irresponsible Java engineer, this solution vaguely looks like:

Traveler traveler = new Traveler(new Coordinate(0, 0), Direction.UP);

void main() {
    try {
        traverse("Fa", 0);
    } catch (TerminationSignal signal) {
        System.out.println(signal);
    }
}

void traverse(String plan, int level) {
    for (char c : plan.toCharArray()) {
        switch (c) {
            case 'F': traveler.stepForward(); break;
            case 'L': traveler.turnLeft(); break;
            case 'R': traveler.turnRight(); break;
            case 'a': if (level < 50) traverse("aRbFR", level + 1); break;
            case 'b': if (level < 50) traverse("LFaLb", level + 1); break;
        }
        if (traveler.steps == 1_000_000_000_000L) {  // 10^12
            throw new TerminationSignal("Coordinate after 10^12 steps is " + traveler.coordinate);
        }
    }
}

I was quite satisfied with this approach; I thought it was neat, efficient and simple until I ran it. It ran out of all my 20 minutes of patience before I killed the process. It took 10 seconds to calculate 10^9 steps, so it'd take about 3 hours for 10^12 steps. Since I only care about the coordinate after 10^12 steps, the solution doesn't have to traverse to the end one step after another if it can predict the coordinate delta after a series of steps. Particularly, since we know "FaRbFR" takes 2 steps and turns the direction backwards, we don't really have to go through all 6 characters in "FaRbFR" to know the result. Similarly, we can know the coordinate delta brought by "FaRbFRRLFaLbFR", or by the letters 'a' and 'b' at any level.
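The caching idea described above (remember the steps, displacement and turn contributed by 'a' or 'b' at each level) can be sketched in a few lines of Python. This is my own sketch of the technique, not the author's Java, and it prints only small-step positions so as not to give away the answer:

```python
from functools import lru_cache

RULES = {'a': 'aRbFR', 'b': 'LFaLb'}
MAX_LEVEL = 50

def rot(v, quarter_turns):
    # rotate a vector clockwise in 90-degree steps
    x, y = v
    for _ in range(quarter_turns % 4):
        x, y = y, -x
    return (x, y)

@lru_cache(maxsize=None)
def summary(sym, level):
    # (steps, displacement, net turn) of sym fully expanded from this level,
    # starting at the origin and facing up
    if sym == 'F':
        return 1, (0, 1), 0
    if sym in 'LR':
        return 0, (0, 0), 3 if sym == 'L' else 1
    if level >= MAX_LEVEL:  # 'a' and 'b' expand to nothing at the last level
        return 0, (0, 0), 0
    steps, pos, turn = 0, (0, 0), 0
    for c in RULES[sym]:
        s, d, t = summary(c, level + 1)
        dx, dy = rot(d, turn)
        pos = (pos[0] + dx, pos[1] + dy)
        turn = (turn + t) % 4
        steps += s
    return steps, pos, turn

def position_after(n):
    # coordinate after n steps of D50: apply whole cached summaries,
    # descending a level only when a block must be walked partially
    state = [(0, 0), 0, 0]  # position, facing, steps taken so far

    def go(sym, level):
        pos, turn, done = state
        if done == n:
            return
        s, d, t = summary(sym, level)
        if done + s <= n:  # the whole block fits: apply its summary at once
            dx, dy = rot(d, turn)
            state[:] = [(pos[0] + dx, pos[1] + dy), (turn + t) % 4, done + s]
        else:              # otherwise recurse into the five-symbol expansion
            for c in RULES[sym]:
                go(c, level + 1)

    go('F', 0)   # D0 = "Fa"
    go('a', 0)
    return state[0]

print(position_after(1))  # (0, 1)
print(position_after(3))  # (1, 0)
```

Only the 100 (symbol, level) summaries are ever computed, so a query touches on the order of 50 levels of recursion rather than 10^12 steps.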
Therefore, when the code expands 'a' and traverses the expansion, it can also remember what has been done, so that next time it doesn't have to expand it again.

Map<String, Path> cachedPaths = new HashMap<>();

void traverse(String plan, int level) {
    for (char c : plan.toCharArray()) {
        switch (c) {
            case 'F': traveler.stepForward(); break;
            case 'L': traveler.turnLeft(); break;
            case 'R': traveler.turnRight(); break;
            case 'a':
            case 'b': expandSubstitution(c, level); break;
        }
        if (traveler.steps == 1_000_000_000_000L) {  // 10^12
            throw new TerminationSignal("Coordinate after 10^12 steps is " + traveler.coordinate);
        }
    }
}

void expandSubstitution(char c, int level) {
    if (level >= 50) {
        return;
    }
    String pathKey = c + "-" + level;
    Path path = cachedPaths.get(pathKey);
    if (path != null && path.steps + traveler.steps < 1_000_000_000_000L) {
        traveler.walk(path);  // replay the cached delta in one jump
        return;
    }
    Traveler begin = traveler.snapshot();
    if (c == 'a') {
        traverse("aRbFR", level + 1);
    } else {
        traverse("LFaLb", level + 1);
    }
    cachedPaths.put(pathKey, traveler.pathFrom(begin));
}

This solution takes at most 50 * 5 * 2 switch/case calls to figure out the 100 paths for 'a' and 'b' across 50 levels. With those 100 known paths, the rest of the work is to call walk() about log2(10^12) times. This solution only took 4 milliseconds to finish on my computer (Intel(R) Core(TM)2 CPU 6600 @2.40GHz).

jiaqi@rattlesnake:~/workspace/eulerer$ mvn exec:java
[INFO] Scanning for projects...
>>>>>> Runnining solution of problem 220
Coordinate after 10^12 steps is ####76,####04
<<<<<< Solution 220 took 4.089997 ms

For obvious reasons, I masked the final result in the output above. If you are not bored enough and wonder what the exact Java code looks like, the source code is hosted under the cyclops-group project on SourceForge.net.

For several weeks I've been trying to put together an Angular application served by a Java Spring MVC web server in Bazel. I've seen the Java + Angular combination work well at Google, and given the popularity of Java, I want to get it to work with open source.
How hard can it be to run arguably the best JS framework on a server in probably the most popular server-side language, with the mono-repo build tool of planet scale? The rest of this post walks through the headaches and nightmares I had to get things to work, but if you are just here to look for a working example, github/jiaqi/angular-on-java is all you need. https://github.com/jiaqi/angular-on-java

Java web application with Appengine rule

Surprisingly, there isn't an official way of building a Java web application in Bazel; the closest thing is the Appengine rule, and Spring MVC seems to work well with it. 3 Java classes, a JSP and an appengine.xml were all I needed. At this point, the server starts well but I got "No

JPA annotation is like a subset of Hibernate annotation; this means people will find something available in Hibernate missing in JPA. One of the important missing features in JPA is a customized ID generator. JPA doesn't provide an approach for developers to plug in their own IdGenerator. For example, if you want the primary key of a table to be a BigInteger coming from a sequence, JPA offers no solution. Assuming you don't mind the mixture of Hibernate and JPA annotations and your JPA provider is Hibernate, which is mostly the case, a solution (until JPA introduces a new annotation) is to replace JPA's @SequenceGenerator with Hibernate's @GenericGenerator. Now, let the code talk.

/**
 * Ordinary JPA sequence.
 * If the Long is changed into BigInteger,
 * there will be a runtime error complaining about the type of the primary key.
 */
@Id
@Column(name = "id", precision = 12)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "XyzIdGenerator")
@SequenceGe

Amazon AWS Simple Workflow

AWS Simple Workflow (SWF) from Amazon is a unique workflow solution compared to traditional workflow products such as JBPM and OSWorkflow.
SWF is extremely scalable and engineer-friendly (in that the flow is defined in Java code), but it comes with limitations and lots of gotchas.

Always use Flow Framework

The very first thing to know is that it's almost impossible to build a SWF application correctly without the Flow Framework. Even though the low-level SWF RESTful service API is public and available in the SDK, for most workflows with parallelism, timers or notifications, considering all the possible ways events can interleave with one another, it's beyond manageable to write correct code against the low-level API that covers all use cases. In this respect SWF is quite unique compared to other thin-client AWS technologies. The SWF Flow Framework heavily depends on AspectJ for various purposes. If you are not familiar with AspectJ in Eclipse and Maven, this article
{"url":"https://blog.cyclopsgroup.org/2009/11/project-euler-problem-220.html","timestamp":"2024-11-04T15:32:22Z","content_type":"text/html","content_length":"141355","record_id":"<urn:uuid:711bb2a8-21a9-461a-adf7-0c1847817378>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00687.warc.gz"}
Radio Frequency - Components and Systems - Conductors

In radio frequency (RF) circuits, basic components like resistors, capacitors, and inductors exhibit more than just their individual resistive, capacitive, or inductive properties. We'll begin our analysis with the most fundamental component. There are various forms of conductors in RF circuits, and the behavior of conductors in the RF spectrum largely depends on their diameter and length. In the AWG (American Wire Gauge) specification, each wire gauge is associated with a specific diameter, with a 6-unit difference in AWG value corresponding to a twofold difference in diameter:

AWG Value    Diameter (mil)

Skin Effect

At low frequencies, the flow of electrons within a conductor covers the entire cross-section of the conductor. As the frequency increases, the strengthening of the magnetic field at the center of the cross-section introduces impedance to the electron flow, pushing electrons toward the edge. This results in a lower current density at the center than at the edge, a phenomenon known as the skin effect. The skin effect applies to all conductors, including the pins of resistors, capacitors, and inductors. The depth at which the current density falls to \(\frac{1}{e}\) (37%) of its value at the surface is referred to as the skin depth. This depth is a function of frequency, the magnetic permeability of the conductor, and its conductivity; therefore, different conductors have different skin depths. The consequence of the skin effect is a reduction in the effective cross-sectional area of the conductor, resulting in increased AC impedance. For copper foil, the skin depth is approximately 0.85 cm at 60 Hz and about 0.007 cm at 1 MHz (meaning that 63% of the RF current flows within a width of 0.007 cm from the surface).
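The copper figures quoted above can be checked from the standard skin-depth formula δ = 1/√(πfμσ); the copper constants below are my own assumed values:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m
SIGMA_CU = 5.8e7          # assumed conductivity of copper, S/m

def skin_depth_m(freq_hz, sigma=SIGMA_CU, mu=MU0):
    # skin depth: delta = 1 / sqrt(pi * f * mu * sigma)
    return 1.0 / math.sqrt(math.pi * freq_hz * mu * sigma)

print(round(skin_depth_m(60) * 100, 2))   # 0.85 cm at 60 Hz
print(round(skin_depth_m(1e6) * 100, 4))  # 0.0066 cm at 1 MHz
```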
Straight-Line Inductance

Any current-carrying medium generates a magnetic field, and if it carries AC current, this magnetic field alternates, inducing a voltage along the conductor that opposes changes in current. This phenomenon is known as self-inductance, and components exhibiting this property are referred to as inductors. While straight-line inductance may seem insignificant, it becomes significant at high frequencies. The magnitude of straight-line inductance depends on length and diameter, and it is calculated using the following formula:

\[ L=0.002l[2.3\log(\frac{4l}{d})-0.75] \mu H \]

Where \(L\) is the inductance in microhenrys (\(\mu H\)), and the units for conductor length (\(l\)) and diameter (\(d\)) are centimeters (cm). Inductance is a factor that cannot be overlooked in RF design. All conductors in RF circuits, including connecting wires and component pins, exhibit inductance.

References and Acknowledgments
• "RF-Circuit-Design(second-edition)_Chris-Bowick"

Original: https://wiki-power.com/ This post is protected by CC BY-NC-SA 4.0 agreement and should be reproduced with attribution. This post was translated using ChatGPT; please send feedback if anything is missing or mistranslated.
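The inductance formula above is easy to evaluate; as an illustration (the 10 cm, 1 mm wire is my own example), note that even a short hookup wire contributes on the order of 100 nH:

```python
import math

def wire_inductance_uH(length_cm, diameter_cm):
    # L = 0.002 * l * (2.3*log10(4l/d) - 0.75), l and d in cm, result in uH
    l, d = length_cm, diameter_cm
    return 0.002 * l * (2.3 * math.log10(4 * l / d) - 0.75)

print(round(wire_inductance_uH(10, 0.1), 3))  # 0.105 uH: far from negligible at RF
```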
{"url":"https://mkdocs.wiki-power.com/en/%E5%B0%84%E9%A2%91-%E7%BB%84%E4%BB%B6%E4%B8%8E%E7%B3%BB%E7%BB%9F-%E5%AF%BC%E7%BA%BF/","timestamp":"2024-11-12T20:34:57Z","content_type":"text/html","content_length":"111612","record_id":"<urn:uuid:c65b383a-1d95-4731-9357-096faa5054c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00758.warc.gz"}
[QSMS Seminar 14,16 Dec] A brief introduction to differential graded Lie algebras I, II
• Date: Dec 14 (Tue) and Dec 16 (Thu), 16:00-17:30
• Place: Zoom (ID: 642 675 5874)
• Speaker: 조창연 (QSMS, SNU)
• Title: A brief introduction to differential graded Lie algebras I, II
• Abstract: The importance of differential graded Lie algebras goes back at least to Quillen's rational homotopy theory, which also motivated their applications to deformation theory. Later, such an idea was developed further by Deligne, Drinfeld, and Feigin, and influenced many including Kontsevich and Soibelman. The purpose of these talks is to give a short introduction to the notion of differential graded Lie algebras and its relationship to deformation theory. These talks are intended to be an elementary introduction to the subject, but due to the modern nature of the subject, I'll say something about the theory of infinity-categories. The first talk will be devoted to exploring some of the fundamentals of differential graded Lie algebras and infinity-categories, and the application to deformation theory will be covered in the later half of the second talk.
{"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&listStyle=viewer&order_type=desc&l=en&sort_index=title&page=2&document_srl=2055","timestamp":"2024-11-15T00:18:16Z","content_type":"text/html","content_length":"21918","record_id":"<urn:uuid:036971ff-6f04-43f2-89ad-a47f2304b608>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00526.warc.gz"}
What is the discriminant of a quadratic function? | Socratic

What is the discriminant of a quadratic function?

2 Answers

The discriminant of a quadratic function is given by: $\Delta = {b}^{2} - 4 a c$

What is the purpose of the discriminant? Well, it is used to determine how many REAL solutions your quadratic function has.
If $\Delta > 0$, then the function has 2 solutions.
If $\Delta = 0$, then the function has only 1 solution and that solution is considered a double root.
If $\Delta < 0$, then the function has no real solution (you can't take the square root of a negative number without getting complex roots).

Given by the formula $\Delta = {b}^{2} - 4 a c$, this is a value computed from the coefficients of the quadratic that allows us to determine some things about the nature of its zeros... Given a quadratic function in normal form: $f \left(x\right) = a {x}^{2} + b x + c$ where $a , b , c$ are real numbers (typically integers or rational numbers) and $a \ne 0$, then the discriminant $\Delta$ of $f \left(x\right)$ is given by the formula: $\Delta = {b}^{2} - 4 a c$ Assuming rational coefficients, the discriminant tells us several things about the zeros of $f \left(x\right) = a {x}^{2} + b x + c$:
• If $\Delta > 0$ is a perfect square then $f \left(x\right)$ has two distinct rational real zeros.
• If $\Delta > 0$ is not a perfect square then $f \left(x\right)$ has two distinct irrational real zeros.
• If $\Delta = 0$ then $f \left(x\right)$ has a repeated rational real zero (of multiplicity $2$).
• If $\Delta < 0$ then $f \left(x\right)$ has no real zeros. It has a complex conjugate pair of non-real zeros.
If the coefficients are real but not rational, the rationality of the zeros cannot be determined from the discriminant, but we still have:
• If $\Delta > 0$ then $f \left(x\right)$ has two distinct real zeros.
• If $\Delta = 0$ then $f \left(x\right)$ has a repeated real zero (of multiplicity $2$).
The discriminant occurs in the quadratic formula for the zeros of $a {x}^{2} + b x + c$, namely: $x = \frac{- b \pm \sqrt{{b}^{2} - 4 a c}}{2 a} = \frac{- b \pm \sqrt{\Delta}}{2 a}$ from which you can understand why the zeros have the nature they do for different values of $\Delta$.

What about cubics, etc.? Polynomials of higher degree also have discriminants, which when zero imply the existence of repeated zeros. The sign of the discriminant is less useful, except in the case of cubic polynomials, where it allows us to identify cases quite well...

$f \left(x\right) = a {x}^{3} + b {x}^{2} + c x + d$ with $a , b , c , d$ being real and $a \ne 0$. The discriminant $\Delta$ of $f \left(x\right)$ is given by the formula: $\Delta = {b}^{2} {c}^{2} - 4 a {c}^{3} - 4 {b}^{3} d - 27 {a}^{2} {d}^{2} + 18 a b c d$
• If $\Delta > 0$ then $f \left(x\right)$ has three distinct real zeros.
• If $\Delta = 0$ then $f \left(x\right)$ has either one real zero of multiplicity $3$ or two distinct real zeros, with one being of multiplicity $2$ and the other being of multiplicity $1$.
• If $\Delta < 0$ then $f \left(x\right)$ has one real zero and a complex conjugate pair of non-real zeros.
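The quadratic case analysis is straightforward to mechanize; a minimal sketch:

```python
def classify_quadratic(a, b, c):
    # the discriminant decides how many real zeros ax^2 + bx + c has
    disc = b * b - 4 * a * c
    if disc > 0:
        kind = "two distinct real zeros"
    elif disc == 0:
        kind = "one repeated real zero"
    else:
        kind = "a complex conjugate pair of non-real zeros"
    return disc, kind

print(classify_quadratic(1, -3, 2))  # (1, 'two distinct real zeros')
print(classify_quadratic(1, 2, 1))   # (0, 'one repeated real zero')
print(classify_quadratic(1, 0, 1))   # (-4, 'a complex conjugate pair of non-real zeros')
```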
{"url":"https://api-project-1022638073839.appspot.com/questions/what-is-the-determinant-of-a-quadratic-function","timestamp":"2024-11-11T04:37:08Z","content_type":"text/html","content_length":"38600","record_id":"<urn:uuid:f344ba4e-00cc-4220-acf4-5df034229d37>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00467.warc.gz"}
Year 3 Math Tutoring | Dymocks Tutoring Lesson breakdown Review and Reinforcement Review and reinforce the previous week's lesson to improve retention. Students also complete a weekly assessment to assess their understanding of the previous week's content. Explain and Explore Students are guided through a detailed analysis of the scheduled weekly material by the tutor. The course material is broken into bite-sized ‘chunks’ so it can be manageably understood and retained by students. Practice and Perform Students are guided on how to answer exercises and are offered feedback on their responses. Recap and Synthesise Students review the material covered in the lesson and are provided with weekly feedback identifying areas for improvement. Term course program Dymocks Term 1 Students Study whole numbers, addition and subtraction, multiplication and division and fractions and decimals. View the lesson plan I went from getting C’s and B’s to A’s in English so Dymocks Tutoring has definitely helped a lot. I really like how they teach to the syllabus. When I go into class, I know that what I’ve learned at Dymocks Tutoring is going to directly correlate to what I’m learning at school. Sara, Year 12 The Dymocks Learning System Improve your mark and rank at school by getting a head start on your subject. Potentia courses are three months (one term) in advance of the school year. Vertical streaming ensures that all students can be extended to their level of ability. As you achieve success build confidence in your ability. This will help you not just at high school but in your journey of lifelong learning. Continue to build a solid foundation for your future success both at school and in life. Learning at Dymocks Tutoring helps you to be expert in your subject which prepares you for post-school the average improvement reported by our 2019 HSC graduates. 
of students say their confidence improved significantly of students say that studying with Dymocks Tutoring made school easier Ready to try a better way of learning? Frequently Asked Questions What is covered in year 3 maths? After completing year 3 math, students should be confident in their problem-solving skills and ability to use technology to investigate mathematical concepts and check solutions. They should be able to use addition, subtraction, multiplication, division, fractions and money and recognise measurements like depth, weight, capacity and time units. They will also start to learn basic graphing techniques. These skills are essential to achieve excellent results in the upcoming maths NAPLAN. Our opportunity classes are geared towards students demonstrating high academic potential. They will learn with students of similar ability to help them secure a coveted Selected School placement. Our year 3 opportunity classes focus on multiple choice practice and weekly tests to prepare for the opportunity class test. How do I start studying maths for year 3? Students can start studying once they’ve successfully enrolled in year 3 maths. They will review the previous week’s lesson at the beginning of each lesson and recap the lesson at the end to ensure they’ve engaged with the concepts taught to succeed in their NAPLAN. How many terms are in year 3 math? Year 3 maths has four terms, including: • Term 1 — Students will study whole numbers, addition, subtraction, multiplication, division, fractions and decimals. • Term 2 — Students will learn patterns, algebra, lengths, area, volume and capacity. • Term 3 — Students will be taught mass, time, and 3D and 2D spaces, including symmetry. • Term 4 — Students will learn angles, position, data and chance. How much does maths year 3 cost? Dymocks Tutoring in Australia offers small group or private year 3 maths tutoring at $395.00 per term. Save 10% with discounts to support your child at a fraction of the original cost. 
It includes nine weekly lessons per term for 1.5 hours per lesson and one-on-one support. Our tutors are previous students and professional teachers selected for their subject matter expertise to mentor children so they can achieve their highest potential in maths NAPLAN. Dymocks Tutoring provides a free trial to all students to try year 3 maths tutoring for themselves to see if it’s the right fit. How do I enrol in year 3 maths tutoring? Submit our enrolment form to enrol in year 3 maths tutoring today. Choose the year 3 maths course and enter your payment to get started. With 99% of students saying Dymocks Tutoring boosted their confidence with their studies, you can count on our tutors to help your child succeed throughout their studies. If you have any queries, contact our customer service team, who will be happy to help.
{"url":"https://www.dymockstutoring.edu.au/year-3-courses/maths/","timestamp":"2024-11-04T14:48:27Z","content_type":"text/html","content_length":"186988","record_id":"<urn:uuid:457e1a2b-9327-48c1-a211-276d68644b26>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00883.warc.gz"}
Eighth Grade Number and Operations (NCTM) Understand numbers, ways of representing numbers, relationships among numbers, and number systems. Understand and use ratios and proportions to represent quantitative relationships. Compute fluently and make reasonable estimates. Develop, analyze, and explain methods for solving problems involving proportions, such as scaling and finding equivalent ratios. Geometry (NCTM) Analyze characteristics and properties of two- and three-dimensional geometric shapes and develop mathematical arguments about geometric relationships. Understand relationships among the angles, side lengths, perimeters, areas, and volumes of similar objects. Create and critique inductive and deductive arguments concerning geometric ideas and relationships, such as congruence, similarity, and the Pythagorean relationship. Apply transformations and use symmetry to analyze mathematical situations. Describe sizes, positions, and orientations of shapes under informal transformations such as flips, turns, slides, and scaling. Measurement (NCTM) Apply appropriate techniques, tools, and formulas to determine measurements. Solve problems involving scale factors, using ratio and proportion. Grade 8 Curriculum Focal Points (NCTM) Geometry and Measurement: Analyzing two- and three-dimensional space and figures by using distance and angle Students use fundamental facts about distance and angles to describe and analyze figures and situations in two- and three-dimensional space and to solve problems, including those with multiple steps. They prove that particular configurations of lines give rise to similar triangles because of the congruent angles created when a transversal cuts parallel lines. Students apply this reasoning about similar triangles to solve a variety of problems, including those that ask them to find heights and distances. 
They use facts about the angles that are created when a transversal cuts parallel lines to explain why the sum of the measures of the angles in a triangle is 180 degrees, and they apply this fact about triangles to find unknown measures of angles. Students explain why the Pythagorean Theorem is valid by using a variety of methods - for example, by decomposing a square in two different ways. They apply the Pythagorean Theorem to find distances between points in the Cartesian coordinate plane to measure lengths and analyze polygons and polyhedra.
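As a concrete instance of the last point, the distance between two points in the Cartesian plane is the hypotenuse given by the Pythagorean Theorem:

```python
import math

def distance(p, q):
    # Pythagorean Theorem: the distance is the hypotenuse of a right
    # triangle whose legs are the coordinate differences
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.sqrt(dx * dx + dy * dy)

print(distance((0, 0), (3, 4)))  # 5.0
```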
{"url":"https://newpathworksheets.com/math/grade-8/similarity-and-scale?dictionary=perimeter&did=52","timestamp":"2024-11-08T06:11:46Z","content_type":"text/html","content_length":"50427","record_id":"<urn:uuid:35b5d499-82b0-4cba-b7ea-4caff601d427>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00192.warc.gz"}
Properties of number 343343

343343 has 20 divisors (see below), whose sum is σ = 470568. Its totient is φ = 246960. The previous prime is 343337. The next prime is 343373. It is a happy number. 343343 is nontrivially palindromic in base 10. It is not a de Polignac number, because 343343 - 2^4 = 343327 is a prime. It is a congruent number. It is not an unprimeable number, because it can be changed into a prime (343303) by changing a digit. It is a polite number, since it can be written in 19 ways as a sum of consecutive naturals, for example, 26405 + ... + 26417. 2^343343 is an apocalyptic number. 343343 is a deficient number, since it is larger than the sum of its proper divisors (127225). 343343 is an equidigital number, since it uses as many digits as its factorization. 343343 is an evil number, because the sum of its binary digits is even. The sum of its prime factors is 52 (or 31 counting only the distinct ones). The product of its digits is 1296, while the sum is 20. The square root of 343343 is about 585.9547764120. The cubic root of 343343 is about 70.0233255599. It can be divided into two parts, 343 and 343, that added together give a palindrome (686). The spelling of 343343 in words is "three hundred forty-three thousand, three hundred forty-three", and thus it is an iban number.
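Several of the stated invariants (20 divisors, σ = 470568, φ = 246960) follow from the factorization 343343 = 7^4 × 11 × 13 and can be double-checked by brute force:

```python
def divisors(n):
    small = [d for d in range(1, int(n ** 0.5) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

def totient(n):
    # Euler's phi via trial-division factorization
    phi, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            phi -= phi // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        phi -= phi // m
    return phi

n = 343343
assert n == 7 ** 4 * 11 * 13
ds = divisors(n)
print(len(ds))     # 20
print(sum(ds))     # 470568
print(totient(n))  # 246960
```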
{"url":"https://www.numbersaplenty.com/343343","timestamp":"2024-11-12T13:55:40Z","content_type":"text/html","content_length":"8281","record_id":"<urn:uuid:fbb3b8b0-29e8-4c65-b761-a47860c0d62b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00756.warc.gz"}
Precalculus (6th Edition) Blitzer Chapter 7 - Section 7.6 - Linear Programming - Exercise Set - Page 872: Problem 5

a. See graph
b. $z(0,4)= 8$, $z(0,8)= 16$, $z(4,0)= 12$
c. Maximum $16$, at $(0,8)$.

Work Step by Step

a. We can graph the system of inequalities representing the constraints as shown in the figure, where the solution region is a triangle in the first quadrant.
b. With the corner points indicated in the figure, we can find the values of the objective function as $z(0,4)=3(0)+2(4)=8$, $z(0,8)=3(0)+2(8)=16$, $z(4,0)=3(4)+2(0)=12$.
c. We can determine the maximum value of the objective function as $z(0,8)= 16$, which occurs at $(0,8)$.
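The corner-point evaluations can be checked mechanically (objective z = 3x + 2y, as used in part b):

```python
corners = [(0, 4), (0, 8), (4, 0)]  # vertices of the feasible triangle

def z(x, y):
    # objective function from the exercise
    return 3 * x + 2 * y

values = {p: z(*p) for p in corners}
print(values)                       # {(0, 4): 8, (0, 8): 16, (4, 0): 12}
print(max(values, key=values.get))  # (0, 8), where the maximum of 16 occurs
```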
{"url":"https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-7-section-7-6-linear-programming-exercise-set-page-872/5","timestamp":"2024-11-04T18:01:40Z","content_type":"text/html","content_length":"65227","record_id":"<urn:uuid:ee88b72b-3e34-4977-a735-cb4bbe4c3d71>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00077.warc.gz"}
Multiverse, Hidden Variables or Random Chance? Which interpretation of quantum mechanics describes reality? The following is an article that originally appeared in the "Studium Integrale Journal" (Vol. 30, No. 1) of the Studiengemeinschaft Wort und Wissen. Quantum mechanics is considered one of the best-confirmed theories. Nevertheless, it continues to puzzle scientists. Even almost 100 years after its formulation, there is still disagreement about how its statements should be interpreted. There are many interpretations, all of which are consistent with the scientific data. This multitude of equally valid interpretations of the same theory leads us to a surprising analogy with worldviews. The double-slit experiment shows that quantum objects have both wave and particle properties (wave-particle duality). In addition, the type of measurement has an influence on the outcome of the experiment (the measurement problem). Mathematically, these observations can be described using various approaches, e.g. the Schrödinger picture, the Heisenberg picture and Feynman's path integral formulation. According to the Schrödinger picture, quantum objects are in a superposition of several states and are only fixed to one state by the measurement. The probability with which they are measured in a given state can be determined from Schrödinger's wave equation. The interpretation of these equations is the subject of controversial debate, and it has been shown that they can be read in several different ways. The most common is the Copenhagen interpretation, which states that nature actually behaves in this way; the measurement problem therefore marks the limit of what science can describe. The De Broglie-Bohm theory assumes that there are additional hidden variables that determine the state of the particles; these move on a pilot wave given by the Schrödinger equation. In the many-worlds interpretation, every possible state is realized after a measurement, though not in the same universe.
Each measurement here requires the creation of many new worlds, in each of which one of the possible states is realized. Despite their differences, all these interpretations can conclusively explain the double-slit experiment. Similarly, many natural phenomena can also be explained both by naturalistic evolution and by supernatural creation. At the end of the 19th century, many scientists were of the opinion that the open questions of physics would soon be solved; physics would soon be complete. For this reason, Max Planck was advised not to study physics (Planck 1943). Today we know that this was far from the truth. The 20th century began with a downright explosion of new discoveries and the resulting subfields within physics. On the one hand, Albert Einstein's special theory of relativity of 1905 revolutionized our understanding of space and time (Einstein 1905); later, Einstein developed the more general form of the theory of relativity, which includes gravity (Einstein 1915). On the other hand, the same Max Planck laid the foundation for quantum mechanics with his description of blackbody radiation (Planck 1901). The blackbody problem was one of the open questions in physics. A blackbody is an idealized construct that absorbs any radiation that hits it. According to the laws of radiation theory, this also means that it emits all the radiation that it produces due to its temperature. Our sun, for example, can be approximately regarded as such a blackbody. If you try to use the classical laws of radiation to calculate what intensity of light is emitted by the blackbody at a given temperature and frequency, the result is that it tends towards infinity at high (ultraviolet) frequencies. Such a solution not only makes no physical sense, but also contradicts the measurement results. After all, an oven or a light bulb does not emit large amounts of ultraviolet light, and the spectrum of light from the sun has a comparatively low UV content.
This problem was known as the "ultraviolet catastrophe". Planck was able to solve it by introducing an auxiliary constant, known today as Planck's constant. Originally, he intended to cancel this constant at the end of his calculations; however, he only obtained correct results with a non-vanishing Planck constant. This means that energy does not occur in arbitrary quantities, but in quantized packets of fixed size. This realization gave rise to the concept of quantum mechanics. Even more than the theory of relativity, quantum theory has put the imagination and understanding of scientists to the test. Further discoveries, such as the photoelectric effect, showed that light not only acts as a wave, as was previously assumed, but also has the properties of particles; Newton had already suspected this. Conversely, it was shown that objects previously regarded as pure particles, such as electrons, also have a wave nature. The situation became even stranger when it was discovered that the measurement itself has an influence on whether the measured object behaves like a particle or like a wave. This phenomenon, which contains the essential statements of quantum mechanics, is particularly easy to illustrate using the double-slit experiment.

The Double-Slit Experiment

The setup of the double-slit experiment consists of two walls: the front wall is provided with two slits, and the rear wall serves as a screen that absorbs incoming objects. Such an experimental setup is interesting because particle-like and wave-like objects behave differently here; it can be used to determine whether objects such as light behave like a wave or a particle. With particle-like objects, the behavior is simple: the particles that do not bounce off the front wall pass through one of the two slits and therefore arrive in the area around these two slits on the rear wall (cf. Fig. 1).
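The contrast between the classical prediction and Planck's law can be made concrete numerically. The sketch below compares the Rayleigh-Jeans spectral energy density 8πν²kT/c³ with Planck's formula (8πhν³/c³)/(e^{hν/kT} − 1) at a sun-like temperature. The temperature and sample frequencies are illustrative choices, not values from the article:

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Classical spectral energy density: grows without bound like nu^2."""
    return 8 * math.pi * nu**2 * k * T / c**3

def planck(nu, T):
    """Planck's law: the exponential cutoff tames the ultraviolet divergence."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

T = 5800.0  # roughly the Sun's surface temperature (illustrative)
for nu in (1e13, 1e14, 1e15, 1e16):  # infrared up to far ultraviolet
    print(f"{nu:.0e} Hz  RJ={rayleigh_jeans(nu, T):.3e}  Planck={planck(nu, T):.3e}")
```

At low frequency the two formulas agree closely; in the far ultraviolet the classical value keeps growing while Planck's law is suppressed by many orders of magnitude, which is exactly the "catastrophe" being avoided.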
With wave-like objects, the behavior is more complicated: after passing the double slit, a wave splits into two waves (one per slit), which meet again behind the double slit. Two effects can occur in principle: on the one hand, a wave trough can meet a wave trough, or a wave crest can meet a wave crest; on the other hand, a wave crest can meet a wave trough, or vice versa. The result is the sum of the amplitudes of the waves hitting each other. This means that in the former case the amplitude doubles, while in the latter case the amplitudes cancel each other out. The result is a pattern with a large maximum in the center, followed by smaller maxima towards the outside. This effect is called interference, and the resulting pattern is called an interference pattern (cf. Fig. 1). We now perform this experiment with photons or electrons, for example. It results in an interference pattern in both cases. Problem solved, we are dealing with waves, you might think. But if you turn down the intensity of the radiation, you notice that individual points are always detected on the screen, which in turn indicates a particle nature. This seems contradictory: how can an object act simultaneously as a particle and as a wave? It is as if the particle passes through both slits, interferes with itself and then hits the screen. The question is what exactly happens at the double slit. Therefore, we now position a detector at the front wall, which checks which slit the particle goes through. And this is where it gets strange: if we measure which slit the object passes through, we find that it only goes through one slit or the other. At the same time, an interference pattern no longer appears on the screen, but rather the pattern that would have been expected in the case of particles.
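The rule "add the amplitudes first, then square" is what produces the fringes. A minimal far-field sketch, with an illustrative wavelength and slit separation (these parameters are assumptions, not values from the article), shows a central maximum of twice the classical particle prediction and a dark fringe where the two amplitudes cancel:

```python
import cmath, math

wavelength = 500e-9   # illustrative: 500 nm light
d = 2e-6              # illustrative slit separation

def intensity(theta):
    """Add the two AMPLITUDES first, then square: interference."""
    delta = 2 * math.pi * d * math.sin(theta) / wavelength  # path-difference phase
    amp = cmath.exp(0j) + cmath.exp(1j * delta)             # one unit amplitude per slit
    return abs(amp) ** 2

def classical_intensity(theta):
    """Add the two INTENSITIES: what pure particles would give (no fringes)."""
    return 1.0 + 1.0  # angle-independent in this toy model

# Central maximum: both amplitudes in phase, so intensity 4 (twice the classical 2).
print(intensity(0.0))
# First dark fringe: sin(theta) = lambda / (2 d), so the amplitudes cancel.
theta_dark = math.asin(wavelength / (2 * d))
print(round(intensity(theta_dark), 12))
```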
In other words, by measuring which slit the object passes through, we have changed the outcome of the experiment, as if the objects knew that we were observing them and therefore behaved differently. This effect is called the measurement problem.

Mathematical Description

Even though the new findings were very strange and, as we will see later, no satisfactory interpretation has yet been found, a mathematical description was discovered relatively quickly: in 1925/26, Erwin Schrödinger with his wave mechanics (Schrödinger 1926) and Werner Heisenberg with his matrix mechanics (Heisenberg 1925) independently found a suitable description of the effects. Schrödinger established the wave equation named after him, which can describe the wave behavior of quantum objects at any point in time. However, an interpretation had to be found as to what this wave physically describes. Max Born interpreted it such that the magnitude squared of the wave function indicates the probability of finding a particle in a certain state. Quantum mechanics therefore works in such a way that quantum objects are in a superposition of several different states; during a measurement, however, the object is always found in a single state. The wave function is a measure of the probability of finding a quantum object in a certain state at a certain time. Since a measurement cancels the superposition, this is referred to as the collapse of the wave function. With this description it is possible to describe the behavior on a microscopic level. However, this is not completely possible: only probabilities can be given as to whether something will happen or not. In classical physics, i.e. physics without quantum mechanics, if the initial conditions of an object and the forces at work are known, you can predict its behavior definitely and at any point in time. This is no longer possible in quantum mechanics. Quantum mechanics is therefore not deterministic.
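Born's rule can be illustrated with a toy two-state system. In the sketch below, the amplitudes 0.6 and 0.8i are an arbitrary example; the measurement probabilities are the squared magnitudes, and repeated simulated measurements reproduce them as frequencies:

```python
import random

# Born rule on a hypothetical superposition psi = a|0> + b|1>.
a, b = 0.6, 0.8j                      # example amplitudes; |a|^2 + |b|^2 = 1
probs = [abs(a) ** 2, abs(b) ** 2]    # measurement probabilities: 0.36 and 0.64
assert abs(sum(probs) - 1.0) < 1e-12  # normalization check

def measure(rng):
    """One projective measurement: returns the outcome and the collapsed state."""
    outcome = rng.choices([0, 1], weights=probs)[0]
    collapsed = [1.0, 0.0] if outcome == 0 else [0.0, 1.0]
    return outcome, collapsed

rng = random.Random(0)  # fixed seed for reproducibility
counts = [0, 0]
for _ in range(100_000):
    counts[measure(rng)[0]] += 1
print(probs)                 # [0.36, 0.64]
print(counts[0] / 100_000)   # close to 0.36
```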
One of the questions that arises is whether this means that quantum mechanics is an incomplete theory, or whether nature actually behaves in this way and the processes at a fundamental level are left to chance. Before we pursue this question, it should be mentioned that there are various mathematical descriptions of quantum mechanics. In addition to the Schrödinger picture described above, there is the Heisenberg picture, which is similar and can be converted into the Schrödinger picture through a mathematical transformation. In addition, Richard Feynman developed the so-called path integral formulation of quantum mechanics in his doctoral thesis (Feynman 1948). In classical physics (physics without quantum mechanics), an object chooses the path on which the action (a quantity with the dimensions of energy times time) is minimized. In Feynman's path integral formalism, every possible path that an object can choose contributes to the total probability that it will move from A to B. Paths whose action is close to the stationary (classical) value add up coherently and therefore dominate the overall probability. This formulation is also equivalent to the other descriptions and can be derived from them.

Various Interpretations

Since its formulation, quantum mechanics has been the subject of controversial debate. Its usefulness was undeniable; even today it is considered one of the best-confirmed theories. However, many of its statements, such as indeterminism, were and still are a thorn in the side of many scientists. This was also the case for Albert Einstein, who used the words "God does not play dice" to express his displeasure with a quantum mechanics based on probabilities (Einstein & Born 1972). After a mathematical description of quantum mechanics was found, we come to the actual problem of interpreting these findings appropriately. No conclusive result has yet been achieved in this area.
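A toy numerical sketch can illustrate the stationary-action idea behind the path integral. For a free particle, consider a one-parameter family of deformed paths: the straight-line path minimizes the (discretized) action, and when each path contributes a phase exp(iS/ħ), the paths near the classical one dominate the sum. The path family and the value of "hbar" here are illustrative choices, not part of Feynman's actual construction:

```python
import cmath, math

# Toy free-particle paths from x=0 at t=0 to x=1 at t=1, deformed by one
# parameter eps:  x(t) = t + eps * sin(pi t).  Units with mass m = 1.
N = 1000
dt = 1.0 / N

def action(eps):
    """Discretized S = integral of (1/2) * xdot^2 dt along the path."""
    S = 0.0
    for i in range(N):
        t = (i + 0.5) * dt
        xdot = 1.0 + eps * math.pi * math.cos(math.pi * t)
        S += 0.5 * xdot ** 2 * dt
    return S

eps_grid = [i / 100 for i in range(-100, 101)]   # eps in [-1, 1]
actions = {e: action(e) for e in eps_grid}
classical = min(actions, key=actions.get)
print(classical)   # 0.0: the straight line makes the action stationary

# Each path contributes a phase exp(i S / hbar); paths far from the classical
# one oscillate rapidly and largely cancel, so the near-classical paths dominate.
hbar = 0.1  # tunable phase scale for the demonstration
total = sum(cmath.exp(1j * actions[e] / hbar) for e in eps_grid)
near = sum(cmath.exp(1j * actions[e] / hbar) for e in eps_grid if abs(e) <= 0.3)
print(abs(near) / abs(total) > 0.5)   # most of the magnitude comes from near eps = 0
```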
This is also due to the fact that the same predictions can be derived from each of the different interpretations, and the "correct" interpretation can therefore only be tested experimentally to a limited extent. The most common interpretations are briefly described below.

The Copenhagen Interpretation

The first interpretation, known as the Copenhagen interpretation, was formulated as early as 1927 by Niels Bohr and Werner Heisenberg (see Fig. 2). In principle, it states that the phenomena of quantum mechanics do not indicate an incompleteness of the theory, but rather exist in nature as such. They therefore show where the limits of science lie: only that which is measured is real. No statement can be made about events that occur between the measurements without getting caught up in contradictions. This can be seen, for example, in the double slit described above: if you consider what happens before the measurement on the screen, it is to be expected that about one half of the objects hitting the screen pass through the first slit and the other half through the second slit. Summing up the expected distributions, a pattern is obtained that does not correspond to the interference pattern (cf. Heisenberg 1979). In science, something objective that is independent of the observer thus becomes something subjective that has no clearly definable reality beyond our measurements. What happens between the measurements cannot be answered. With the wave function, quantum mechanics provides a tool to describe what could happen; this can be used to determine the probability of what future measurements will reveal. But only what is measured can be certain. In physics, this interpretation is regarded as the standard. This is mainly because it is pragmatic and argues according to the principle "that's just the way it is".
This is helpful in order to focus on scientific work without getting lost in endless philosophical quandaries. This view of the theory of quantum mechanics as a tool without a deeper reality can be classified as instrumentalism. In contrast to this is scientific realism, which assumes that reality is as predicted by our confirmed theories. Nevertheless, the Copenhagen interpretation can also be regarded as a real description of reality. Still, many scientists are dissatisfied with this interpretation, especially with the resulting indeterminism, which is also due to the fact that until the advent of quantum mechanics every theory made deterministic predictions. Therefore, over the course of time, alternative interpretations were developed.

Hidden Variable Theories

Hidden-variable theories are an umbrella term for interpretations of quantum mechanics which assume that quantum mechanics is not a complete theory and that further (hidden) variables exist that determine the system. If such a theory uses local hidden variables, it would fulfill Bell's inequalities (Shimony 2004). It was experimentally confirmed, however, that these inequalities are not fulfilled (Aspect 1999). Theories with local hidden variables are thus severely restricted. De Broglie-Bohm theory: the best-known non-local theory with hidden variables, and therefore not affected by these restrictions, is the De Broglie-Bohm theory (see Fig. 3). It states that the Schrödinger equation does not describe the entire system. According to this theory, each particle has a well-defined location and a well-defined velocity, which are determined by a so-called guiding equation. The wave function then acts as a guiding wave (pilot wave). Since the guiding equation is a differential equation involving the wave function, initial conditions must be specified for an exact description.
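The restriction from Bell's inequalities can be made concrete with the CHSH form of the inequality: local hidden-variable theories bound a certain combination of correlations by 2, while the quantum singlet-state correlation E(a, b) = −cos(a − b) reaches 2√2. A short sketch with the standard measurement angles:

```python
import math

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable theories satisfy |S| <= 2; quantum mechanics
# predicts E(a, b) = -cos(a - b) for measurements on the singlet state.
def E(a, b):
    return -math.cos(a - b)

# Standard angle choices (radians) that maximize the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.828 > 2: the inequality is violated
```

This violation is what the experiments cited in the text (Aspect 1999) confirmed, ruling out local hidden variables.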
Since these cannot be determined, this leads to the apparent indeterminism of quantum mechanics (Dürr, Goldstein & Zanghí 1992).

Many-Worlds Interpretation

The many-worlds interpretation is probably the best-known alternative. It avoids the problem of indeterminism and the measurement problem by assuming that the Schrödinger equation is suitable for describing any closed system (Tegmark 2009). This means that at any point in time, even during a measurement, the system can be completely described with the Schrödinger equation. Conversely, this means that there can be no collapse of the wave function; instead, after a measurement of a superposition state, all states are realized, but in different universes (see Fig. 4). With each measurement process, there arises for each possible outcome a new branch of a universe in which exactly this outcome has occurred. The idea of several universes in which one's own self exists again and has taken a different path is very interesting for many people; this concept of a multiverse has now reached pop culture and can be found in many books and films. Whether reality really looks like this is unclear, not least because of the sheer number of conceivable universes, most of which are as alike as two peas in a pod. This is just an overview of the many interpretations of quantum mechanics that have been developed since its formulation. They are all able to reproduce the results of quantum mechanics. This means that it is not possible to find out scientifically which is the correct interpretation. It may be possible at some point in the future to use more precise measurements to exclude one or the other interpretation. However, it is quite possible that further interpretations will be found in the meantime. Presumably one will never be able to decide which interpretation is the correct one using scientific methods alone.
This shows the limits of science: when it comes to interpreting scientific findings, the scientific method can only provide limited answers; other means must be used here. And even if you arrive at a reasonable interpretation that is consistent with the measurement data, this does not mean that it is the right one. There could be another interpretation that describes the measurement data just as well or even more accurately. One other thing stands out when considering these different interpretations: although the statements of quantum mechanics are not deterministic, as can be seen from the Copenhagen interpretation, most alternative interpretations allow for a deterministic worldview. In a deterministic view of the world, everything is predetermined, even the processes in the human brain. Free will (in the sense of genuine libertarian freedom) does not exist according to this view. Furthermore, the many-worlds interpretation results in a view according to which each individual human being is nothing special, but only one of countless other variants of "his" life and basically only an arbitrarily short episode of a gigantic copying process. Decisions made would not represent anything worthy of recognition, since it would be pure coincidence that "one" would find oneself in a universe in which "one" had made these decisions. What is more: the personal identity of the human subject (that I am the same, throughout time and thus through the many events of my life) becomes questionable here. These interpretations obviously stem from a fatalistic view of the world, which also denies the objective personal identity of the ego as it corresponds to our natural understanding of ourselves. This does not follow from the scientific data; after all, the Copenhagen interpretation is also in harmony with this data.
Rather, it stems from the personal worldview of those who postulated these theories. For the sake of completeness, it should be said that there are some interpretations besides the Copenhagen interpretation that do not contradict an indeterministic worldview. The free will theorem by John Conway and Simon Kochen, for example, even goes so far as to assign a degree of free will to particles under certain conditions (Conway & Kochen 2006).

Different World Views

This situation of different, empirically equivalent interpretations of a theory can be applied not only to individual scientific theories, but also to the interpretation of science as a whole. This is often referred to as different worldviews. If we replace the findings and interpretations of quantum mechanics with those of science as a whole in our considerations, we essentially arrive at the same statement. On the one hand, there are the scientific facts (i.e. the pure measurement data), which are irrefutable within their respective uncertainties. On the other hand, there is an interpretation of these facts. This can be strongly influenced by one's own worldview, and the more relevant these facts are for the decisive questions of our worldviews, the more this is the case. In this respect, it is hardly surprising that, as already stated, a fatalistic and deterministic worldview can be found in most interpretations, considering that many scientists share such a worldview. However, it is not so much the scientific facts that lead to such a worldview; rather, it is the presence of such a worldview among researchers that leads to such an interpretation of the facts. The multiplicity of different interpretations of quantum mechanics thus represents an analogy to the multiplicity of different worldviews that are in line with the scientific data.
Conversely, this means that it should be possible to find interpretations of scientific facts that are in harmony with other worldviews. It is also to be expected that the correct worldview yields an interpretation of the facts that goes beyond the current theories and leads to further (empirically sound) statements that can be used to confirm it. The reason why the Copenhagen interpretation is still considered the standard is that none of the alternative interpretations provides any added value in scientific terms. In the same way, alternative theories cannot be expected to replace the existing theories as long as they do not make more accurate predictions than these, which can then be tested experimentally. In summary, this situation of different, empirically equivalent interpretations of quantum mechanics shows once again that there are several ways of interpreting the facts in science. The predominance of one interpretation does not necessarily mean that this interpretation, let alone the worldview on which it is based, is the correct one. Just as it is impossible to scientifically arrive at an unambiguous interpretation of quantum mechanics, it is scientifically impossible to refute the worldview of a created world or of a world that arose by chance. This is beyond the limits of science. It is only possible to choose a provisional interpretation that best does justice to the findings as a whole.

References

Aspect A (1999) Bell's inequality test: more ideal than ever. Nature 398, 189–190. doi:10.1038/18296.
Conway J & Kochen S (2006) The Free Will Theorem. Found. Phys. 36, 1441–1473. doi:10.1007/s10701-006-9068-6.
Dürr D, Goldstein S & Zanghí N (1992) Quantum equilibrium and the origin of absolute uncertainty. J. Stat. Phys. 67, 843–907. doi:10.1007/BF01049004.
Einstein A (1905) Zur Elektrodynamik bewegter Körper. Annalen der Physik und Chemie 17, 891–921.
Einstein A (1915) Erklärung der Perihelbewegung des Merkur aus der allgemeinen Relativitätstheorie. Sitzungsberichte der Preußischen Akademie der Wissenschaften, 831–839.
Einstein A & Born M (1972) Briefwechsel 1916–1955. Rowohlt Taschenbuchverlag, Reinbek bei Hamburg, 97f.
Feynman R (1948) Space-time approach to non-relativistic quantum mechanics. Rev. Mod. Phys. 20, 367–387.
Heisenberg W (1925) Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen. Zeitschrift für Physik 33, 879–893. doi:10.1007/BF01328377.
Heisenberg W (1979) Quantentheorie und Philosophie. Reclam, pp. 55–56.
Planck M (1901) Ueber das Gesetz der Energieverteilung im Normalspectrum. Annalen der Physik 309, 3, 553–563. doi:10.1002/andp.19013090310.
Planck M (1943) Wege zur Physikalischen Erkenntnis. Reden und Vorträge, 1. Leipzig, S. Hirzel.
Schrödinger E (1926) Quantisierung als Eigenwertproblem. Annalen der Physik 79, 361, 489; 80, 437 und 81, 109.
Shimony A (2004) Bell's Theorem. In: Zalta EN: The Stanford Encyclopedia of Philosophy, Stanford.
Struyve W (2004) The De Broglie-Bohm pilot-wave interpretation of quantum theory. arXiv:quant-ph/0506243.
Tegmark M (2009) Many Worlds in Context. arXiv:0905.2182.
{"url":"https://scifaith.substack.com/p/multiverse-hidden-variables-or-random","timestamp":"2024-11-05T23:31:25Z","content_type":"text/html","content_length":"220592","record_id":"<urn:uuid:b3ff3da6-105b-443a-8809-efa946ec963b>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00559.warc.gz"}
Cartan for Beginners: Differential Geometry via Moving Frames and Exterior Differential Systems, Second Edition

Graduate Studies in Mathematics, Volume 175; 2016; 455 pp. MSC: Primary 35; 37; 53; 58.

Hardcover ISBN: 978-1-4704-0986-9, Product Code: GSM/175. List Price: $135.00; MAA Member Price: $121.50; AMS Member Price: $108.00.
eBook ISBN: 978-1-4704-3591-2, Product Code: GSM/175.E. List Price: $85.00; MAA Member Price: $76.50; AMS Member Price: $68.00.
Hardcover + eBook, Product Code: GSM/175.B. List Price: $220.00 (discounted: $177.50); MAA Member Price: $198.00 (discounted: $159.75); AMS Member Price: $176.00 (discounted: $142.00).

Two central aspects of Cartan's approach to differential geometry are the theory of exterior differential systems (EDS) and the method of moving frames. This book presents thorough and modern treatments of both subjects, including their applications to both classic and contemporary problems in geometry.
It begins with the classical differential geometry of surfaces and basic Riemannian geometry in the language of moving frames, along with an elementary introduction to exterior differential systems. Key concepts are developed incrementally, with motivating examples leading to definitions, theorems, and proofs. Once the basics of the methods are established, the authors develop applications and advanced topics. One notable application is to complex algebraic geometry, where they expand and update important results from projective differential geometry. As well, the book features an introduction to \(G\)-structures and a treatment of the theory of connections. The techniques of EDS are also applied to obtain explicit solutions of PDEs via Darboux's method, the method of characteristics, and Cartan's method of equivalence. This text is suitable for a one-year graduate course in differential geometry, and parts of it can be used for a one-semester course. It has numerous exercises and examples throughout. It will also be useful to experts in areas such as geometry of PDE systems and complex algebraic geometry who want to learn how moving frames and exterior differential systems apply to their fields. The second edition features three new chapters: on Riemannian geometry, emphasizing the use of representation theory; on the latest developments in the study of Darboux-integrable systems; and on conformal geometry, written in a manner to introduce readers to the related parabolic geometry perspective.

Readership: Graduate students and researchers interested in differential geometry, in particular, in exterior systems and the moving frames method and in its applications in algebraic geometry, PDE, and other areas of mathematics.

Table of Contents

• Chapter 1. Moving frames and exterior differential systems
• Chapter 2. Euclidean geometry
• Chapter 3. Riemannian geometry
• Chapter 4. Projective geometry I: Basic definitions and examples
• Chapter 5.
Cartan-Kähler I: Linear algebra and constant-coefficient homogeneous systems
• Chapter 6. Cartan-Kähler II: The Cartan algorithm for linear Pfaffian systems
• Chapter 7. Applications to PDE
• Chapter 8. Cartan-Kähler III: The general case
• Chapter 9. Geometric structures and connections
• Chapter 10. Superposition for Darboux-integrable systems
• Chapter 11. Conformal differential geometry
• Chapter 12. Projective geometry II: Moving frames and subvarieties of projective space
• Appendix A. Linear algebra and representation theory
• Appendix B. Differential forms
• Appendix C. Complex structures and complex manifolds
• Appendix D. Initial value problems and the Cauchy-Kowalevski theorem

Reviews

"[T]his book, like the first edition, is an excellent source for graduate students and professional mathematicians who want to learn about moving frames and G-structures in trying to understand differential geometry." (Thomas Garrity, Mathematical Reviews)

"All the material is carefully developed, many examples supporting the understanding. The reviewer warmly recommends this volume to mathematical university libraries." (Gabriel Eduard Vilcu, Zentralblatt MATH)
Jury CMO 2023
• Pavel Kozhevnikov: Chair of the Jury for the Caucasus Mathematical Olympiad, chair of the Problem Selection Committee for the Caucasus Mathematical Olympiad, associate professor at Moscow Institute of Physics and Technology, member of the Jury for the All-Russian Mathematical Olympiad for School Students, PhD in Physics and Math.
• Nazar Agakhanov: Head of the Problem Selection Committee for the All-Russian Mathematical Olympiad for School Students, associate professor at Moscow Institute of Physics and Technology, PhD in Physics and Math.
• Egor Bakaev: Mathematics teacher at the Letovo school; member of the Problem Selection Committees of the Tournament of Cities, the All-Russian Mathematical Olympiad for School Students, and the Moscow Mathematical Olympiad; editor of the Quantum magazine.
• Dmitry Belov: Mathematics teacher of the Shkolkovo online project, member of the Jury for the All-Russian Mathematical Olympiad for School Students.
• Andrei Biryuk: Associate professor at the Department of Theory of Functions at the Faculty of Mathematics and Computer Sciences at Kuban State University, PhD in Physics and Math.
• Ilya Bogdanov: Associate professor at the Moscow Institute of Physics and Technology, member of the Jury of the All-Russian Mathematical Olympiad for School Students, member of the Problem Selection Committee of the International Mathematical Olympiad.
• Sergei Boichenko: Senior lecturer at the Department of Applied Mathematics, Information Technologies and Information Security at Adyghe State University.
• Konstantin Bondarenko: Teacher of mathematics at School No. 2086 (Moscow) and at club and field schools at the Two Times Two (2x2) Creative Laboratory, founder and chairman of the Organizing Committee of the Mobius tournament.
• Vsevolod Voronov: Associate professor at the Department of Applied Mathematics, Information Technology and Information Security, PhD in Technical Sciences.
• Evgeniy Dergachev: Postgraduate student at the Department of Discrete Mathematics of the Moscow Institute of Physics and Technology, teaching assistant at the Department of Applied Mathematics, Information Technology and Information Security of Adyghe State University.
• Danila Dyomin: Student at the Phystech School of Applied Mathematics and Informatics at Moscow Institute of Physics and Technology, gold medal winner of the International Mathematical Olympiad in 2020.
• Oleg Dmitriev: Senior lecturer at Chernyshevsky Saratov State University, member of the Jury for the All-Russian Mathematical Olympiad for School Students.
• Vladimir Dolnikov: Professor at the Moscow Institute of Physics and Technology, member of the Jury of the All-Russian Mathematical Olympiad for School Students, PhD in Physics and Math.
• Sergei Dorichenko: Math teacher at School No. 179 of Moscow, chairman of the Jury of the International Mathematical Tournament of Towns.
• Lev Emelyanov: Senior lecturer at the Department of Higher Mathematics at the Kaluga campus of Bauman Moscow State Technical University, member of the Jury for the All-Russian Mathematical Olympiad for School Students.
• Yuri Karpenko: Senior lecturer at the Department of Algebra and Geometry at Adyghe State University.
• Yuri Kuzmenko: Lecturer at the Department of Higher Mathematics of the Moscow Institute of Physics and Technology and at Physics and Mathematics Lyceum No. 5 (Dolgoprudny).
• Nataliya Kuprienko: Senior lecturer at the Department of Algebra and Geometry at Adyghe State University.
• Ivan Kukharchuk: Student of the Faculty of Mechanics and Mathematics at Moscow State University (MSU).
• Karine Kuyumzhiyan: Associate professor at the Faculty of Mathematics of the Higher School of Economics.
• Andrei Reznikov: Senior lecturer at the Department of Applied Mathematics, Information Technologies and Information Security at Adyghe State University, PhD in Physics and Math.
• Alexander Semenov: Teacher at the Altair Regional Center for Identifying, Supporting and Developing Abilities and Talents in Children and Youth (Republic of Dagestan).
• Andrey Solynin: Teacher at the All-Russian Summer Mathematical Schools, member of the Jury of the final stage of the All-Russian Mathematical Olympiad for School Students, member of the Jury of the International Mathematical Olympiad (2020, 2021).
• Kirill Sukhov: Math teacher at the Presidential School No. 239 specializing in Physics and Math, member of the Jury of the All-Russian Mathematical Olympiad for School Students, main coach of the Russian team at the International Mathematical Olympiad.
• Alina Sukhova: Supplementary education teacher at the Presidential Lyceum № 239 (St. Petersburg), member of the Jury at the All-Russian Mathematical Olympiad.
• Oleg Yuzhakov: Director of the Center for Continuous Mathematical Education (city of Kurgan), member of the Jury of the All-Russian Mathematical Olympiad for School Students.
Ordered Logit
Related Online Training modules: generally it is best to access online training from within Q by selecting Help > Online Training. From Q5 onward, Ordered Logit models are best performed using Regression - Ordered Logit; this page describes the legacy functionality. Ordered Logit is estimated in Q when the Dependent question is a Pick One question, its Variable Type is Ordered Categorical, and it contains three or more categories.
Additional outputs
With ordered logit, additional parameters are shown at the beginning of the output which relate to the thresholds for the categories. An understanding of the working of these thresholds is most readily obtained by checking the option for Construct variable(s) containing predictions and reviewing the JavaScript of the created variables (alternatively, review the first of the references provided in Regression Outputs).
Additional Properties
When using this feature you can obtain additional information that is stored by the R code which produces the output.
1. To do so, select Create > R Output.
2. In the R CODE, paste: item = YourReferenceName
3. Replace YourReferenceName with the reference name of your item. Find this in the Report tree or by selecting the item and then going to Properties > General > Name from the object inspector on the right.
4. Below the first line of code, you can paste in snippets from below or type str(item) to see a list of available information.
For a more in-depth discussion on extracting information from objects in R, check out our blog post. Properties which may be of interest are:
• Summary outputs from the regression model: item$summary$coefficients # summary regression outputs
Class EnumeratedIntegerDistribution
All Implemented Interfaces: Serializable, IntegerDistribution
Implementation of an integer-valued enumerated distribution. Values with zero probability are allowed but they do not extend the support. Duplicate values are allowed; probabilities of duplicate values are combined when computing cumulative probabilities and statistics.
• Constructor Summary: Create a discrete integer-valued distribution from the input data. Create a discrete distribution using the given probability mass function definition.
• Method Summary: For a random variable X whose values are distributed according to this distribution, this method returns P(X <= x). Use this method to get the numerical value of the mean of this distribution. Use this method to get the numerical value of the variance of this distribution. Return the probability mass function as a list of (value, probability) pairs. Access the lower bound of the support. Access the upper bound of the support. Use this method to get information about whether the support is connected. For a random variable X whose values are distributed according to this distribution, this method returns P(X = x). Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• Constructor Details
□ EnumeratedIntegerDistribution: Create a discrete distribution using the given probability mass function definition. Parameters: singletons - array of random variable values; probabilities - array of probabilities. Throws: MathIllegalArgumentException - if singletons.length != probabilities.length; MathIllegalArgumentException - if probabilities contains negative, infinite or NaN values or only 0's.
□ EnumeratedIntegerDistribution: public EnumeratedIntegerDistribution(int[] data). Create a discrete integer-valued distribution from the input data. Values are assigned mass based on their frequency.
For example, [0, 1, 1, 2] as input creates a distribution with values 0, 1 and 2 having probability masses 0.25, 0.5 and 0.25 respectively. Parameters: data - input dataset.
• Method Details
□ probability: public double probability(int x). For a random variable X whose values are distributed according to this distribution, this method returns P(X = x). In other words, this method represents the probability mass function (PMF) for the distribution. Parameters: x - the point at which the PMF is evaluated. Returns: the value of the probability mass function at x.
□ cumulativeProbability: public double cumulativeProbability(int x). For a random variable X whose values are distributed according to this distribution, this method returns P(X <= x). In other words, this method represents the (cumulative) distribution function (CDF) for this distribution. Parameters: x - the point at which the CDF is evaluated. Returns: the probability that a random variable with this distribution takes a value less than or equal to x.
□ getNumericalMean: public double getNumericalMean(). Use this method to get the numerical value of the mean of this distribution. Returns: sum(singletons[i] * probabilities[i]).
□ getNumericalVariance: public double getNumericalVariance(). Use this method to get the numerical value of the variance of this distribution. Returns: sum((singletons[i] - mean)^2 * probabilities[i]).
□ getSupportLowerBound: public int getSupportLowerBound(). Access the lower bound of the support. This method must return inf {x in Z | P(X <= x) > 0}. Returns: the lowest value with non-zero probability.
□ getSupportUpperBound: public int getSupportUpperBound(). Access the upper bound of the support. This method must return inf {x in R | P(X <= x) = 1}. Returns: the highest value with non-zero probability.
□ isSupportConnected: public boolean isSupportConnected(). Use this method to get information about whether the support is connected, i.e. whether all integers between the lower and upper bound of the support are included in the support. The support of this distribution is connected.
□ getPmf: Return the probability mass function as a list of (value, probability) pairs. Returns: the probability mass function.
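The documented behavior can be sketched in Python. This is a minimal re-implementation of the semantics described above, not the Hipparchus code itself; the method names mirror the Javadoc for readability:

```python
from collections import Counter


class EnumeratedIntegerDistribution:
    """Minimal sketch of an enumerated integer distribution.

    Values are assigned mass based on their frequency in the input
    data; probabilities of duplicate values are combined.
    """

    def __init__(self, data):
        counts = Counter(data)
        n = len(data)
        # Combined probability mass for each distinct value.
        self.pmf = {value: count / n for value, count in sorted(counts.items())}

    def probability(self, x):
        # P(X = x): the probability mass function at x.
        return self.pmf.get(x, 0.0)

    def cumulative_probability(self, x):
        # P(X <= x): the cumulative distribution function at x.
        return sum(p for v, p in self.pmf.items() if v <= x)

    def numerical_mean(self):
        # sum(singletons[i] * probabilities[i])
        return sum(v * p for v, p in self.pmf.items())

    def numerical_variance(self):
        # sum((singletons[i] - mean)^2 * probabilities[i])
        m = self.numerical_mean()
        return sum((v - m) ** 2 * p for v, p in self.pmf.items())


# The documented example: [0, 1, 1, 2] gives masses 0.25, 0.5, 0.25.
d = EnumeratedIntegerDistribution([0, 1, 1, 2])
print(d.probability(1))             # 0.5
print(d.cumulative_probability(1))  # 0.75
print(d.numerical_mean())           # 1.0
```

Note how the duplicate value 1 has its two 0.25 masses combined into a single 0.5 entry, exactly as the class documentation requires.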
CVXR: An R Package for Disciplined Convex Optimization A. Fu, B. Narasimhan, and S. Boyd Journal of Statistical Software, vol. 94 (14): 1-34, August 2020. CVXR is an R package that provides an object-oriented modeling language for convex optimization, similar to CVX, CVXPY, YALMIP, and Convex.jl. It allows the user to formulate convex optimization problems in a natural mathematical syntax rather than the standard form required by most solvers. The user specifies an objective and set of constraints by combining constants, variables, and parameters using a library of functions with known mathematical properties. CVXR then applies signed disciplined convex programming (DCP) to verify the problem's convexity. Once verified, the problem is converted into standard conic form using graph implementations and passed to a cone solver such as ECOS or SCS. We demonstrate CVXR's modeling framework with several applications.
Jinyan Fan: The CP-matrix Approximation Problem (Academy of Mathematics and Systems Science, Chinese Academy of Sciences) Colloquia & Seminars Speaker: Jinyan Fan, Shanghai Jiao Tong University Title: The CP-matrix Approximation Problem Time & Venue: 2018.11.09 14:00 N205 Abstract: A symmetric matrix $A$ is completely positive (CP) if there exists an entrywise nonnegative matrix $V$ such that $A = V V^T$. In this talk, we discuss the CP-matrix approximation problem: for a given symmetric matrix $C$, find a CP matrix $X$ that is as close to $C$ as possible, under some linear constraints. We formulate the problem as a linear optimization problem with the norm cone and the cone of moments, then construct a hierarchy of semidefinite relaxations for solving it.
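The definition can be checked directly: exhibiting an entrywise nonnegative matrix $V$ with $A = V V^T$ certifies that $A$ is completely positive. A small Python sketch (the helper name and the example matrices are chosen here purely for illustration):

```python
def is_cp_certificate(A, V, tol=1e-12):
    """Check that an entrywise-nonnegative V satisfies A = V V^T,
    which certifies that the symmetric matrix A is completely positive."""
    n, k = len(V), len(V[0])
    # V must be entrywise nonnegative.
    if any(v < 0 for row in V for v in row):
        return False
    # Compare each entry of A with (V V^T)_{ij}.
    for i in range(n):
        for j in range(n):
            vvt = sum(V[i][m] * V[j][m] for m in range(k))
            if abs(A[i][j] - vvt) > tol:
                return False
    return True


# V = [[1, 0], [1, 1]] is nonnegative, so A = V V^T = [[1, 1], [1, 2]] is CP.
V = [[1.0, 0.0], [1.0, 1.0]]
A = [[1.0, 1.0], [1.0, 2.0]]
print(is_cp_certificate(A, V))  # True
```

Verifying a given factorization is easy; the hard part, which the talk addresses via semidefinite relaxations, is finding such a factorization (or a nearby CP matrix) in the first place.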
THORChain Dev Docs Streaming Swaps is a means for a swapper to get better price execution if they are patient. This ensures Capital Efficiency while still keeping with the philosophy "impatient people pay more". There are two important parts to streaming swaps: 1. The interval part of the stream allows arbs enough time to rebalance intra-swap - this means the capital demands of swaps are met throughout, instead of after. 2. The quantity part of the stream allows the swapper to reduce the size of their sub-swap so each is executed with less slip (so the total swap will be executed with less slip) without losing capital to on-chain L1 fees. If a swapper is willing to be patient, they can execute the swap with a better price, by allowing arbs to rebalance the pool between the streaming swaps. Once all swaps are executed and the streaming swap is completed, the target token is sent to the user (minus outbound fees). Streaming Swaps is similar to a Time Weighted Average Price (TWAP) trade however it is restricted to 24 hours (Mimir STREAMINGSWAPMAXLENGTH = 14400 blocks). Streaming Swaps is used by default within Savers and Lending with a 5bps fee. To utilise a streaming swap, use the following within a Memo: Trade Target or Limit / Swap Interval / Swap Quantity. • Limit or Trade Target: Uses the trade limit to set the maximum asset ratio at which a mini-swap can occur; otherwise, a refund is issued. • Interval: Block separation of each swap. For example, a value of 10 means a mini-swap is performed every 10 blocks. • Quantity: The number of swaps to be conducted. If set to 0, the network will determine the appropriate quantity. Using the values Limit/10/5 would conduct five mini-swaps with a block separation of 10. Only swaps that achieve the specified asset ratio (defined by Limit) will be performed, while others will result in a refund. On each swap attempt, the network will track how much (in funds) failed to swap and how much was successful. 
After all swap attempts are made (specified by the swap quantity), the network will send out all successfully swapped value and refund the remaining source asset (whatever failed to swap, most likely due to the trade target). If the first swap attempt fails for some reason, the entire streaming swap is refunded and no further attempts are made. If the swap quantity is set to zero, the network will determine the number of swaps on its own, focusing on the lowest fees while maximizing the number of trades. A minimum swap size is placed on the network for streaming swaps (Mimir StreamingSwapMinBPFee = 5 basis points). This is the minimum slip allowed for each individual swap within a streaming swap, which also puts a cap on the number of swaps in a streaming swap. This allows the network to be more friendly to large trades while keeping revenues up for small or medium-sized trades. The network works out the optimal streaming swap solution based on the Minimum Swap Size and the swapAmount. Single Swap: To calculate the minimum swap size for a single swap, you take 5 basis points (bps) of the RUNE depth of the pool: MinimumSwapSize = (StreamingSwapMinBPFee / 10,000) * runeDepth. Example using BTC Pool: • BTC Rune Depth = 20,007,476 RUNE • StreamingSwapMinBPFee = 5 bp MinimumSwapSize = 0.0005 * 20,007,476 = 10,003 RUNE. Double Swap: When dealing with two pools of arbitrary depths and aiming for a precise 5 bps swap fee (set by StreamingSwapMinBPFee), you need to create a virtual pool size called runeDepth: runeDepth = (2 * r1 * r2) / (r1 + r2), where r1 represents the rune depth of pool1 and r2 represents the rune depth of pool2. The runeDepth is then used with 1.25 bps (half of 2.5 bps, since there are two swaps), which gives you the minimum swap size that results in a 5 bps swap fee. The larger the difference between the pools, the more the virtual pool skews towards the smaller pool.
This results in fewer rewards given to the larger pool and more rewards given to the smaller pool. Example using BTC and ETH Pool: • BTC Rune Depth = 20,007,476 RUNE • ETH Rune Depth = 8,870,648 RUNE • StreamingSwapMinBPFee = 5 bp virtualRuneDepth = (2 * 20,007,476 * 8,870,648) / (20,007,476 + 8,870,648) = 12,291,607 RUNE MinimumSwapSize = (0.0005 / 4) * 12,291,607 = 1,536.45 RUNE. The number of swaps required is determined by dividing the swapAmount (the total amount to be swapped) by the minimum swap size calculated in the previous step, rounding up. Example: swap 20,000 RUNE worth of BTC to ETH (approx 0.653 BTC): 20,000 / 3,072.90 ≈ 6.5, rounded up to 7 swaps. The difference between streaming swaps and non-streaming swaps can be calculated from the swap count with the following formula: difference = (swapCount - 1) / swapCount. The difference value represents the percentage of the swap fee saved compared to doing the same swap with a regular fee structure. The higher the swapCount, the bigger the difference. • (7 - 1) / 7 = 6/7 ≈ 85% better price execution by being patient.
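The worked examples above can be reproduced with a short script. The helper names below are illustrative, not THORChain code, and the swap-count step follows the doc's own example in dividing by 3,072.90 RUNE (twice the 1,536.45 RUNE per-pool minimum, one share per swap leg):

```python
import math

MIN_BP_FEE = 5  # Mimir StreamingSwapMinBPFee, in basis points


def min_swap_size_single(rune_depth):
    # Single swap: 5 bps of the pool's RUNE depth.
    return (MIN_BP_FEE / 10_000) * rune_depth


def virtual_rune_depth(r1, r2):
    # Double swap: harmonic-mean style virtual pool of the two depths.
    return (2 * r1 * r2) / (r1 + r2)


btc, eth = 20_007_476, 8_870_648

print(min_swap_size_single(btc))          # 10003.738 (doc rounds to 10,003)

vd = virtual_rune_depth(btc, eth)
print(round(vd))                          # ~12,291,607 RUNE

per_pool_min = (MIN_BP_FEE / 10_000 / 4) * vd   # ~1,536.45 RUNE
swaps = math.ceil(20_000 / (2 * per_pool_min))  # doc divides by 3,072.90
print(swaps)                                    # 7
print((swaps - 1) / swaps)                      # ~0.857 better execution
```

Running the numbers this way makes the trade-off concrete: the deeper the (virtual) pool, the larger each sub-swap can be, so fewer sub-swaps are needed for the same total amount.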
The Learning Portal: Learning Math: Math Anxiety
Math anxiety is a negative feeling or fear associated with math, and can develop in students who have had negative experiences with learning math in the past. Math anxiety may cause students to avoid math, believe that they cannot learn math, or freeze during assessments. There are real physical symptoms associated with math anxiety, such as increased heart rate, nausea, sweating, and shortness of breath. With support and a positive learning environment, it is possible for students to overcome math anxiety.
Use the NumericSort command in a Logic Template to sort results in either ascending or descending order, using a field containing numeric values. The values are assumed to be signed integers or fixed point data with an implied decimal point. Integer values stored within a numeric field must be greater than -4294967296 and less than 4294967296. Values greater than or equal to 4294967296 will be treated as negative. Fixed point data with 2 implied decimal places must be greater than -42949672.96 and less than 42949672.96. The limits for fixed point data with other than 2 implied decimal places are adjusted in the same way by moving the decimal point. To execute a Numeric Sort request, use the NumericSort command with three arguments: NumericSort( rev, fld, pas );
NumericSort Arguments
│Argument│Description                                                       │
│rev     │Indicates whether results will be sorted in ascending or descending order.│
│fld     │The numeric data field on which to sort.                          │
│pas     │The pass during which the sort is to be executed.                 │
Access the NumericSort Dialog Box by selecting the Numeric menu item under the Sorting category of the Logic Template editor command menu.
│Command │Description │
│NumericSort( ASCENDING, Price, 1); │Sort results in ascending order by data field "Price" during pass 1. │
│NumericSort( DESCENDING, Price, 4);│Sort results in descending order by data field "Price" during pass 4. │
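The ordering semantics can be illustrated in Python. This mirrors the described behavior, not the SLICCWARE implementation; ASCENDING/DESCENDING are stand-in flags, and note that fixed-point fields with implied decimal places compare correctly by their raw integer value, so no scaling is needed before sorting:

```python
ASCENDING, DESCENDING = False, True


def numeric_sort(records, field, reverse=ASCENDING):
    """Sort records on a signed-integer field. The field may be
    fixed-point data with implied decimal places; the implied scale
    does not affect the ordering of the underlying integers."""
    return sorted(records, key=lambda r: r[field], reverse=reverse)


# "Price" stored with 2 implied decimal places: 1999 means 19.99.
rows = [{"Price": 1999}, {"Price": -500}, {"Price": 10250}]
print(numeric_sort(rows, "Price"))               # ascending by Price
print(numeric_sort(rows, "Price", DESCENDING))   # descending by Price
```

This is why the documented range limits are stated on the raw stored values: the sort never converts the field to a decimal number, it only compares the signed integers.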
Eigen solutions of the Schrödinger equation and the thermodynamic stability of the black hole temperature Onate, C.A, Okoro, J.O, Adebimpe, O. and Lukman, A. F. (2018) Eigen solutions of the Schrödinger equation and the thermodynamic stability of the black hole temperature. Results in Physics, 10. pp. 406-410. ONATE 23.pdf - Published Version (645kB) The approximate analytical solutions of the Schrödinger equation for the Eckart potential are obtained via the supersymmetry shape invariance approach. The energy equation and the corresponding wave function are obtained in a closed and compact form. The wave function was used to calculate the Rényi entropy, and the results of the Rényi entropy were used to study the mass-energy parameter, temperature and heat capacity of the black hole. From the results obtained, the temperature of the black hole becomes stable as the two Eckart potential parameters increase.
Where is the term dg-geometry actually used?
I did mention those papers in the first versions of derived algebraic geometry. It may be more fruitful to have a general picture of any kind of derived geometry under derived geometry than to discuss models of derived schemes in the sense of algebraic geometry at both places in detail.
Okay, thanks, no rush. I have no time for it either. But if you have a rough idea, maybe you could just write a single sentence at "dg-geometry" that mentions Kapranov. I feel he deserves to be mentioned as one of the first (if I understand correctly) who thought in this direction at all, even if maybe his definitions don't quite survive.
Well, most people nowadays take Spec of a dg-category viewed as an enhanced derived category of qcoh sheaves; the point of view of Kapranov et al. is, I think, to extend the flexibility of the notion of a structure sheaf of rings. Of course, there is some embedding of the picture by taking a generator, which is often a dg-algebra. The question of reconstruction from the enhanced derived category is subtle. I will put dg-schemes of that kind in the nLab at least once I am back to normal work (I just came back from Hungary; Wednesday is the deadline for material spending for this year. I have a visitor from Thursday to Sunday. Small trip out of Zagreb Sun/Monday etc…) if not before. I am not against putting Kapranov's reference (on the contrary). I am just against a light conclusion that it is the same level of generality, and am even cautious about a full and faithful embedding.
Do you want to put Kapranov's definition of dg-scheme onto the nLab? Then we can see. All these notions will try to formalize the notion "locally equivalent to Spec of a dg-algebra". Urs, the mathematics terminology is not made optimal according to your nice idealistic wishes.
Kapranov-Ciocan-Fontanine dg-schemes are an intermediate notion from the beginning of the subject of derived geometry, a concept which has nice examples and is consistent, but rather special, and it was abandoned soon after publication. It does not cover the concept of scheme in modern geometry based on dg-categories; it is an alternative representation for a class of very special cases. I am not even sure if they form a full subcategory.
There is a notion of scheme in any "HAG-context" in the language of Toen-Vezzosi, and with respect to every "geometry" in Lurie's terminology. Kapranov's dg-schemes should be schemes in dg-geometry, in this sense.
I think that dg-schemes of Kapranov are pretty much different from the geometry based on dg-categories. Dg-schemes of Kapranov are dg-ringed topological spaces: the structure sheaf is a dg-algebra, and the underlying topological space is a usual one. It is a much more special and concrete framework than what Toen et al. or Kontsevich et al. do with dg-categories and A-infinity categories.
I split off an entry dg-geometry from the entry on Hochschild geometry, since it really deserves a stand-alone discussion. Eventually somebody should add the references by Kapranov et al. on dg-schemes etc. And much more.
I made up that particular term. I needed a more specific term than "derived algebraic geometry". In Toen-Vezzosi they have the concept of an "HAG-context". One of those that they discuss is $cdgAlg_k ^- \hookrightarrow cdgAlg_k$. I needed a name for that. A little later, when they work entirely over unbounded dg-algebras, they call this the "complicial HAG context". But I don't feel we can use that term here, as it collides with other things. I think some term is necessary: there are other contexts. For instance, over a field not of characteristic 0, there is no model in terms of cdg-algebras at all. (Okay, maybe we should rename "dg-geometry" to "cdg-geometry".) I like complicial though.
dg-geometry may be OK, but we should think the terminology through more. It is nice seeing how you are getting more familiar with algebraic geometry terminology and literature in general. Soon you will teach algebraic geometers in that subject :)
I am currently expanding the entry dg-geometry by listing more of the results by Toën-Vezzosi. It turns out that several of the statements that I was working on and discussing here recently are all proven there (unsurprisingly, I have to admit…)
I made a few changes to dg-scheme.
Draw A Contour Map Of The Function
A contour map of a function f(x, y) is produced by drawing several level curves {f(x, y) = c}: you choose equally spaced constants c, plot each curve on which the function takes that value, and label the curves with their c-values. This turns the plane into a topographical map of the graph, and sketching the surface in three-space is a useful check on the contour map. For a contour map, one considers z as a constant k, so each level curve is the solution set of k = f(x, y); for f(x, y) = y² − x², for example, the level curves are the curves k = y² − x².
Worked examples collected here include:
• f(x, y) = sin(xy), whose graph illustrates how level curves arise.
• f(x, y) = 1/(x² + y²), with the level curves labeled at c = 0, 1, 1/4, 4.
• f(x, y) = e^(x+y).
• A topographic-style contour map of a production function c = f(r, T), where c is corn production, showing the effect of weather on US corn production.
• For f(x, y) = xy, the gradient vector field is ∇f(x, y) = ⟨y, x⟩; in a plotted field, the redder a vector is drawn, the greater its length.
Contour plots can also be generated with tools such as the free Wolfram|Alpha contour plot widget, which plots contours of a two-parameter function f(x, y), or with Mathematica's ContourPlot[f == g, {x, xmin, xmax}, {y, ymin, ymax}].
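The procedure can be checked numerically for f(x, y) = 1/(x² + y²): each level curve with c > 0 is the circle x² + y² = 1/c of radius 1/√c. A short sketch, using the c-values mentioned above (`level_curve_radius` is a hypothetical helper name):

```python
import math


def level_curve_radius(c):
    """Radius of the level curve f(x, y) = 1/(x^2 + y^2) = c, for c > 0."""
    return 1 / math.sqrt(c)


for c in (1, 1/4, 4):
    r = level_curve_radius(c)
    # Any point on that circle really does sit on the level curve.
    x, y = r * math.cos(0.7), r * math.sin(0.7)
    assert abs(1 / (x**2 + y**2) - c) < 1e-9
    print(c, r)  # c=1 -> r=1, c=1/4 -> r=2, c=4 -> r=0.5
```

Note the inverse relationship: larger c-values give smaller circles, so the contour lines crowd together near the origin where the surface rises steeply.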
{"url":"https://classifieds.independent.com/print/draw-a-contour-map-of-the-function.html","timestamp":"2024-11-07T22:25:28Z","content_type":"application/xhtml+xml","content_length":"23017","record_id":"<urn:uuid:12206215-9be3-4483-89cf-da36ecb58e4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00058.warc.gz"}
Sunday, January 22, 2012 @ 01:01 PM
Credit: Roger Price

The Hebrew Bible is filled with numbers. There are different kinds of numbers — cardinals and ordinals, integers and fractions, even primes. And they are everywhere in the Torah text. There are numbers for days. (See, e.g., Gen. 1:5, 8, 13.) There are numbers for life spans. (See, e.g., Gen. 5:5, 8, 11.) There are numbers for populations, i.e., census numbers. (See, e.g., Ex. 1:5, 12:37; Num. 1:46, 2:32.) There are numbers for the measurement of quantities. (See, e.g., Ex. 16:22, 36, 29:40.) And for sizes. (See, e.g., Gen. 6:15.) There are numbers for the duration of events. (See, e.g., Ex. 12:40, 24:18.) There are numbers for a host of seemingly mundane things, such as the number of visitors and the number of palm trees. (See, e.g., Gen. 18:2; Ex. 15:27.)
Understanding Mathematical Functions: How To Evaluate The Function

Introduction to Mathematical Functions

In the realm of mathematics, functions play a vital role in modeling relationships between different variables. Understanding how to evaluate these functions is essential for solving mathematical problems and addressing real-world scenarios. In this chapter, we will delve into the definition of mathematical functions, the different types of functions, and their significance in various fields.

A Definition of a function and its importance in mathematics

Mathematical functions can be defined as a relation between a set of inputs and a set of possible outputs, with the property that each input is related to exactly one output. This concept serves as a fundamental building block in mathematics, providing a systematic way to relate different quantities. Functions are crucial in various mathematical operations, such as calculus and algebra.

Overview of different types of functions

There are several types of mathematical functions, each with its unique characteristics and properties. These include linear functions, which have a constant rate of change; quadratic functions, which contain squared terms; and polynomial functions, with multiple terms involving variables raised to non-negative integer powers. Additionally, there are exponential functions, logarithmic functions, and many more, each serving different purposes and applications.

The relevance of functions in real-world applications and various fields

Functions have a widespread impact on real-world applications, from engineering and physics to economics and biology. For example, in physics, the motion of an object can be described using functions, while in finance, functions are utilized to model growth and decay in investments. Furthermore, functions are instrumental in computer science for tasks such as data analysis, algorithms, and computational modeling.
Key Takeaways

• Understand the function's input and output
• Identify the function's formula or rule
• Substitute the input into the formula
• Perform the necessary operations to evaluate the function
• Check your answer for accuracy

Understanding Mathematical Functions: How to evaluate the function

Mathematical functions are a fundamental concept in mathematics and are used to describe the relationship between input and output values. Evaluating a function involves understanding the notation, the domain and range, and the importance of substituting the correct value for the variable.

Basics of Function Evaluation

When evaluating a mathematical function, it is essential to understand the notation f(x) and how it relates to inputs and outputs. The function notation f(x) represents the output value of the function when the input is x. In other words, f(x) is the dependent variable, and x is the independent variable.

The concept of the domain and range of a function

The domain of a function refers to the set of all possible input values for the function. It is crucial to identify the domain of a function to ensure that the function is defined for all relevant input values. On the other hand, the range of a function represents the set of all possible output values that the function can produce. Understanding the domain and range of a function is essential for evaluating the function accurately.

Importance of substituting the correct value for the variable

Substituting the correct value for the variable in a function is crucial for obtaining the accurate output value. It is essential to pay attention to the domain of the function and ensure that the input value falls within the specified domain. Substituting an incorrect value for the variable can lead to inaccurate results and misinterpretation of the function's behavior.

Steps for Evaluating Functions

Understanding how to evaluate mathematical functions is an essential skill in mathematics.
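Before walking through the steps, the core idea above — a rule that maps each input to exactly one output, evaluated by substitution — can be sketched in Python (the rule used here is an arbitrary example, not one from the text):

```python
# f(x) = 2x + 3: a function rule. Substituting an input for x
# yields the single output the rule maps it to.
def f(x):
    return 2 * x + 3

# Substituting x = 4 into the rule gives f(4).
output = f(4)
print(output)  # 11
```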
By following a few key steps, you can easily determine the output of a function for a given input. Here are the steps for evaluating functions:

Identifying the function rule or expression

Before you can evaluate a function, you need to know the function rule or expression. This is the mathematical formula that defines the relationship between the input and the output. The function rule is typically given as an equation or an algebraic expression.

Substituting values into the function properly

Once you have the function rule, the next step is to substitute the given input values into the function. This involves replacing the variable in the function rule with the specific input value. It is important to do this substitution correctly to ensure an accurate evaluation of the function.

Simplifying expressions to find the output

After substituting the input values into the function, the final step is to simplify the resulting expression to find the output. This may involve performing arithmetic operations, combining like terms, and simplifying the expression as much as possible to obtain the final output of the function.

By following these steps, you can effectively evaluate mathematical functions and determine the corresponding output for a given input. Understanding how to evaluate functions is fundamental in various mathematical concepts and applications.

Practical Examples of Function Evaluation

Understanding how to evaluate mathematical functions is an essential skill in various fields such as engineering, finance, and science. Let's explore some practical examples of function evaluation to gain a better understanding of how it works.

A. Evaluating linear functions with given inputs

Linear functions are some of the simplest mathematical functions, and evaluating them with given inputs is relatively straightforward. The general form of a linear function is y = mx + b, where m is the slope and b is the y-intercept.
For example, let's consider the linear function y = 2x + 3. If we are asked to evaluate the function at x = 5, we simply substitute the value of x into the function to get y = 2(5) + 3 = 13. Therefore, when x = 5, y = 13.

B. Calculating output for quadratic functions using factoring or the quadratic formula

Quadratic functions are more complex than linear functions, but they can still be evaluated using different methods such as factoring or the quadratic formula. The general form of a quadratic function is y = ax^2 + bx + c, where a, b, and c are constants.

For example, let's consider the quadratic function y = x^2 - 4x + 4. To evaluate this function, we can use factoring to simplify it into the form y = (x - 2)^2. This form makes it clear that the function has a minimum value of y = 0 at x = 2. If factoring is not possible, we can use the quadratic formula x = (-b ± √(b^2 - 4ac)) / (2a) to calculate the roots of the function, which in turn helps us evaluate the function for specific values of x.

C. Real-life scenarios such as calculating interest with financial functions

Mathematical functions are not just theoretical concepts; they have practical applications in real-life scenarios. Financial functions, for example, are used to calculate interest, investments, and loan payments.

Consider the compound interest formula A = P(1 + r/n)^(nt), where A is the amount of money accumulated after t years, including interest, P is the principal amount, r is the annual interest rate, n is the number of times that interest is compounded per year, and t is the time the money is invested for.

If we have a principal amount of $1000 invested at an annual interest rate of 5% compounded quarterly, we can use the compound interest formula to evaluate the amount of money accumulated after 5 years. By substituting the given values into the formula, we can calculate the final amount and understand the impact of compounding on the investment.
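The three worked examples above (linear evaluation, quadratic roots, compound interest) can be checked in a few lines of Python:

```python
import math

# Linear function y = 2x + 3, evaluated at x = 5 (from the example above).
def linear(x):
    return 2 * x + 3

# Quadratic y = x^2 - 4x + 4: the quadratic formula gives its roots.
def quadratic_roots(a, b, c):
    disc = b * b - 4 * a * c                # discriminant b^2 - 4ac
    r1 = (-b + math.sqrt(disc)) / (2 * a)
    r2 = (-b - math.sqrt(disc)) / (2 * a)
    return r1, r2

# Compound interest A = P(1 + r/n)^(nt):
# $1000 at 5% annual interest, compounded quarterly, for 5 years.
def compound(P, r, n, t):
    return P * (1 + r / n) ** (n * t)

print(linear(5))                             # 13
print(quadratic_roots(1, -4, 4))             # (2.0, 2.0) -- double root at x = 2
print(round(compound(1000, 0.05, 4, 5), 2))  # 1282.04
```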
These practical examples demonstrate the importance of understanding how to evaluate mathematical functions in various contexts, from simple linear functions to complex financial calculations.

Advanced Techniques in Evaluating Functions

When it comes to evaluating mathematical functions, there are several advanced techniques that come into play. These techniques are essential for dealing with functions that involve exponentials or logarithms, evaluating trigonometric functions, and understanding piecewise functions with different rules for different intervals.

A. Dealing with functions that involve exponentials or logarithms

Functions involving exponentials or logarithms can be quite complex to evaluate. One of the key techniques for dealing with these functions is to understand the properties of logarithms and exponentials. For example, the logarithm of a product is the sum of the logarithms of the individual numbers, and the logarithm of a quotient is the difference of the logarithms. Similarly, the exponential function has properties such as the product rule and the quotient rule, which can be used to simplify complex expressions.

Example: Evaluating the function f(x) = 3e^x - 2ln(x)

• Apply the properties of exponentials and logarithms to simplify the function.
• Use the rules of exponents and logarithms to evaluate the function at specific values of x.

B. Evaluating trigonometric functions and their applications in physics and engineering

Trigonometric functions such as sine, cosine, and tangent are widely used in physics and engineering. Understanding how to evaluate these functions is crucial for solving problems in these fields. One technique for evaluating trigonometric functions is to use the unit circle and the properties of trigonometric ratios. Additionally, trigonometric identities can be used to simplify complex expressions involving trigonometric functions.
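As a quick numerical check of the example function, f(x) = 3e^x − 2 ln(x) can be evaluated directly (note that its domain is x > 0, since ln(x) is undefined otherwise):

```python
import math

# The function from the example: f(x) = 3e^x - 2 ln(x).
# ln(x) is only defined for x > 0, so the domain is x > 0.
def f(x):
    return 3 * math.exp(x) - 2 * math.log(x)

# At x = 1, ln(1) = 0 and e^1 = e, so f(1) = 3e exactly.
print(f(1))  # ~8.1548 (= 3e)
```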
Example: Evaluating the function g(x) = sin(2x) + cos(x)

• Use the unit circle to determine the values of sine and cosine for specific angles.
• Apply trigonometric identities to simplify the function and evaluate it at specific values of x.

C. Strategies for evaluating piecewise functions with different rules for different intervals

Piecewise functions have different rules for different intervals, making them challenging to evaluate. One strategy for dealing with piecewise functions is to break down the function into its individual pieces and evaluate each piece separately. It's important to pay attention to the domain of each piece and ensure that the function is continuous at the points where the pieces meet.

Example: Evaluating the piecewise function h(x) = { x^2, if x < 0; 2x, if x ≥ 0 }

• Evaluate the function separately for x < 0 and x ≥ 0, ensuring that the function is continuous at x = 0.
• Understand the behavior of the function in each interval and how the different rules apply.

Troubleshooting Common Issues in Function Evaluation

When evaluating mathematical functions, it is common to encounter various issues that can make the process challenging. Understanding how to troubleshoot these common issues is essential for accurately evaluating functions. Here are some common problems that may arise and how to address them:

A. Addressing mistakes in algebraic simplification

One of the most common issues when evaluating mathematical functions is making mistakes in algebraic simplification. This can lead to incorrect results and confusion. To address this issue, it is important to carefully review each step of the simplification process and double-check the calculations. Look for potential errors such as incorrect distribution of terms, errors in factoring, or mistakes in combining like terms. Additionally, using software or calculators to verify the simplification can help catch any mistakes.

B. What to do when the function is undefined for a particular input (outside of the domain)

Another common issue is encountering inputs for which the function is undefined, typically outside of the function's domain. When this happens, it is important to recognize that the function does not have a valid output for that particular input. To address this, it is crucial to identify the domain of the function and determine the range of valid inputs. If an input falls outside of this domain, it is necessary to acknowledge that the function is undefined for that specific input and cannot be evaluated.

C. Handling complex functions with nested operations or multiple terms

Complex functions with nested operations or multiple terms can present challenges when evaluating. To address this issue, it is helpful to break down the function into smaller, more manageable parts. This can involve simplifying nested operations step by step, identifying common factors, and grouping like terms. Additionally, using rules of algebra such as the distributive property, combining like terms, and factoring can help simplify complex functions and make them easier to evaluate.

Conclusion & Best Practices in Evaluating Functions

After understanding the essential steps in evaluating mathematical functions and learning about best practices, it is important to recap the key points and emphasize the value of consistent practice and advanced study for mastering function evaluation.

A Recap of the essential steps in function evaluation

• Identify the function: Understand the given function and its components, including variables, constants, and operations.
• Substitute the input: Replace the variable in the function with the given input value to evaluate the function at that specific point.
• Simplify the expression: Use mathematical operations to simplify the function and obtain the final output or value.
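The piecewise example h(x) and the domain-guard idea from the troubleshooting discussion can both be sketched in Python (safe_sqrt is an illustrative helper, not from the text):

```python
import math

# The piecewise function from the example:
# h(x) = x^2 for x < 0, and 2x for x >= 0.
def h(x):
    if x < 0:
        return x ** 2
    return 2 * x

# A domain guard for a function that is undefined for some inputs,
# e.g. sqrt(x) is only defined for x >= 0.
def safe_sqrt(x):
    if x < 0:
        raise ValueError(f"{x} is outside the domain of sqrt")
    return math.sqrt(x)

print(h(-2))  # 4  (uses the x < 0 rule)
print(h(3))   # 6  (uses the x >= 0 rule)
print(h(0))   # 0  (both rules agree at x = 0, so h is continuous there)
```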
Best practices such as double-checking work and understanding the function's behavior

When evaluating functions, it is important to double-check the work to ensure accuracy. Mistakes in substitution or simplification can lead to incorrect results. Additionally, understanding the behavior of the function can provide insights into its properties and help in evaluating it more effectively.

The value of consistent practice and advanced study for mastering function evaluation

Consistent practice is essential for mastering function evaluation. By regularly practicing evaluating different types of functions, one can improve their skills and gain confidence in handling complex mathematical expressions. Furthermore, advanced study of mathematical functions, including exploring various types of functions and their properties, can deepen one's understanding and proficiency in function evaluation.
Formula to Total rows

I have rows with names of people. Then I have columns that show project / activity names. I am trying to create a summary field that will sum all the time spent on two categories by all the people: either "Customer Meeting" or "Testing". The rows those categories are listed in might not always be the same. For example, one month they may be in rows 2 and 45, and another month in rows 245 and 250. The list of category names starts in row 2 and goes all the way through row 250.

I've tried several iterations but can't seem to quite get my syntax to work. This was the last one I was trying, but it comes back as unparseable:

=SUMIF(SSRS2:SSRS250, <>"Customer Meeting", [PM1]@row:[PM5]@row)

Below is a screen shot of the sheet.

Best Answer

• @fboivin - Hello. You could create a helper column and sum the rows, and then create a Sheet Summary field to break down each SSRS. The helper column, Total, would have a column formula:

=IF(SSRS@row = "Customer Meeting", SUM([PM1]@row:[PM5]@row), IF(SSRS@row = "Testing", SUM([PM1]@row:[PM5]@row), 0))

The two Sheet Summary fields would have:

Customer Meeting - =SUMIF(SSRS:SSRS, "Customer Meeting", Total:Total)
Testing - =SUMIF(SSRS:SSRS, "Testing", Total:Total)

Another option - hope it helps. Thanks -Peggy

• @Fboivin I think your formula should be =SUMIF(SSRS:SSRS, "Customer Meeting", [PM1]:[PM5]) - just change what's in the quotes per summary.

• That gives an Invalid Column Value error. I thought maybe it was because Row 1 has names in it instead of numbers, so I tried this instead so it would only add starting at row 2, where the numbers are:

=SUMIF(SSRS2:SSRS250, "Customer Meeting", [PM1]2:[PM5]250)

However, that gives an #Incorrect Argument Set error.

• EDIT: I am looking at this again.

• Hi Eric, I did try that and it gives this error: #Incorrect Argument Set. It doesn't seem to like that the sum range is across multiple columns.
If I change it to just =SUMIF(SSRS2:SSRS250, "Customer Meeting", [PM1]2:[PM1]250), that works fine; however, that would force me to create that same formula for every column and then add them together.

• Thank you Peggy, that works for what I need.

• @Peggy Parchert That's such a great simple solution with a helper total column. It is weird that SUMIF can't sum multiple columns with 1 criteria column.
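For comparison, the accepted helper-column approach maps directly onto plain Python: first sum PM1–PM5 per row into a Total, then filter-and-sum Total by category (the column names mirror the sheet; the row data below is made up):

```python
# Each dict is one row of the sheet: an SSRS category plus PM1-PM5 hours.
rows = [
    {"SSRS": "Customer Meeting", "PM1": 2, "PM2": 1, "PM3": 0, "PM4": 3, "PM5": 0},
    {"SSRS": "Testing",          "PM1": 4, "PM2": 0, "PM3": 2, "PM4": 0, "PM5": 1},
    {"SSRS": "Development",      "PM1": 5, "PM2": 5, "PM3": 5, "PM4": 5, "PM5": 5},
    {"SSRS": "Customer Meeting", "PM1": 0, "PM2": 2, "PM3": 2, "PM4": 0, "PM5": 0},
]

PM_COLS = ["PM1", "PM2", "PM3", "PM4", "PM5"]

# Helper "Total" column: sum PM1-PM5 across each row.
for row in rows:
    row["Total"] = sum(row[c] for c in PM_COLS)

# SUMIF equivalent: sum Total where SSRS matches the criterion.
def sumif(rows, category):
    return sum(r["Total"] for r in rows if r["SSRS"] == category)

print(sumif(rows, "Customer Meeting"))  # 10  (6 + 4)
print(sumif(rows, "Testing"))           # 7
```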
Does somebody know if there is an implementation of crcmod available for MicroPython? I need to calculate CRC for 16-bit values.

I have a DALLAS 1-Wire implementation. I'm sure you can find lookup values to change it to the CRC8 you require.

def crc8Nib(data):
    CRC8_TABLE_SML1 = bytes([0x00, 0x5e, 0xbc, 0xe2, 0x61, 0x3f, 0xdd, 0x83,
                             0xc2, 0x9c, 0x7e, 0x20, 0xa3, 0xfd, 0x1f, 0x41])
    CRC8_TABLE_SML2 = bytes([0x00, 0x9d, 0x23, 0xbe, 0x46, 0xdb, 0x65, 0xf8,
                             0x8c, 0x11, 0xaf, 0x32, 0xca, 0x57, 0xe9, 0x74])
    crc = 0xFF  # 0xFF seeded
    for x in data:
        t = (x ^ crc) & 0xFF
        crc = CRC8_TABLE_SML1[t & 0x0F] ^ CRC8_TABLE_SML2[t >> 4]
    return crc

Thanks for your answer. Do you also have a CRC8 function which is able to do the following?

Name: CRC-8
Width: 8 bit
Polynomial: 0x31 (x^8 + x^5 + x^4 + 1)
Initialization: 0xFF
Final XOR: 0x00
Example: CRC(0xBEEF) = 0x92

This is my CRC16 function, based on a very common CRC16 method. Perhaps it will help you. You may need to change the seed value to 0x0000, depending on which CRC16 you are hoping to implement.

def crc16Nib(data):
    CRC16_TBL_SML = [0x0000, 0xCC01, 0xD801, 0x1400, 0xF001, 0x3C00, 0x2800, 0xE401,
                     0xA001, 0x6C00, 0x7800, 0xB401, 0x5000, 0x9C01, 0x8801, 0x4400]
    crc = 0xFFFF  # 0xFFFF seeded
    for x in data:
        # reflected algorithm, so the low nibble is processed first
        crc = CRC16_TBL_SML[(x ^ crc) & 0x0F] ^ (crc >> 4)
        crc = CRC16_TBL_SML[((x >> 4) ^ crc) & 0x0F] ^ (crc >> 4)
    return crc
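The requested CRC-8 variant (polynomial 0x31, init 0xFF, no reflection, final XOR 0x00) can be sketched with a plain bitwise MSB-first loop — this reproduces the CRC(0xBEEF) = 0x92 check value given in the thread:

```python
def crc8_0x31(data, crc=0xFF):
    """CRC-8, polynomial 0x31 (x^8 + x^5 + x^4 + 1), init 0xFF,
    no reflection, final XOR 0x00. Bitwise, MSB-first."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x31) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

print(hex(crc8_0x31(b"\xbe\xef")))  # 0x92 -- matches the CRC(0xBEEF) = 0x92 example
```

A bitwise loop is slower than the nibble-table approach above, but it needs no lookup tables, which is convenient on memory-constrained MicroPython boards.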
THE JOURNAL OF FINANCE • VOL. LXV, NO. 5 • OCTOBER 2010

Luck versus Skill in the Cross-Section of Mutual Fund Returns

EUGENE F. FAMA and KENNETH R. FRENCH∗

The aggregate portfolio of actively managed U.S. equity mutual funds is close to the market portfolio, but the high costs of active management show up intact as lower returns to investors. Bootstrap simulations suggest that few funds produce benchmark-adjusted expected returns sufficient to cover their costs. If we add back the costs in fund expense ratios, there is evidence of inferior and superior performance (nonzero true α) in the extreme tails of the cross-section of mutual fund α estimates.

THERE IS A CONSTRAINT on the returns to active investing that we call equilibrium accounting. In short (details later), suppose that when returns are measured before costs (fees and other expenses), passive investors get passive returns, that is, they have zero α (abnormal expected return) relative to passive benchmarks. This means active investment must also be a zero sum game—aggregate α is zero before costs. Thus, if some active investors have positive α before costs, it is dollar for dollar at the expense of other active investors. After costs, that is, in terms of net returns to investors, active investment must be a negative sum game. (Sharpe (1991) calls this the arithmetic of active management.)

We examine mutual fund performance from the perspective of equilibrium accounting. For example, at the aggregate level, if the value-weight (VW) portfolio of active funds has a positive α before costs, we can infer that the VW portfolio of active investments outside mutual funds has a negative α. In other words, active mutual funds win at the expense of active investments outside mutual funds. We find that, in fact, the VW portfolio of active funds that invest primarily in U.S. equities is close to the market portfolio, and estimated before expenses, its α relative to common benchmarks is close to zero.
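The equilibrium-accounting identity can be checked with a toy calculation (the weights and alphas below are invented for illustration only, not estimates from the paper):

```python
# Equilibrium accounting, before costs: the value-weight average alpha
# over ALL invested dollars is zero. If passive dollars earn alpha = 0,
# the value-weight alpha of active dollars must also be zero -- so any
# positive-alpha active group is offset by a negative-alpha group.

w_passive, a_passive = 0.60, 0.0  # passive share of the market, its alpha (%/yr)
w_funds = 0.25                    # active mutual funds' share
w_other = 0.15                    # other active investors' share
a_funds = 0.5                     # suppose funds earn +0.5%/yr gross alpha

# Market-wide VW alpha is zero, so solve for the other active group's alpha:
a_other = -(w_passive * a_passive + w_funds * a_funds) / w_other

print(a_other)  # other active investors lose what the funds gain: about -0.83%/yr

total = w_passive * a_passive + w_funds * a_funds + w_other * a_other
print(abs(round(total, 12)))  # 0.0
```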
Since the VW portfolio of active funds produces α close to zero in gross (pre-expense) returns, α estimated on the net (post-expense) returns realized by investors is negative by about the amount of fund expenses.

The aggregate results imply that if there are active mutual funds with positive true α, they are balanced by active funds with negative α. We test for the existence of such funds. The challenge is to distinguish skill from luck. Given the multitude of funds, many have extreme returns by chance. A common approach to this problem is to test for persistence in fund returns, that is, whether past winners continue to produce high returns and losers continue to underperform (see, e.g., Grinblatt and Titman (1992), Carhart (1997)). Persistence tests have an important weakness. Because they rank funds on short-term past performance, there may be little evidence of persistence because the allocation of funds to winner and loser portfolios is largely based on noise.

We take a different tack. We use long histories of individual fund returns and bootstrap simulations of return histories to infer the existence of superior and inferior funds. Specifically, we compare the actual cross-section of fund α estimates to the results from 10,000 bootstrap simulations of the cross-section. The returns of the funds in a simulation run have the properties of actual fund returns, except we set true α to zero in the return population from which simulation samples are drawn.

∗ Fama is at the Booth School of Business, University of Chicago, and French is at the Amos Tuck School of Business Administration, Dartmouth College. We are grateful for the comments of Juhani Linnainmaa, Sunil Wahal, Jerry Zimmerman, and seminar participants at the University of Chicago, the California Institute of Technology, UCLA, and the Meckling Symposium at the University of Rochester. Special thanks to John Cochrane and the journal Editor, Associate Editor, and referees.
The simulations thus describe the distribution of α estimates when there is no abnormal performance in fund returns. Comparing the distribution of α estimates from the simulations to the cross-section of α estimates for actual fund returns allows us to draw inferences about the existence of skilled managers. For fund investors the simulation results are disheartening. When α is estimated on net returns to investors, the cross-section of precision-adjusted α estimates, t(α), suggests that few active funds produce benchmark-adjusted expected returns that cover their costs. Thus, if many managers have sufficient skill to cover costs, they are hidden by the mass of managers with insufficient skill. On a practical level, our results on long-term performance say that true α in net returns to investors is negative for most if not all active funds, including funds with strongly positive α estimates for their entire histories. Mutual funds look better when returns are measured gross, that is, before the costs included in expense ratios. Comparing the cross-section of t(α) estimates from gross fund returns to the average cross-section from the simulations suggests that there are inferior managers whose actions reduce expected returns, and there are superior managers who enhance expected returns. If we assume that the cross-section of true α has a normal distribution with mean zero and standard deviation σ , then σ around 1.25% per year seems to capture the tails of the cross-section of α estimates for our full sample of actively managed funds. The estimate of the standard deviation of true α, 1.25% per year, does not imply much skill. It suggests, for example, that fewer than 16% of funds have α greater than 1.25% per year (about 0.10% per month), and only about 2.3% have α greater than 2.50% per year (about 0.21% per month)—before expenses. The simulation tests have power. 
If the cross-section of true α for gross fund returns is normal with mean zero, the simulations strongly suggest that the standard deviation of true α is between 0.75% and 1.75% per year. Thus, the simulations rule out values of σ rather close to our estimate, 1.25%. The power traces to the fact that a large cross-section of funds produces precise estimates of the percentiles of t(α) under different assumptions about σ , the standard deviation of true α. This precision allows us to put σ in a rather narrow range. Luck versus Skill in Mutual Fund Returns Readers suggest that our results are consistent with the predictions of Berk and Green (2004). We outline their model in Section II, after the tests on mutual fund aggregates (Section I) and before the bootstrap simulations (Sections III and IV). Our results reject most of their predictions about mutual fund returns. Given the prominence of their model, our contrary evidence seems an important contribution. The paper closest to ours is Kosowski et al. (2006). They run bootstrap simulations that appear to produce stronger evidence of manager skill. We contrast their tests and ours in Section V, after presenting our results. Section VI concludes. I. The Performance of Aggregate Portfolios of U.S. Equity Mutual Funds Our mutual fund sample is from the CRSP (Center for Research in Security Prices) database. We include only funds that invest primarily in U.S. common stocks, and we combine, with value weights, different classes of the same fund into a single fund (see French (2008)). To focus better on the performance of active managers, we exclude index funds from all our tests. The CRSP data start in 1962, but we concentrate on the period after 1983. During the period 1962 to 1983 about 15% of the funds on CRSP report only annual returns, and the average annual equal-weight (EW) return for these funds is 5.29% lower than for funds that report monthly returns. 
As a result, the EW average return on all funds is a nontrivial 0.65% per year lower than the EW return of funds that report monthly returns. Thus, during 1962 to 1983 there is selection bias in tests like ours that use only funds that report monthly returns. After 1983, almost all funds report monthly returns. (Elton, Gruber, and Blake (2001) discuss CRSP data problems for the period before 1984.)

A. The Regression Framework

Our main benchmark for evaluating fund performance is the three-factor model of Fama and French (1993), but we also show results for Carhart's (1997) four-factor model. To measure performance, these models use two variants of the time-series regression

$$R_{it} - R_{ft} = a_i + b_i (R_{Mt} - R_{ft}) + s_i\,SMB_t + h_i\,HML_t + m_i\,MOM_t + e_{it}. \quad (1)$$

In this regression, Rit is the return on fund i for month t, Rft is the risk-free rate (the 1-month U.S. Treasury bill rate), RMt is the market return (the return on a VW portfolio of NYSE, Amex, and NASDAQ stocks), SMBt and HMLt are the size and value-growth returns of Fama and French (1993), MOMt is our version of Carhart's (1997) momentum return, ai is the average return left unexplained by the benchmark model (the estimate of αi), and eit is the regression residual. The full version of (1) is Carhart's four-factor model, and the regression without MOMt is the Fama–French three-factor model. The construction of SMBt and HMLt follows Fama and French (1993). The momentum return, MOMt, is defined like HMLt, except that we sort on prior return rather than the book-to-market equity ratio. (See Table I below.)

Regression (1) allows a more precise statement of the constraints of equilibrium accounting. The VW aggregate of the U.S. equity portfolios of all investors is the market portfolio. It has a market slope equal to 1.0 in (1), zero slopes on the other explanatory returns, and a zero intercept—before investment costs.
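As a toy illustration of how regression (1) yields the intercept ai and its t-statistic, the sketch below estimates the four-factor model by OLS on synthetic data. All series and numbers here are made up for illustration; the paper's tests use the actual CRSP fund and factor returns.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 273  # months, January 1984 to September 2006

# Hypothetical explanatory returns standing in for RM-Rf, SMB, HML, MOM
# (percent per month); real tests would use the actual factor series.
factors = rng.normal(0.0, 3.0, size=(T, 4))

# Hypothetical fund excess returns R_it - R_ft with true alpha = 0.05%/month.
fund_excess = 0.05 + factors @ np.array([1.0, 0.2, 0.0, 0.0]) \
    + rng.normal(0.0, 1.0, size=T)

X = np.column_stack([np.ones(T), factors])   # intercept plus four slopes
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)

resid = fund_excess - X @ coef
s2 = resid @ resid / (T - X.shape[1])        # residual variance
se_alpha = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
alpha, t_alpha = coef[0], coef[0] / se_alpha
print(alpha, t_alpha)                        # estimate of a_i and t(alpha)
```

Dropping the MOM column from X gives the three-factor version; t(α) is the precision-adjusted statistic the simulations below are built on.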
This means that if the VW aggregate portfolio of passive investors also has a zero intercept before costs, the VW aggregate portfolio of active investors must have a zero intercept. Thus, positive and negative intercepts among active investors must balance out—before costs.

There is controversy about whether the average SMBt, HMLt, and MOMt returns are rewards for risk or the result of mispricing. For our purposes, there is no need to take a stance on this issue. We can simply interpret SMBt, HMLt, and MOMt as diversified passive benchmark returns that capture patterns in average returns during our sample period, whatever the source of the average returns. Abstracting from the variation in returns associated with RMt − Rft, SMBt, HMLt, and MOMt then allows us to focus better on the effects of active management (stock picking), which should show up in the three-factor and four-factor intercepts.

From an investment perspective, the slopes on the explanatory returns in (1) describe a diversified portfolio of passive benchmarks (including the risk-free security) that replicates the exposures of the fund on the left to common factors in returns. The regression intercept then measures the average return provided by a fund in excess of the return on a comparable passive portfolio. We interpret a positive expected intercept (true α) as good performance, and a negative expected intercept signals bad performance.¹

Table I shows summary statistics for the explanatory returns in (1) for January 1984 through September 2006 (henceforth 1984 to 2006), the period used in our tests. The momentum factor (MOMt) has the highest average return, 0.79% per month (t = 3.01), but the average values of the monthly market premium (RMt − Rft) and the value-growth return (HMLt) are also large, 0.64% (t = 2.42) and 0.40% (t = 2.10), respectively. The size return, SMBt, has the smallest average value, 0.03% per month (t = 0.13).
B. Regression Results for EW and VW Portfolios of Active Funds

Table II shows estimates of regression (1) for the monthly returns of 1984 to 2006 on EW and VW portfolios of the funds in our sample. In the VW portfolio, funds are weighted by assets under management (AUM) at the beginning of each month.

¹ Formal justification for this definition of good and bad performance is provided by Dybvig and Ross (1985). Given a risk-free security, their Theorem 5 implies that if the intercept in (1) is positive, there is a portfolio with positive weight on fund i and the portfolio of the explanatory portfolios on the right of (1) that has a higher Sharpe ratio than the portfolio of the explanatory portfolios. Similarly, if the intercept is negative, there is a portfolio with negative weight on fund i that has a higher Sharpe ratio than the portfolio of the explanatory portfolios.

Table I

RM is the return on a value-weight market portfolio of NYSE, Amex, and NASDAQ stocks, and Rf is the 1-month Treasury bill rate. The construction of SMBt and HMLt follows Fama and French (1993). At the end of June of each year k, we sort stocks into two size groups. Small includes NYSE, Amex, and NASDAQ stocks with June market capitalization below the NYSE median, and Big includes stocks with market cap above the NYSE median. We also sort stocks into three book-to-market equity (B/M) groups: Growth (NYSE, Amex, and NASDAQ stocks in the bottom 30% of NYSE B/M), Neutral (middle 40% of NYSE B/M), and Value (top 30% of NYSE B/M). Book equity is for the fiscal year ending in calendar year k−1, and the market cap in B/M is for the end of December of k−1. The intersection of the (independent) size and B/M sorts produces six value-weight portfolios, refreshed at the end of June each year. The size return, SMBt, is the simple average of the month t returns on the three Small stock portfolios minus the average of the returns on the three Big stock portfolios.
The value-growth return, HMLt, is the simple average of the returns on the two Value portfolios minus the average of the returns on the two Growth portfolios. The momentum return, MOMt, is defined like HMLt, except that we sort on prior return rather than B/M and the momentum sort is refreshed monthly rather than annually. At the end of each month t−1 we sort NYSE stocks on the average of the 11 months of returns to the end of month t−2. (Dropping the return for month t−1 is common in the momentum literature.) We use the 30th and 70th NYSE percentiles to assign NYSE, Amex, and NASDAQ stocks to Low, Medium, and High momentum groups. The intersection of the size sort for the most recent June and the independent momentum sort produces six value-weight portfolios, refreshed monthly. The momentum return, MOMt, is the simple average of the month t returns on the two High momentum portfolios minus the average of the returns on the two Low momentum portfolios. The table shows the average monthly return, the standard deviation of monthly returns, and the t-statistic for the average monthly return. The period is January 1984 through September 2006.

Summary Statistics for Monthly Explanatory Returns for the Three-Factor and Four-Factor Models

Table II
Intercepts and Slopes in Variants of Regression (1) for Equal-Weight (EW) and Value-Weight (VW) Portfolios of Actively Managed Mutual Funds

The table shows the annualized intercepts (12 ∗ a) and t-statistics for the intercepts (t(Coef)) for the CAPM, three-factor, and four-factor versions of regression (1) estimated on equal-weight (EW) and value-weight (VW) net and gross returns on the portfolios of actively managed mutual funds in our sample.
The table also shows the regression slopes (b, s, h, and m, for RM −Rf, SMB, HML, and MOM, respectively), t-statistics for the slopes, and the regression R², all of which are the same to two decimals for gross and net returns. For the market slope, t(Coef) tests whether b is different from 1.0. Net returns are those received by investors. Gross returns are net returns plus 1/12th of a fund's expense ratio for the year. When a fund's expense ratio for a year is missing, we assume it is the same as other actively managed funds with similar assets under management (AUM). The period is January 1984 through September 2006. On average there are 1,308 funds and their average AUM is $648.0 million.

[Table body omitted; panels report 12 ∗ a and t(Coef) for EW Returns and VW Returns.]

The EW portfolio weights funds equally each month. The intercepts in (1) for EW fund returns tell us whether funds on average produce returns different from those implied by their exposures to common factors in returns, whereas VW returns tell us about the fate of aggregate wealth invested in funds. Table II shows estimates of (1) for fund returns measured gross and net of fund expenses. Net returns are those received by investors. Monthly gross returns are net returns plus 1/12th of a fund's expense ratio for the year.

The market slopes in Table II are close to 1.0, which is not surprising since our sample is funds that invest primarily in U.S. stocks. The HMLt and MOMt slopes are close to zero. Thus, in aggregate, active funds show little exposure to the value-growth and momentum factors. The EW portfolio of funds produces a larger SMBt slope (0.18) than the VW portfolio (0.07).
We infer that smaller funds are more likely to invest in small stocks, but total dollars invested in active funds (captured by VW returns) show little tilt toward small stocks.

The intercepts in the estimates of (1) summarize the average performance of funds (EW returns) and the performance of aggregate wealth invested in funds (VW returns) relative to passive benchmarks. In terms of net returns to investors, performance is poor. The three-factor and four-factor (annualized) intercepts for EW and VW net returns are negative, ranging from −0.81% to −1.00% per year, with t-statistics from −2.05 to −3.02. These results are in line with previous work (e.g., Jensen (1968), Malkiel (1995), Gruber (1996)).

The intercepts in (1) for EW and VW net fund returns tell us whether on average active managers have sufficient skill to generate returns that cover the costs funds impose on investors. Gross returns come closer to testing whether managers have any skill. For EW gross fund returns, the three-factor and four-factor intercepts for 1984 to 2006 are positive, 0.36% and 0.39% per year, but they are only 0.85 and 0.90 standard errors from zero. The intercepts in (1) for VW gross returns are quite close to zero, 0.13% per year (t = 0.40) for the three-factor version of (1), and −0.05% per year (t = −0.15) for the four-factor version.

Table II also shows estimates of the CAPM version of (1), in which RMt − Rft is the only explanatory return. The annualized CAPM intercept for VW gross fund returns for 1984 to 2006, −0.18% per year (t = −0.49), is again close to zero and similar to the estimates for the three-factor and four-factor models. It is not surprising that the intercepts of the three models are so similar (−0.18%, 0.13%, and −0.05% per year) since VW fund returns produce slopes close to zero for the non-market explanatory returns in (1). We can offer an equilibrium accounting perspective on the results in Table II.
When we add back the costs in expense ratios, α estimates for VW gross fund returns are close to zero. Thus, before expenses, there is no evidence that total wealth invested in active funds gets any benefits or suffers any losses from active management. VW fund returns also show little exposure to the size, value, and momentum returns, and the market return alone explains 99% of the variance of the monthly VW fund return. Together these facts say that during 1984 to 2006, active mutual funds in aggregate hold a portfolio that, before expenses, mimics market portfolio returns. The return to investors, however, is reduced by the high expense ratios of active funds. These results echo equilibrium accounting, but for a subset of investment managers where the implications of equilibrium accounting for aggregate investor returns need not hold.

C. Measurement Issues in the Tests on Gross Returns

The benchmark explanatory returns in (1) are before all costs. This is appropriate in tests on net fund returns where the issue addressed is whether managers have sufficient skill to produce expected returns that cover their costs. Gross returns pose more difficult measurement issues.

The issue in the tests on gross fund returns is whether managers have skill that causes expected returns to differ from those of comparable passive benchmarks. For this purpose, one would like fund returns measured before all costs and non-return revenues. This would put funds on the same pure return basis as the benchmark explanatory returns, so the regressions could focus on manager skill. Our gross fund returns are before the costs in expense ratios (including management fees), but they are net of other costs, primarily trading costs, and they include the typically small revenues from securities lending.

We could attempt to add trading costs to our estimates of gross fund returns. Funds do not report trading costs, however, and estimates are subject to large errors.
For example, trading costs are likely to vary across funds because of differences in style tilts, trading skill, and the extent to which a fund demands immediacy in trade execution. Trading costs also vary through time. Our view is that estimates of trading costs for individual funds, especially actively managed funds, are fraught with error and potential bias, and are likely to be misleading. We prefer to stay with our simple definition of gross returns (net returns plus the costs in expense ratios), with periodic qualifications to our inferences.

An alternative approach (suggested by a referee) is to put the passive benchmarks produced by combining the explanatory returns in (1) in the same units as the gross fund returns on the left of (1). This involves taking account of the costs not covered in expense ratios that would be borne by an efficiently managed passive benchmark with the same style tilts as the fund whose gross returns are to be explained. Appendix A discusses this approach in detail. The bottom line is that for efficiently managed passive funds, the costs missed in expense ratios are close to zero. Thus, adjusting the benchmarks produced by (1) for estimates of these costs is unnecessary.

This does not mean our tests on gross fund returns capture the pure effects of skill. Though it appears that all substantial costs incurred by efficiently managed passive funds are in their expense ratios, this is less likely to be true for actively managed funds. The typical active fund trades more than the typical passive fund, and active funds are likely to demand immediacy in trading that pushes up costs. Our tests on gross returns thus produce α estimates that capture skill, less whatever net costs (costs minus non-return revenues) are missed by expense ratios. Equivalently, the tests say that a fund's management has skill only if it is sufficient to cover the missing costs (primarily trading costs).
This seems like a reasonable definition of skill since an efficiently managed passive fund can apparently avoid these costs. More important, this is the definition of skill we can accurately test, given the unavoidable absence of accurate trading cost estimates for active funds.

The fact that our gross fund returns are net of the costs missed in expense ratios, however, does affect the inferences about equilibrium accounting we can draw from the aggregate results in Table II. Since the α estimates for VW gross fund returns in Table II are close to zero, they suggest that in aggregate funds show sufficient skill to produce expected returns that cover some or all of the costs missed in expense ratios. If this is the correct inference (precision is an issue), equilibrium accounting then says that the costs recovered by funds are matched by equivalent losses on investments outside mutual funds.

II. Berk and Green (2004)

Readers contend that our results (Table II and below) are consistent with Berk and Green (2004). Their model is attractive theory, but our results reject most of its predictions about mutual fund returns. In their world, a fund is endowed with a permanent α, before costs, but it faces costs that are an increasing convex function of AUM. Investors use returns to update estimates of α. A fund with a positive expected α before costs attracts inflows until AUM reaches the point where expected α, net of costs, is zero. Outflows drive out funds with negative expected α. In equilibrium, all active funds (and thus funds in aggregate) have positive expected α before costs and zero expected α net of costs.

Our evidence that the aggregate portfolio of mutual funds has negative α net of costs contradicts the predictions of Berk and Green (2004). The results below on the net returns of individual funds also reject their prediction that all active managers have zero α net of costs.
In fact, our results say that for most if not all funds, true α in net returns is negative.

Finally, equilibrium accounting poses a theoretical problem for Berk and Green (2004). Their model focuses on rational investors who optimally choose among passive and active alternatives. In aggregate, their investors have positive α before costs and zero α after costs. Equilibrium accounting, however, says that in aggregate investors have zero α before costs and negative α after costs.

III. Bootstrap Simulations

Table II says that, on average, active mutual funds do not produce gross returns above (or below) those of passive benchmarks. This may just mean that managers with skill that allows them to outperform the benchmarks are balanced by inferior managers who underperform. We turn now to simulations that use individual fund returns to infer the existence of superior and inferior managers.

A. Setup

To lessen the effects of "incubation bias" (see below), we limit the tests to funds that reach $5 million (2006 dollars) in AUM. Since the AUM minimum is in 2006 dollars, we include a fund in 1984, for example, if it has more than about $2.5 million in AUM in 1984. Once a fund passes the AUM minimum, it is included in all subsequent tests, so this requirement does not create selection bias. We also show results for funds after they pass $250 million and $1 billion. Since we estimate benchmark regressions for each fund, we limit the tests to funds that have at least 8 months of returns after they pass an AUM bound, so there is a bit of survival bias. To avoid having lots of new funds with short return histories, we only use funds that appear on CRSP at least 5 years before the end of our sample period.

Fund management companies commonly provide seed money to new funds to develop a return history. Incubation bias arises because funds are typically opened to the public—and their pre-release returns are included in mutual fund databases—only if the returns turn out to be attractive.
The $5 million AUM bound for admission to the tests alleviates this bias since AUM is likely to be low during the pre-release period. Evans (2010) suggests that incubation bias can be minimized by using returns only after funds receive a ticker symbol from NASDAQ, which typically means they are available to the public. Systematic data on ticker symbol start dates are available only after 1998. We have replicated our tests for 1999 to 2006 using CRSP start dates for new funds (as in our reported results) and then using NASDAQ ticker dates (from Evans). Switching to ticker dates has almost no effect on aggregate fund returns (as in Table II), and has only trivial effects on the cross-section of t(α) estimates for funds (as in Table III below). We conclude that incubation bias is probably unimportant in our results for 1984 to 2006. Our goal is to draw inferences about the cross-section of true α for active funds, specifically, whether the cross-section of α estimates suggests a world where true α is zero for all funds or whether there is nonzero true α, especially in the tails of the cross-section of α estimates. We are interested in answering this question for 12 different cross-sections of α estimates—for gross and net returns, for the three-factor and four-factor benchmarks, and for the three AUM samples. Thus, we use regression (1) to estimate each fund’s three-factor or four-factor α for gross or net returns for the part of 1984 to 2006 after the fund passes each AUM bound. The tests for nonzero true α in actual fund returns use bootstrap simulations on returns that have the properties of fund returns, except that true α is set to zero for every fund. To set α to zero, we subtract a fund’s α estimate from its monthly returns. 
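A minimal sketch of this zero-α adjustment, together with the joint month-resampling runs the text describes next, might look like the following. The data, sizes, and run counts are synthetic and purely illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_funds, n_runs = 273, 50, 200   # illustrative sizes, not the paper's 10,000 runs

# Hypothetical factor and fund excess returns standing in for the CRSP data.
factors = rng.normal(0.0, 3.0, size=(T, 4))
funds = factors @ rng.normal(0.5, 0.3, size=(4, n_funds)) \
    + rng.normal(0.0, 1.5, size=(T, n_funds))
X = np.column_stack([np.ones(T), factors])


def t_alpha(X, Y):
    """OLS intercept t-statistic for each column (fund) of Y."""
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    s2 = (resid ** 2).sum(axis=0) / (X.shape[0] - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return coef[0] / se


# Step 1: set true alpha to zero by subtracting each fund's estimated alpha.
alphas, *_ = np.linalg.lstsq(X, funds, rcond=None)
adjusted = funds - alphas[0]              # benchmark-adjusted (zero-alpha) returns

# Step 2: each run draws the SAME random months (with replacement) for all
# funds and for the factors, preserving the cross-correlation of fund returns.
sim_t = np.empty((n_runs, n_funds))
for i in range(n_runs):
    m = rng.integers(0, T, size=T)
    Xs = np.column_stack([np.ones(T), factors[m]])
    sim_t[i] = t_alpha(Xs, adjusted[m])

# Compare a percentile of actual t(alpha) to its average across the runs.
actual_95 = np.percentile(t_alpha(X, funds), 95)
sim_95 = np.percentile(sim_t, 95, axis=1).mean()
print(actual_95, sim_95)
```

The key design point is that months are resampled jointly for funds and factors, so cross-correlation of fund returns and any correlated heteroskedasticity survive into the simulated t(α) cross-sections.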
For example, to compute three-factor benchmark-adjusted gross returns for a fund in the $5 million group, we subtract its three-factor α estimated from monthly gross returns for the part of 1984 to 2006 that the fund is in the $5 million group from the fund's monthly gross returns for that period. We calculate benchmark-adjusted returns for the three-factor and four-factor models, for gross and net returns, and for the three AUM bounds. The result is 12 populations of benchmark-adjusted (zero-α) returns. (CAPM simulation results are in Appendix B.)

A simulation run is a random sample (with replacement) of 273 months, drawn from the 273 calendar months of January 1984 to September 2006. For each of the 12 sets of benchmark-adjusted returns, we estimate, fund by fund, the relevant benchmark model on the simulation draw of months of adjusted returns, dropping funds that are in the simulation run for less than 8 months. Each run thus produces 12 cross-sections of α estimates using the same random sample of months from 12 populations of adjusted (zero-α) fund returns. We do 10,000 simulation runs to produce 12 distributions of t-statistics, t(α), for a world in which true α is zero. We focus on t(α), rather than estimates of α, to control for differences in precision due to differences in residual variance and in the number of months funds are in a simulation run.

Note that setting true α equal to zero builds different assumptions about skill into the tests on gross and net fund returns. For net returns, setting true α to zero leads to a world where every manager has sufficient skill to generate expected returns that cover all costs. In contrast, setting true α to zero in gross returns implies a world where every fund manager has just enough skill to produce expected returns that cover the costs missed in expense ratios.

Our simulation approach has an important advantage.
Because a simulation run is the same random sample of months for all funds, the simulations capture the cross-correlation of fund returns and its effects on the distribution of t(α) estimates. Since we jointly sample fund and explanatory returns, we also capture any correlated heteroskedasticity of the explanatory returns and disturbances of a benchmark model. We shall see that these details of our approach are important for inferences about true α in actual fund returns. Defining a simulation run as the same random sample of months for all funds also has a cost. If a fund is not in the tests for the entire 1984 to 2006 period, it is likely to show up in a simulation run for more or less than the number of months it is in our sample. This is not serious. We focus on t(α), and the distribution of t(α) estimates depends on the number of months funds are in a simulation run through a degrees of freedom effect. The distributions of t(α) estimates for funds that are oversampled in a simulation run have more degrees of freedom (and thinner extreme tails) than the distributions of t(α) for the actual returns of the funds. Within a simulation run, however, oversampling of some funds should roughly offset undersampling of others, so a simulation run should produce a representative sample of t(α) estimates for simulated returns that have the properties of actual fund returns, except that true α is zero for every fund. Oversampling and undersampling of fund returns in a simulation run should also about balance out in the 10,000 runs used in our inferences. A qualification of this conclusion is in order. In a simulation run, as in the tests on actual returns, we discard funds that have less than 8 months of returns. This means we end up with a bit more oversampling of fund returns. As a result, the distributions of t(α) estimates in the simulations tend to have more degrees of freedom (and thinner tails) than the estimates for actual fund returns. 
This means our tests are a bit biased toward finding false evidence of performance in the tails of t(α) estimates for actual fund returns.

There are two additional caveats. (i) Random sampling of months in a simulation run preserves the cross-correlation of fund returns, but we lose any effects of autocorrelation. The literature on autocorrelation of stock returns (e.g., Fama (1965)) suggests that this is a minor problem. (ii) Because we randomly sample months, we also lose any effects of variation through time in the regression slopes in (1). (The issues posed by time-varying slopes are discussed by Ferson and Schadt (1996).) Capturing time variation in the regression slopes poses thorny problems, and we leave this potentially important issue for future work.

To develop perspective on the simulations, we first compare, in qualitative terms, the percentiles of the cross-section of t(α) estimates from actual fund returns and the average values of the percentiles from the simulations. We then turn to likelihood statements about whether the cross-section of t(α) estimates for actual fund returns points to the existence of skill.

B. First Impressions

When we estimate a benchmark model on the returns of each fund in an AUM group, we get a cross-section of t(α) estimates that can be ordered into a cumulative distribution function (CDF) of t(α) estimates for actual fund returns. A simulation run for the same combination of benchmark model and AUM group also produces a cross-section of t(α) estimates and its CDF for a world in which true α is zero. In our initial examination of the simulations we compare (i) the values of t(α) at selected percentiles of the CDF of the t(α) estimates from actual fund returns and (ii) the averages across the 10,000 simulation runs of the t(α) estimates at the same percentiles.
For example, the first percentile of three-factor t(α) estimates for the net returns of funds in the $5 million AUM group is −3.87, versus an average first percentile of −2.50 from the 10,000 three-factor simulation runs for the net returns of funds in this group (Table III). For each combination of gross or net returns, AUM group, and benchmark model, Table III shows the CDF of t(α) estimates for actual returns and the average of the 10,000 simulation CDFs.

The average simulation CDFs are similar for gross and net returns and for the two benchmark models. This is not surprising since true α is always zero in the simulations. The dispersion of the average simulation CDFs decreases from lower to higher AUM groups. This is at least in part a degrees of freedom effect; on average, funds in lower AUM groups have shorter sample periods.

B.1. Net Returns

The Berk and Green (2004) prediction that most fund managers have sufficient skill to cover their costs fares poorly in Table III. The left tail percentiles of the t(α) estimates from actual net fund returns are far below the corresponding average values from the simulations. For example, the 10th percentiles of the actual t(α) estimates, −2.34, −2.37, and −2.53 for the $5 million, $250 million, and $1 billion groups, are much more extreme than the average estimates from the simulations, −1.32, −1.31, and −1.30.

The right tails of the t(α) estimates also do not suggest widespread skill sufficient to cover costs. In the tests that use the three-factor model, the t(α) estimates from the actual net returns of funds in the $5 million group are below the average values from the simulations for all percentiles below the 98th. For the $1 billion group, only the 99th percentile of three-factor t(α) for actual net fund returns is above the average simulation 99th percentile, and then only slightly.
For the $250 million group, the percentiles of three-factor t(α) for actual net fund returns are all below the averages from the simulations. Figure 1 shows the actual and average simulated CDFs for the $5 million AUM group.

Table III
Percentiles of t(α) Estimates for Actual and Simulated Fund Returns: January 1984 to September 2006

The table shows values of t(α) at selected percentiles (Pct) of the distribution of t(α) estimates for actual (Act) net and gross fund returns. The table also shows the percent of the 10,000 simulation runs that produce lower values of t(α) at the selected percentiles than those observed for actual fund returns (% < Act). Sim is the average value of t(α) at the selected percentiles from the simulations. The period is January 1984 to September 2006, and results are shown for the three-factor and four-factor models for the $5 million, $250 million, and $1 billion AUM fund groups. There are 3,156 funds in the $5 million group, 1,422 in the $250 million group, and 660 in the $1 billion group.

[Table body omitted; panels: 3-Factor Net Returns, 4-Factor Net Returns, 3-Factor Gross Returns, and 4-Factor Gross Returns, each for the $5 Million, $250 Million, and $1 Billion groups.]

Evidence of skill sufficient to cover costs is even weaker with an adjustment for momentum exposure. In the tests that use the four-factor model, the percentiles of the t(α) estimates for actual net fund returns are always below the average values from the simulations. In other words, the averages of the percentile values of four-factor t(α) from the simulations of net returns (where by construction skill suffices to cover costs) always beat the corresponding percentiles of t(α) for actual net fund returns.

[Figure 1. Simulated and actual cumulative density function of three-factor t(α) for net returns, 1984–2006.]
There is a glimmer of hope for investors in the tests on net returns. Even in the four-factor tests, the 99th and, for the $5 million group, the 98th percentiles of the t(α) estimates for actual fund returns are close to the average values from the simulations. This suggests that some fund managers have enough skill to produce expected benchmark-adjusted net returns that cover costs. This is, however, a far cry from the prediction of Berk and Green (2004) that most if not all fund managers can cover their costs.

B.2. Gross Returns

It is possible that the fruits of skill do not show up more generally in net fund returns because they are absorbed by expenses. The tests on gross returns in Table III show that adding back the costs in expense ratios pushes up t(α) for actual fund returns. For all AUM groups, however, the left tail of three-factor t(α) estimates for actual gross fund returns is still to the left of the average from the simulations. For example, in the simulations the average value of the fifth percentile of t(α) for gross returns for the $5 million group is −1.71, but the fifth percentile from actual fund returns is much lower, −2.19.

[Figure 2. Simulated and actual cumulative density function of three-factor t(α) for gross returns, 1984–2006.]

Thus, the left tails of the CDFs of three-factor t(α) suggest that when returns are measured before expenses, there are inferior fund managers whose actions result in negative true α relative to passive benchmarks. Conversely, the right tails of three-factor t(α) suggest that there are superior managers who enhance expected returns relative to passive benchmarks. For the $5 million AUM group, the CDF of t(α) estimates for actual gross fund returns moves to the right of the average from the simulations at about the 60th percentile.
For example, the 95th percentile of t(α) for funds in the $5 million group averages 1.68 in the simulations, but the actual 95th percentile is higher, 2.04. For the two larger AUM groups the crossovers occur at higher percentiles, around the 80th percentile for the $250 million group and the 90th percentile for the $1 billion group. Figure 2 graphs the results for the threefactor benchmark and the $5 million AUM group. The four-factor results for gross returns in Table III are similar to the threefactor results, with a minor nuance. Adding a momentum control tends to shrink slightly the left and right tails of the cross-sections of t(α) estimates for actual fund returns. This suggests that funds with negative three-factor α estimates tend to have slight negative MOMt exposure and funds with positive three-factor α tend to have slight positive exposure. Controlling for momentum pulls the α estimates toward zero, but only a bit. Finally, the average simulation distribution of t(α) for the $5 million fund group is like a t distribution with about 24 degrees of freedom. The average sample life of these funds is 112 months, so we can probably conclude that the Luck versus Skill in Mutual Fund Returns simulation distributions of t(α) are more fat-tailed than can be explained by degrees of freedom. This may be due in part to fat-tailed distributions of stock returns (Fama (1965)). A referee suggests that active trading may also fatten the tails of fund returns. And properties of the joint distribution of fund returns may have important effects on the cross-section of t(α) estimates—a comment of some import in our later discussion of Kosowski et al. (2006). C. Likelihoods Comparing the percentiles of t(α) estimates for actual fund returns with the simulation averages gives hints about whether manager skill affects expected returns. 
Table III also provides likelihoods, in particular, the fractions of the 10,000 simulation runs that produce lower values of t(α) at selected percentiles than actual fund returns. These likelihoods allow us to judge more formally whether the tails of the cross-section of t(α) estimates for actual fund returns are extreme relative to what we observe when true α is zero. Specifically, we infer that some managers lack skill sufficient to cover costs if low fractions of the simulation runs produce left tail percentiles of t(α) below those from actual net fund returns, or equivalently, if large fractions of the simulation runs beat the left tail t(α) estimates from actual net fund returns. Likewise, we infer that some managers produce benchmark-adjusted expected returns that more than cover costs if large fractions of the simulation runs produce right tail percentiles of t(α) below those from actual net fund returns. The logic is similar for gross returns, but the question is whether there are managers with skill sufficient to cover the costs (primarily trading costs) missing from expense ratios. There are two problems in drawing inferences from the likelihoods in Table III. (i) Results are shown for many percentiles so there is a multiple comparisons issue. (ii) The likelihoods for different percentiles are correlated. One way to address these problems is to focus on a given percentile of each tail of t(α), for example, the 5th and the 95th percentiles, and draw inferences entirely from them. But this approach discards lots of information. We prefer to examine all the likelihoods, with emphasis on the extreme tails, where performance is most likely to be identified. As a result, our inferences from the formal likelihoods are somewhat informal. C.1. Net Returns The likelihoods in Table III confirm that skill sufficient to cover costs is rare. 
Below the 80th percentile, the three-factor t(α) estimates for actual net fund returns beat those from the simulations in less than 1.0% of the net return simulation runs. For example, the 70th percentile of the cross-section of threefactor t(α) estimates from the net returns of $5 million funds (our full sample) is 0.08, and only 0.51% (about half of one percent) of the 10,000 simulation runs for this group produce 70th percentile t(α) estimates below 0.08. It seems safe to conclude that most fund managers do not have enough skill to produce The Journal of FinanceR benchmark-adjusted net returns that cover costs. This again is bad news for Berk and Green (2004) since their model predicts that skill sufficient to cover costs is the general rule. The likelihoods for the most extreme right tail percentiles of the three-factor t(α) estimates in Table III also confirm our earlier conclusion that some managers have sufficient skill to cover costs. For the $5 million group, the 97th , 98th , and 99th percentiles of the cross-section of three-factor t(α) estimates from actual net fund returns are close to the average values from the simulations, and 49.35% to 58.70% of the t(α) estimates from the 10,000 simulation runs are below those from actual net returns. The likelihoods that the highest percentiles of the t(α) estimates from the net returns of funds in the $5 million group beat those from the simulations drop below 40% when we use the four-factor model to measure α, but the likelihoods nevertheless suggest that some fund managers have enough skill to cover costs. Some perspective is helpful. For the $5 million group, about 30% of funds produce positive net return α estimates. The likelihoods in Table III tell us, however, that most of these funds are just lucky; their managers are not able to produce benchmark-adjusted expected returns that cover costs. For example, the 90th percentile of the t(α) estimates for actual net fund returns is near 1.00. 
The average standard error of the α estimates is 0.28% (monthly), which suggests that funds around the 90th percentile of t(α) beat our benchmarks by more than 3.3% per year for the entire period they are in the sample. These managers are sure to be anointed as highly skilled active investors. But about 90% of the net return simulation runs produce 90th percentiles of t(α) that beat those from actual fund returns. It thus seems that, like funds below the 90th percentile, most funds around the 90th percentile do not have managers with sufficient skill to cover costs; that is, true net return α is negative. The odds that managers have enough skill to cover costs are better for funds at or above the 97th percentile of the t(α) estimates. In the $5 million group, funds at the 97th , 98th , and 99th percentiles of three-factor t(α) estimates do about as well as would be expected if all fund managers were able to produce benchmark-adjusted expected returns that cover costs. But this just means that our estimate of true net return three-factor α for these funds is close to zero. If we switch to the four-factor model, the estimate of true α is negative for all percentiles of the t(α) estimates since the percentiles from actual net fund returns beat those from the simulations in less than 40% of the simulation What mix of active funds might generate the net return results in Table III? Suppose there are two groups of funds. Managers of good funds have just enough skill to produce zero α in net returns; bad funds have negative α. When the two groups are mixed, the expected cross-section of t(α) estimates is entirely to the left of the average of the cross-sections from the net return simulation runs (in which all managers have sufficient skill to cover costs). 
Even the extreme right tail of the t(α) estimates for actual net fund returns will be weighed down by bad managers who are extremely lucky but have smaller t(α) estimates than if they were extremely lucky good managers. In our tests, Luck versus Skill in Mutual Fund Returns most of the cross-section of t(α) estimates for actual net fund returns is way left of what we expect if all managers have zero true α. Thus, most funds are probably in the negative true α group. At least for the $5 million AUM sample, the 97th , 98th , and 99th percentiles of the three-factor t(α) estimates for actual net fund returns are similar to the simulation averages. This suggests that buried in the results are fund managers with more than enough skill to cover costs, and the lucky among them pull up the extreme right tail of the net return t(α) estimates. Unfortunately, these good funds are indistinguishable from the lucky bad funds that land in the top percentiles of the t(α) estimates but have negative true α. As a result, our estimate of the three-factor net return α for a portfolio of the top three percentiles of the $5 million group is near zero; the positive α of the lucky (but hidden) good funds is offset by the negative α of the lucky bad funds. And when we switch to the four-factor model, our estimate of true α turns negative even for the top three percentiles of the t(α) estimates. Finally, our tests exclude index funds, but we can report that for 1984 to 2006 the net return three-factor α estimate for the VW portfolio of index funds (in which large, low cost funds get heavy weight) is −0.16% per year (−0.01% per month, t = −0.61), and four-factor α is 0.01% per year (t = 0.02). Since large, low cost index funds are not subject to the vagaries of active management, it seems reasonable to infer that the net return true α for a portfolio of these funds is close to zero. 
In other words, going forward we expect that a portfolio of low cost index funds will perform about as well as a portfolio of the top three percentiles of past active winners, and better than the rest of the active fund C.2. Gross Returns The simulation tests for net returns ask whether active managers have sufficient skill to cover all their costs. In the tests on gross returns, the bar is lower. Specifically, the issue is whether managers have enough skill to at least cover the costs (primarily trading costs) missing from expense ratios. The three-factor gross return simulations for the $5 million AUM group suggest that most funds in the left tail of three-factor t(α) estimates do not have enough skill to produce benchmark-adjusted expected returns that cover trading costs, but many managers in the right tail have such skill. For the 40th and lower percentiles, the three-factor t(α) estimates for the actual gross returns of funds in the $5 million group beat those from the simulations in less than 30% of the simulation runs, falling to less than 6% for the 10th and lower percentiles. Conversely, above the 60th percentile, the three-factor t(α) estimates for actual gross fund returns beat those from the simulations in at least 56% of the simulation runs, rising to more than 90% for the 96th and higher percentiles. As usual, the results are weaker when we switch from three-factor to four-factor benchmarks, but the general conclusions are the same. For many readers, the important insight of Berk and Green (2004) is their assumption that there are diseconomies of scale in active management, not their detailed predictions about net fund returns (which are rejected in our tests). The Journal of FinanceR The right tails of the t(α) estimates for gross returns suggest diseconomies. 
The extreme right tail percentiles of t(α) are typically lower for the $250 million and $1 billion groups than for the $5 million group, and more of the simulation runs beat the extreme right tail percentiles of the t(α) estimates for the larger AUM funds. In the world of Berk and Green (2004), however, the weeding out of unskilled managers should also lead to left tails for t(α) estimates that are less extreme for larger funds. This prediction is not confirmed in our results. The left tails of the t(α) estimates for the $250 million and $1 billion groups are at least as extreme as the left tail for the $5 million group. This contradiction in the left tails of the t(α) estimates makes us reluctant to interpret the right tails as evidence of diseconomies of scale. The tests on gross returns point to the presence of skill (positive and negative). We next estimate the size of the skill effects. A side benefit is evidence on the power of the simulation tests. IV. Estimating the Distribution of True α in Gross Fund Returns To examine the likely size of the skill effects in gross fund returns we repeat the simulations but with α injected into fund returns. We then examine (i) how much α is necessary to reproduce the cross-section of t(α) estimates for actual gross fund returns, and (ii) levels of α too extreme to be consistent with the t(α) estimates for actual fund returns. Given the evidence that, at least for the $5 million group (our full sample), the distribution of t(α) estimates in gross fund returns is roughly symmetric about zero (Table III), it is reasonable to assume that true α is distributed around zero. It is also reasonable to assume that extreme levels of skill (good or bad) are rare. Concretely, we assume that each fund is endowed with a gross return α drawn from a normal distribution with a mean of zero and a standard deviation of σ per year. The new simulations are much like the old. 
The first step again is to adjust the gross returns of each fund, setting α to zero for the three-factor and fourfactor benchmarks and each of the three AUM groups. But now, before drawing the random sample of months for a simulation run, we draw a true α from a normal distribution with mean zero and standard deviation σ per year—the same α for every combination of benchmark model and AUM group for a given fund, but an independent drawing of α for each fund. It seems reasonable that more diversified funds have less leeway to generate true α. To capture this idea, we scale the α drawn for a fund by the ratio of the fund’s (three-factor or four-factor) residual standard error to the average standard error for all funds. We add the scaled α to the fund’s benchmarkadjusted returns. We then draw a random sample (with replacement) of 273 months, and for each fund we estimate three-factor and four-factor regressions on the adjusted gross returns of the fund’s three AUM samples. The simulations thus use returns that have the properties of actual fund returns, except we know true α has a normal distribution with mean zero and (for the “average” fund) standard deviation σ per year. We do 10,000 simulation runs, and a fund Luck versus Skill in Mutual Fund Returns gets a new drawing of α in each run. To examine power, we vary σ , the standard deviation of true α, from 0.0% to 2.0% per year, in steps of 0.25%. Table IV shows percentiles of the cross-section of t(α) estimates for actual gross fund returns (from Table III) and the average t(α) estimates at the same percentiles from the 10,000 simulation runs, for each value of σ . These are useful for judging how much dispersion in true α is consistent with the actual cross-section of t(α) estimates. For each σ , the table also shows the fraction of the simulation runs that produce percentiles of t(α) estimates below those from actual fund returns. 
We use these for inferences about the amount of dispersion in true α we might rule out as too extreme. A. Likely Levels of Performance If true α comes from a normal distribution with mean zero and standard deviation σ , Table IV provides two slightly different ways to infer the value of σ . We can look for the value of σ that produces average simulation percentile values of t(α) most like those from actual fund returns. Or we can look for the σ that produces simulation t(α) estimates below those for actual returns in about 50% of the simulation runs. If α has a normal distribution with mean zero and standard deviation σ , we expect the effects of the level of σ to become stronger as we look further into the tails of the cross-section of t(α). Thus, we are most interested in values of σ that match the extreme tails of the t(α) estimates for actual gross fund returns. The normality assumption for true α is an approximation. We do not expect that a single value of σ (the standard deviation of true α) completely captures the tails of the t(α) estimates for actual fund returns, even if we allow a different σ for each tail. With this caveat, the three-factor and four-factor simulations for the $5 million group suggest that σ around 1.25% to 1.50% per year captures the extreme left tail of the t(α) estimates for actual gross fund returns, and 1.25% works for the right tail. For the $250 million and $1 billion groups, the three-factor simulations again suggest σ around 1.25% to 1.50% per year for the left tail of the t(α) estimates for gross fund returns, but for the right tail σ is lower, 0.75% to 1.00% per year. In the four-factor simulations for the $250 and $1 billion groups σ = 1.25% per year seems to capture the extreme left tail of the t(α) estimates for gross fund returns, but the estimate of σ for the right tail is again lower, 0.75% per year. (To save space, Table IV shows results only for the $5 million and $1 billion AUM groups.) 
The estimates do not suggest much performance, especially for larger funds. Thus, σ = 1.25% says that about one-sixth of funds have true gross return α greater than 1.25% per year (about 0.10% per month) and only about 2.4% have true α greater than 2.50% per year (0.21% per month). For perspective, the average of the OLS standard errors of individual fund α estimates—the average imprecision of α estimates—is 0.28% per month (3.4% per year). Moreover, much lower right tail σ estimates for the $250 million and $1 billion funds say that a lot of the right tail performance observed in the full ($5 million) sample is due to tiny funds. Table IV 3-Factor α, AUM &gt; 1 Billion 3-Factor α, AUM &gt; 5 Million Average t(α) from Simulations Average t(α) from Simulations Percent of Simulations below Actual Percent of Simulations below Actual The table shows values of t(α) at selected percentiles (Pct) of the distribution of t(α) estimates for Actual gross fund returns (repeated from Table III). The table also shows the average values of the t(α) estimates at the same percentiles from the 10,000 simulations, for seven values of σ (the annual standard deviation of injected α). The final seven columns of the table show, for each value of σ , the percent of the 10,000 simulation runs that produce lower t(α) estimates at the selected percentiles than actual fund returns. The period is January 1984 to September 2006 and results are shown for the three- and four-factor models for the $5 million and $1 billion AUM fund groups. Percentiles of t(α) Estimates for Actual and Simulated Gross Fund Returns with Injected α The Journal of FinanceR 4-Factor α, AUM &gt; 1 Billion 4-Factor α, AUM &gt; 5 Million Average t(α) from Simulations Table IV—Continued Average t(α) from Simulations Percent of Simulations below Actual Percent of Simulations below Actual Luck versus Skill in Mutual Fund Returns The Journal of FinanceR Our gross fund returns are net of trading costs. 
Returning trading costs to funds (if that is deemed appropriate) would increase the t(α) estimates in both the left and the right tails, which, depending on the (unknown) magnitudes, may move them toward more similar estimates of σ . B. Unlikely Levels of Performance What levels of σ can we reject? The answer depends on how confident we wish to be about our inferences. Suppose we are willing to accept a 20% chance of setting a lower bound for σ that is too high and a 20% chance of setting an upper bound that is too low. These bounds imply a narrower range than we would have with standard significance levels, but they are reasonable if our goal is to provide perspective on likely values of σ . Under the 20% rule, the lower bound for the left tail estimate of σ is the value that produces left tail percentile t(α) estimates below those from actual fund returns in about 20% of the simulation runs. The upper bound for the left tail σ is the value that produces left tail percentiles of t(α) below those from actual fund returns in about 80% of the simulation runs. Conversely, under the 20% rule, the lower bound for the right tail σ estimate produces right tail percentile t(α) estimates below those from actual fund returns in about 80% of the simulation runs. And the upper bound for the right tail σ produces right tail percentiles of t(α) below those from actual fund returns in about 20% of the simulation runs. In brief, applying the 20% rule leads to intervals for σ that are equal to the point estimates of the preceding section plus and minus 0.5%. For example, 1.25% per year works fairly well as the left tail estimate of σ for all AUM groups and for the three-factor and four-factor models, and the interval for the left tail σ estimates is 0.75% to 1.75%. For the $5 million group, σ = 1.25% also works for the right tail, and the interval is again 0.75% to 1.75%. 
For the $250 million and $1 billion groups, the right tail estimate of σ drops to about 0.75% per year, and the 20% rule leads to an interval for σ from 0.25% to 1.25% per What do these results say about the power of the simulation approach? The upper bound on σ for the $5 million group, 1.75% per year, translates to a monthly σ for the cross-section of true α of about 0.146%. Suppose the standard error of each fund’s α estimate is 0.28% per month (the sample average). With a monthly σ of 0.146%, the standard deviation of the cross-section of α estimates—caused by measurement error and dispersion in true α—is (0.1462 + 0.282 )1/2 = 0.316%. This is only a bit bigger than 0.299%, the standard deviation implied by our estimate of σ for the $5 million group, 1.25% per year. The fact that the simulations assign a relatively low probability to σ ≥ 1.75% despite the small difference between the implied standard deviations of the α estimates for σ = 1.25% (the point estimate) and σ = 1.75% suggests that the simulations have power. The source of the power is our large sample of funds (3,156 in the $5 million group). With so many funds, the percentiles of t(α) Luck versus Skill in Mutual Fund Returns are estimated precisely, which produces power to draw inferences about σ . (We thank a referee for this insight.) V. Kosowski et al. (2006) The paper closest to ours is Kosowski et al. (2006). They use bootstrap simulations to draw inferences about performance in the cross-section of four-factor t(α) estimates for net fund returns. Their main inference is more positive than ours. They find that the 95th and higher percentiles of four-factor t(α) estimates for net fund returns are above the same simulation percentiles in more than 99% of simulation runs. This seems like strong evidence that among the best funds, many have more than sufficient skill to cover costs. Our simulations on net returns uncover much less evidence of skill. 
Two features of their tests account for their stronger results—simulation approach and time We jointly sample fund (and explanatory) returns, whereas Kosowski et al. (2006) do independent simulations for each fund. The benefit of their approach is that the number of months a fund is in a simulation run always matches the fund’s actual number of months of returns. The cost is that their simulations do not take account of the correlation of α estimates for different funds that arises because a benchmark model does not capture all common variation in fund returns. They summarize but do not show simulations that jointly sample the four-factor residuals of funds. But they never jointly sample fund returns and explanatory returns, which means (for example) they miss any effects of correlated movement in the volatilities of four-factor explanatory returns and residuals. In fact, in the results they show, the explanatory returns do not vary across simulation runs; the historical sequence of explanatory returns is used in every run. Their rules for including funds in the simulation tests are also different. They include the complete return histories of all funds that survive more than 60 months (so there is survival bias). We include funds after they pass $5 million in AUM if they have at least 8 months of returns thereafter (less survival bias). Table V shows simulation results for their 1975 to 2002 period using (i) their rules for including funds and (ii) our rules. Note that both sets of simulations use our approach to drawing simulation samples, that is, a simulation run uses the same random sample of months for all funds, which allows for all effects implied by the joint distribution of fund returns, and of fund and explanatory The rules used to include funds affect the cross-section of t(α) estimates for actual fund returns. Specifically, the right tail t(α) estimates for actual fund returns are less extreme for our sample. 
This suggests that their rule that a fund must have at least 60 months of returns produces more survival bias than our 8-month rule. Another possibility is that some funds have high returns when they are tiny but do not do as well after they pass $5 million. This may be due in part to an incubation bias in the fund sample of Kosowski et al. (2006), The Journal of FinanceR Table V Percentiles of Four-Factor t(α) for Actual and Simulated Fund Returns: 1975 to 2002 The table shows values of four-factor t(α) at selected percentiles (Pct) of the distribution of t(α) for actual (Act) net and gross fund returns for funds selected using the exclusion rules of Kosowski et al. (2006) and for funds in our $5 million AUM group selected using our exclusion rules. The period is 1975 to 2002 (as in Kosowski et al. (2006)). The table also shows the fraction (%&lt;Act) of the 10,000 simulation runs that produce lower values of t(α) at the selected percentiles than those observed for actual fund returns. Sim is the average value of t(α) at the selected percentiles from the simulations. Kosowski et al. Exclusion Rules Our Exclusion Rules since they include a fund’s entire return history if the fund survives for 60 For either sample of funds, joint sampling of fund returns (our approach) affects the simulation results. Kosowski et al. (2006) report that more than 99% of their simulation runs produce 95th percentile four-factor t(α) estimates below the 95th percentile from actual net fund returns. In Table V, the number drops to 82.42% for the fund sample selected using their rules and 68.32% using our rules. Skipping the details, we can report that the stronger performance results from the fund sample chosen using their rules is due to the 60-month survival rule. If the survival rule is reduced to 8 months, their rules for including funds produce simulation results close to ours. 
The important point, however, is that whatever inclusion rules are used, failure to account for the joint distribution of fund returns, and of fund and explanatory returns, biases the inferences of Kosowski et al. (2006) toward positive performance. (Cuthbertson, Nitzche, and O’Sullivan (2008) apply the simulation approach of Kosowski Luck versus Skill in Mutual Fund Returns et al. to U.K. mutual funds, with similar results and, we guess, similar Time period is also an important source of differences in results. Our simulations for 1984 to 2006 produce much less evidence of funds with sufficient skill to cover costs. In Table III, the CDFs of four-factor t(α) estimates for the net fund returns of 1984 to 2006 are always to the left of the average CDFs from the net return simulations (in which funds have sufficient skill to cover costs). Even in the extreme right tail of four-factor t(α) for net returns, more than 60% of the simulation runs beat the t(α) estimates for actual fund returns. But when our approach is applied to the 1975 to 2002 period of Kosowski et al. (2006), the 90th and higher percentiles of t(α) for net fund returns are above the average values from the simulations (Table V). And for the 97th and higher percentiles, less than 20% of the simulation runs beat the t(α) estimates for actual fund returns. What do we make of the stronger results for 1975 to 2002 versus 1984 to 2006? One story is that in olden times there were fewer funds and a larger percentage of managers with sufficient skill to cover costs. Over time the skilled managers lost their edge or went on to more lucrative pursuits (e.g., hedge funds). Or perhaps, the entry of hordes of mediocre managers posing as skilled (Cremers and Petajisto (2009)) buries the tracks of true skill. Stronger results for 1975 to 2002 may also be due to biases in the CRSP data that are more prevalent in earlier years (Elton et al. (2001)). 
Whatever the explanation, the stronger evidence for performance during 1975 to 2002 is interesting, but irrelevant for today’s investors. VI. Conclusions For 1984 to 2006, when the CRSP database is relatively free of biases, mutual fund investors in aggregate realize net returns that underperform CAPM, three-factor, and four-factor benchmarks by about the costs in expense ratios. Thus, if there are fund managers with enough skill to produce benchmarkadjusted expected returns that cover costs, their tracks are hidden in the aggregate results by the performance of managers with insufficient skill. When we turn to individual funds, the challenge is to distinguish skill from luck. With 3,156 funds in our full ($5 million AUM) sample, some do extraordinarily well and some do extraordinarily poorly just by chance. To distinguish between luck and skill, we compare the distribution of t(α) estimates from actual fund returns with the distribution from bootstrap simulations in which all funds have zero true α. The tests on net returns say that few funds have enough skill to cover costs. The distribution of three-factor t(α) estimates from net fund returns is almost always to the left of the zero α distribution. The extreme right tail of the three-factor t(α) estimates for net fund returns, however, is roughly in line with the simulated distribution. This suggests that some managers do have sufficient skill to cover costs. But the estimate of net return three-factor true α is about zero even for the portfolio of funds in the top percentiles of historical three-factor t(α) estimates, and the estimate of four-factor true α is The Journal of FinanceR negative. Moreover, the estimate of true α for funds in the top percentiles is no better than the estimated α (also near zero) for large, efficiently managed passive funds. 
The simulation results for gross fund returns say that when returns are measured before the costs in expense ratios, there is stronger evidence of manager skill, negative as well as positive. For our $5 million AUM sample, true three-factor or four-factor gross return α seems to be symmetric about zero with a cross-section standard deviation of about 1.25% per year (about 10 basis points per month). For larger ($250 million and $1 billion AUM) funds, the standard deviation for the left tail is again about 1.25% per year, but the right tail standard deviation of true α falls to about 0.75%. Appendix A: Measurement Issues in Gross Returns The question in the tests on gross fund returns is whether managers have skill that causes expected returns to differ from those of comparable passive benchmarks. For this purpose, we would like to have fund returns measured before all costs but net of non-return income like revenues from securities lending. This would put funds on the same pure return basis as the benchmark explanatory returns, so the tests could focus on the effects of skill. Our gross fund returns are before the costs in expense ratios, but they are net of other costs, primarily trading costs, and they include income from securities lending. We could attempt to add trading costs to our estimates of gross fund returns. Funds do not report trading costs, however, and even when turnover is available, estimates of trading costs are subject to large errors (Carhart (1997)). For example, trading costs are likely to vary across funds because of differences in style tilts, trading skill, and the extent to which a fund is actively managed and demands immediacy in trade execution. Trading costs can also vary through time because of changes in a fund’s management and general changes in the costs of trading. 
All this leads us to conclude that estimates of trading costs for individual funds, especially actively managed funds, are fraught with error and potential bias, and so can be misleading. As a result, we do not take that route in our tests on gross returns. An alternative approach (suggested by a referee) is to put the passive benchmarks produced by combining the explanatory returns in (1) in the same units as the gross fund returns on the left of (1). This involves taking account of the costs (primarily trading costs) not covered in expense ratios that would be borne by an efficiently managed passive benchmark with the same style tilts as the fund whose gross returns are to be explained. Vanguard's index funds are good candidates for this exercise since, except for momentum, Vanguard provides index funds (Total Stock Market Index Fund, Growth Index Fund, Value Index Fund, Small-Cap Index Fund, Small-Cap Growth Index Fund, and Small-Cap Value Index Fund) that track well-defined target passive portfolios much like the market portfolio and the components of SMBt and HMLt in (1). (We thank an Associate Editor for this insight.)

Luck versus Skill in Mutual Fund Returns

Because the Vanguard index funds closely track their targets and stock picking skill is not an issue, we can estimate the average annual costs not included in a fund's expense ratio. Specifically, we add a fund's expense ratio to its reported average annual return for the 10 years through 2008 and then subtract the result from the average annual return of the fund's target for the same period. (The same calculation for an actively managed fund would include the effects of skill, as well as the costs not in expense ratios.) For every Vanguard index fund, this estimate of the costs missed in expense ratios is negative; that is, the fund's target return, which is before all costs, beats the fund's actual net return by less than the fund's expense ratio.
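The cost estimate described above is simple arithmetic. A sketch with hypothetical numbers (the actual Vanguard returns and expense ratios are not reproduced here):

```python
# All figures are hypothetical, in percent per year.
target_return = 7.80   # target index return for the 10 years through 2008 (before all costs)
net_return    = 7.70   # fund's reported average annual net return
expense_ratio = 0.15   # fund's expense ratio

# Costs missed by the expense ratio:
# target return minus (net return grossed up by the expense ratio).
missed_costs = target_return - (net_return + expense_ratio)
print(missed_costs)  # negative here: the fund trails its target by less than its expense ratio
```

A negative `missed_costs`, as in this made-up example, is the pattern the paper reports for every Vanguard index fund.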
If anything, Vanguard's small cap index funds do better on this score than its large cap funds, a clear warning that presumptions about trading costs can be misleading. The Vanguard results are probably not unusual. We can report that the CAPM, three-factor, and four-factor α estimates for 1984 to 2006 for the net returns on a VW portfolio of index funds (which is dominated by large funds with low expense ratios) are close to zero, 0.08%, −0.16%, and 0.01% per year (t = 0.18, −0.61, and 0.02). In other words, in aggregate, wealth invested in index funds seems to earn average returns that cover costs, including trading costs.

Passive mutual funds that focus on momentum do not as yet exist, so we do not have estimates of trading costs for such funds. Existing work (Grundy and Martin (2001), Korajczyk and Sadka (2004)) suggests that the costs are significant. In our tests, however, the cross-sections of four-factor α estimates for funds are similar to the cross-sections of three-factor estimates, and the three-factor and four-factor tests produce much the same inferences. Given the large average MOMt return, these results suggest that nontrivial long-term exposure to MOMt is rare, so ignoring MOMt trading costs is inconsequential. Moreover, the discussion of results in the text centers primarily on the three-factor model. The four-factor results are primarily a robustness check. The Vanguard evidence and the results for a VW portfolio of index funds suggest that for the market and the components of SMBt and HMLt, comparable efficiently managed passive mutual funds can enhance returns through trading, securities lending, and perhaps in other ways, so that their total costs are close to their expense ratios. Thus, our three-factor α estimates for the gross returns of funds would hardly change if we adjusted their passive benchmarks for the costs missed in expense ratios. This does not mean our tests on gross returns capture the pure effects of skill.
Though expense ratios seem to capture the total costs of efficiently managed passive funds, this is less likely to be true for actively managed funds. The typical active fund trades more than the typical passive fund, and active funds are likely to demand immediacy in trading that produces positive costs. Because of their high turnover, active funds also have fewer opportunities to generate revenues via securities lending (which are also trivial for the Vanguard funds). In short, it seems more likely that for active funds the costs not included in expense ratios are positive. Thus, our tests on the gross returns of funds produce α estimates that capture the effects of skill, less any costs missed by the expense ratios of the funds. Equivalently, our tests on gross returns say that a fund's management has skill only if the fund's expected gross returns are sufficient to cover the costs (primarily trading costs) not included in its expense ratio. This is a reasonable definition of skill since a comparable efficiently managed passive fund would apparently avoid these costs. More important, this definition of skill is the only one we can accurately test in the absence of accurate estimates of the trading costs of active funds (impossible with available data). It is fortuitous that efficiently managed passive benchmarks do not seem to have substantial costs missed in their expense ratios since accurate adjustment for such costs is nontrivial, perhaps impossible. For example, consider an actively managed small value fund. The passive benchmark for the fund produced by the three-factor version of (1) is likely to imply positive weights on the market, SMB, and HML, which implies positive weights on the market (M), small stocks (S), and value stocks (H) and negative weights on big stocks (B) and growth stocks (L). Suppose that (contrary to our estimates) efficiently managed passive funds have nontrivial trading costs.
We might then increase the three-factor gross return α estimate for an active fund for the trading costs of the long positions in M, S, and H and the short positions in B and L that passively replicate the small value style of the active fund. But this is overkill. The three-factor model produces a passive clone for an actively managed fund by inefficiently combining five passive portfolios. A small value fund simply buys a diversified portfolio of small value stocks and only bears the trading costs of these stocks. As a result, even a passive small value fund evaluated with the three-factor model is likely to produce a positive α estimate if we enhance the estimate with positive trading costs for the five components of its three-factor benchmark. If we wish to adjust the tests on gross returns for the trading costs of an efficiently managed passive fund with the same style tilts as the active fund to be evaluated, the correct procedure is to add an estimate of the trading costs of a comparable efficiently managed passive fund to the active fund's gross return α estimate. For example, a small value active fund would be reimbursed for the trading costs (more precisely, for all the costs missed in the expense ratio) of an efficiently managed passive fund with the same style tilts. This is nontrivial since a style group includes active funds with widely different style tilts, and we need an efficiently managed passive clone for every active fund. Fortunately, the costs missed in expense ratios are apparently close to zero for efficiently managed passive funds, and ignoring them (as we do in our tests) is inconsequential for inferences.

Appendix B: CAPM Bootstrap Simulations

Table AI replicates the bootstrap simulations in Table III for a CAPM benchmark, that is, regression (1) with the excess market return as the only explanatory variable. The CAPM results are different.
The CAPM tests on net returns produce what seems like strong evidence that some fund managers have sufficient skill to cover costs.

Table AI
Percentiles of CAPM t(α) Estimates for Actual and Simulated Fund Returns
The table shows values of t(α) at selected percentiles (Pct) of the distribution of CAPM t(α) estimates for actual (Act) net and gross fund returns. The table also shows the percent of the 10,000 simulation runs that produce lower values of t(α) at the selected percentiles than those observed for actual fund returns (%<Act). Sim is the average value of t(α) at the selected percentiles from the simulations. The period is January 1984 to September 2006 and results are shown for the $5 million, $250 million, and $1 billion AUM fund groups. [Table values for the Net Returns and Gross Returns panels are not preserved in this extraction.]

Thus, for percentiles above the 90th, the CAPM t(α) estimates for actual net fund returns are always above the averages from the net return simulations (in which all managers have sufficient skill to cover costs), and the t(α) estimates for actual fund returns typically beat those from the simulations in more than 80% of simulation runs. Relative to the three-factor and four-factor tests in Table III, the CAPM tests on gross returns in Table AI also produce what seems like stronger evidence that some managers have skill that leads to positive true α, while others have negative true α. In fact, the CAPM results just illustrate well-known patterns in average returns that cause problems for the CAPM during our sample period. Actual mutual fund returns contain the effects of size, value-growth, and momentum tilts in fund portfolios that are missed by the CAPM. Thus, even passive funds that tilt toward small stocks, value stocks, or positive momentum stocks are likely to produce positive α estimates in CAPM tests, despite the fact that their managers make no effort to pick individual stocks.
The CAPM simulations allow for the relation between average return and market exposure, but they wash out all other patterns in average returns when they subtract each fund's CAPM α estimate from its returns. As a result, the CAPM simulations say that actual fund returns have nonzero true α. Which patterns in average returns left unexplained by the CAPM are most responsible for the differences between the CAPM simulation results and the results for the three-factor and four-factor models? Table III says that adding the momentum factor to the three-factor model has minor effects on estimates of t(α). Since the momentum return MOMt has the highest average premium during our sample period, we infer that long-term exposure to momentum is probably rare among mutual funds. The average size (SMBt) premium is trivial during our 1984 to 2006 sample period (0.03% per month, Table I), so size tilts probably are not driving the different results for the CAPM. That leaves the value (HMLt) premium as the focus of the story. Funds in the right tail of the CAPM t(α) estimates are more likely to have positive HMLt exposure that makes them look good in CAPM tests, and funds in the left tail are likely to have negative HMLt exposure. In short, the CAPM tests are a lesson about how failure to account for common patterns in returns and average returns can affect inferences about the skill of fund managers.

REFERENCES

Berk, Jonathan B., and Richard C. Green, 2004, Mutual fund flows in rational markets, Journal of Political Economy 112, 1269–1295.
Carhart, Mark M., 1997, On persistence in mutual fund performance, Journal of Finance 52,
Cremers, Martijn, and Antti Petajisto, 2009, How active is your fund manager? A new measure that predicts performance, Review of Financial Studies 22, 3329–3365.
Dybvig, Philip H., and Stephen A. Ross, 1985, The analytics of performance measurement using a security market line, Journal of Finance 40, 401–416.
Elton, Edwin J., Martin J. Gruber, and Christopher R.
Blake, 2001, A first look at the accuracy of the CRSP mutual fund database and a comparison of the CRSP and Morningstar mutual fund databases, Journal of Finance 56, 2415–2430.
Evans, Richard, 2010, Mutual fund incubation, Journal of Finance 65, Forthcoming.
Fama, Eugene F., 1965, The behavior of stock market prices, Journal of Business 38, 34–105.
Fama, Eugene F., and Kenneth R. French, 1993, Common risk factors in the returns on stocks and bonds, Journal of Financial Economics 33, 3–56.
Ferson, Wayne E., and Rudi W. Schadt, 1996, Measuring fund strategy and performance in changing economic conditions, Journal of Finance 51, 425–462.
French, Kenneth R., 2008, The cost of active investing, Journal of Finance 63, 1537–1573.
Grinblatt, Mark, and Sheridan Titman, 1992, Performance persistence in mutual funds, Journal of Finance 47, 1977–1984.
Gruber, Martin J., 1996, Another puzzle: The growth of actively managed mutual funds, Journal of Finance 51, 783–810.
Grundy, Bruce D., and J. Spencer Martin, 2001, Understanding the nature of the risks and the sources of the rewards to momentum investing, Review of Financial Studies 14, 29–78.
Jensen, Michael C., 1968, The performance of mutual funds in the period 1945–1964, Journal of Finance 23, 389–416.
Korajczyk, Robert A., and Ronnie Sadka, 2004, Are momentum profits robust to trading costs? Journal of Finance 59, 1039–1082.
Kosowski, Robert, Allan Timmermann, Russ Wermers, and Hal White, 2006, Can mutual fund "stars" really pick stocks? New evidence from a bootstrap analysis, Journal of Finance 61,
Malkiel, Burton G., 1995, Returns from investing in equity mutual funds: 1971–1991, Journal of Finance 50, 549–572.
Cuthbertson, Keith, Dirk Nitzsche, and Niall O'Sullivan, 2008, UK mutual fund performance: Skill or luck? Journal of Empirical Finance 15, 613–634.
Sharpe, William F., 1991, The arithmetic of active management, Financial Analysts Journal 47,
Categorical logic

Categorical logic is the branch of mathematics in which tools and concepts from category theory are applied to the study of mathematical logic. It is also notable for its connections to theoretical computer science. In broad terms, categorical logic represents both syntax and semantics by a category, and an interpretation by a functor. The categorical framework provides a rich conceptual background for logical and type-theoretic constructions. The subject has been recognisable in these terms since around 1970. There are three important themes in the categorical approach to logic:

Categorical semantics

Categorical logic introduces the notion of structure valued in a category C, with the classical model-theoretic notion of a structure appearing in the particular case where C is the category of sets and functions. This notion has proven useful when the set-theoretic notion of a model lacks generality and/or is inconvenient. R.A.G. Seely's modeling of various impredicative theories, such as System F, is an example of the usefulness of categorical semantics. It was found that the connectives of pre-categorical logic were more clearly understood using the concept of adjoint functor, and that the quantifiers were also best understood using adjoint functors.

Internal languages

This can be seen as a formalization and generalization of proof by diagram chasing. One defines a suitable internal language naming relevant constituents of a category, and then applies categorical semantics to turn assertions in a logic over the internal language into corresponding categorical statements. This has been most successful in the theory of toposes, where the internal language of a topos together with the semantics of intuitionistic higher-order logic in a topos enables one to reason about the objects and morphisms of a topos "as if they were sets and functions".
This has been successful in dealing with toposes that have "sets" with properties incompatible with classical logic. A prime example is Dana Scott's model of untyped lambda calculus in terms of objects that retract onto their own function space. Another is the Moggi–Hyland model of System F by an internal full subcategory of the effective topos of Martin Hyland.

Term-model constructions

In many cases, the categorical semantics of a logic provide a basis for establishing a correspondence between theories in the logic and instances of an appropriate kind of category. A classic example is the correspondence between theories of βη-equational logic over simply typed lambda calculus and Cartesian closed categories. Categories arising from theories via term-model constructions can usually be characterized up to equivalence by a suitable universal property. This has enabled proofs of meta-theoretical properties of some logics by means of an appropriate categorical algebra. For instance, Freyd gave a proof of the existence and disjunction properties of intuitionistic logic this way.

See also

• Abramsky, Samson; Gabbay, Dov (2001). Handbook of Logic in Computer Science: Logic and algebraic methods. Oxford: Oxford University Press. ISBN 0-19-853781-6.
• Gabbay, Dov (2012). Handbook of the History of Logic: Sets and extensions in the twentieth century. Oxford: Elsevier. ISBN 978-0-444-51621-3.
• Kent, Allen; Williams, James G. (1990). Encyclopedia of Computer Science and Technology. New York: Marcel Dekker Inc. ISBN 0-8247-2272-8.

Seminal papers

• Lawvere, F.W., Functorial Semantics of Algebraic Theories. In Proceedings of the National Academy of Sciences 50, No. 5 (November 1963), 869-872.
• Lawvere, F.W., Elementary Theory of the Category of Sets. In Proceedings of the National Academy of Sciences 52, No. 6 (December 1964), 1506-1511.
• Lawvere, F.W., Quantifiers and Sheaves.
In Proceedings of the International Congress on Mathematics (Nice 1970), Gauthier-Villars (1971), 329-334.

1. Lawvere, Quantifiers and Sheaves

Further reading

• Michael Makkai and Gonzalo E. Reyes, 1977, First order categorical logic, Springer-Verlag.
• Lambek, J. and Scott, P. J., 1986. Introduction to Higher Order Categorical Logic. Fairly accessible introduction, but somewhat dated. The categorical approach to higher-order logics over polymorphic and dependent types was developed largely after this book was published.
• Jacobs, Bart (1999). Categorical Logic and Type Theory. Studies in Logic and the Foundations of Mathematics 141. North Holland, Elsevier. ISBN 0-444-50170-3. A comprehensive monograph written by a computer scientist; it covers both first-order and higher-order logics, and also polymorphic and dependent types. The focus is on fibred categories as a universal tool in categorical logic, necessary for dealing with polymorphic and dependent types.
• John Lane Bell (2005). The Development of Categorical Logic. Handbook of Philosophical Logic, Volume 12. Springer. Version available online at John Bell's homepage.
• Jean-Pierre Marquis and Gonzalo E. Reyes (2012). The History of Categorical Logic 1963–1977. Handbook of the History of Logic: Sets and Extensions in the Twentieth Century, Volume 6, D. M. Gabbay, A. Kanamori & J. Woods, eds., North-Holland, pp. 689–800. A preliminary version is available at [1].
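The correspondence between simply typed lambda calculus and Cartesian closed categories mentioned under term-model constructions can be glimpsed even in an ordinary programming language. The sketch below is illustrative only, using Python functions as a stand-in for morphisms: it shows the product–exponential adjunction Hom(A × B, C) ≅ Hom(A, C^B) as the familiar curry/uncurry bijection.

```python
# curry and uncurry witness the bijection Hom(A x B, C) = Hom(A, C^B):
# a two-argument function corresponds to a function returning a function.
def curry(f):
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    return lambda a, b: g(a)(b)

def pair_sum(a, b):          # a morphism A x B -> C
    return a + b

curried = curry(pair_sum)    # the corresponding morphism A -> C^B
assert curried(2)(3) == pair_sum(2, 3)
assert uncurry(curry(pair_sum))(2, 3) == pair_sum(2, 3)   # round trip
print(curried(2)(3))
```

In a Cartesian closed category this bijection is natural in all three objects; the Python version only exhibits the two directions and their round trip.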
GMAT Question of the Day April 9th – DS Statistics

Set M contains a set of consecutive integers. What is the standard deviation of set M?

(1) Set M contains 17 numbers
(2) The median of set M is 23

April 9th GMAT Question of the Day Solution:

Consecutive sets have special properties.

1. Median = Mean = (Max + Min)/2
2. Any consecutive set with the same number of numbers has the same standard deviation. For instance: the set 1, 2, 3 has the same standard deviation as 51, 52, 53, as will any set of 3 consecutive integers. So if you know the number of numbers you can figure out the standard deviation.

Statement (1) gives us the number of numbers in the set. Sufficient.

Statement (2) only gives us the median. The set could be 22, 23, 24 or 21, 22, 23, 24, 25. These two sets have different standard deviations. Insufficient.
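The two properties the solution relies on are easy to check numerically. A quick sketch, not part of the original solution (it uses the population standard deviation; the GMAT conclusion is the same either way):

```python
import statistics

# Any n consecutive integers share the same standard deviation,
# regardless of where the run starts.
def sd_consecutive(start, n):
    return statistics.pstdev(range(start, start + n))

# Statement (1): the count alone fixes the standard deviation.
assert sd_consecutive(1, 17) == sd_consecutive(51, 17)

# Statement (2): same median (23), different standard deviations.
assert sd_consecutive(22, 3) != sd_consecutive(21, 5)

print(sd_consecutive(1, 17))
```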
Project work¶

Choose a project. There are three projects to choose from:

1. biological sequence analysis (part07)
2. regression analysis on medical data (part08)
3. fossil data analysis

The fossil data analysis project is currently not integrated to the TMC system, so for that, see the instructions in the tasks section of the course page. For the first two projects (part07 and part08), download the exercises and/or notebooks from the TMC server. Then try to solve as many exercises as you can, and submit your solutions to the TMC server. Note that the TMC server is only used here to help you proceed with the project work. Your final project work returned to Moodle can include partial solutions. The projects differ a lot in their workflow. See the sections below on the individual projects for details on how solving, testing, and reporting of exercises are done in each project. Save the report as a Jupyter notebook file (with the ipynb extension) and submit this file to Moodle. After this you will need to give feedback on your own report and two other reports. See the tasks section of the course page for instructions on this peer-review process.

Sequence analysis¶

Download part 7 from TMC after you have completed the required number of part 6 exercises as usual. The src folder contains a Jupyter notebook project_notebook_sequence_analysis.ipynb. Run the notebook and fill in the cells as instructed. You may run tmc test to check that the functions work as required. Submitting may not work, especially if you download content from the internet as part of your code. Next to each exercise in the report there are also two text boxes for you to fill in. In the first box, describe in your own words the idea of the solution to the exercise. In the second box, analyse the results, including how the program worked with the given example input or your own examples. Make sure the notebook includes your solutions and looks readable, and then submit the resulting notebook to Moodle.

NOTE.
Exercises in section “Stationary and equilibrium distributions (extra)” (exercises 20, 21, and 22) are not obligatory. Thus, you only need to do 19 exercises if you are aiming to get full points.

Regression analysis¶

Read the introduction introduction-to-regression-analysis.pdf. It looks like the TMC server corrupted the pdf; you can read it from here. Write solutions to exercises directly into the cells of the given Jupyter notebook. Do not modify lines that say # exercise x; without those the tests won't work. Don't use additional cells, and do in each cell exactly as the instructions say. Save the file and run tmc test. The tests read and execute directly the cells of the notebook. Make sure the notebook includes your solutions and looks readable, and then submit the resulting notebook to Moodle.

Running tests when peer-reviewing students' notebooks¶

If you want, you can run tests on the work you are reviewing, to help assess the correctness of the solutions. Note that there can be bugs in the tests too. Make sure you don't accidentally delete your own solutions when testing another student's work. Don't run the tests where your own solutions are.

Regression analysis¶

Go to a temporary working area (like /tmp on Unix) so you don't accidentally overwrite your own solutions. Run tmc download -a hy-data-analysis-with-python-spring-2020 to get the tests. Store the student's notebook to the file part08-e01_regression/src/project_notebook_regression_analysis.ipynb. Run the tests using tmc test part08-e01_regression.

Sequence analysis¶

Go to a temporary working area (like /tmp on Unix) so you don't accidentally overwrite your own solutions. Run tmc download -a hy-data-analysis-with-python-spring-2020 to get the tests. Overwrite the student's notebook in part07-e01_sequence_analysis/src. Run the tests using tmc test in the part07-e01_sequence_analysis folder.
Subject: Fitting and Predicting Influenza Data with SEEIIR Model in Stan

Hi everyone,

I'm currently working on fitting a SEEIIR model to influenza data and making predictions for future time points. I want to ensure that my implementation of the generated quantities block for prediction aligns with the fit-and-predict methodology recommended by the Stan manual. Below is my code.

```stan
functions {
  real[] seeiir(real t, real[] y, real[] theta, real[] x_r, int[] x_i) {
    real mu = x_r[1];
    real epsilon = x_r[2];
    real lambda = x_r[3];
    real alpha_max = x_r[4];
    int N = x_i[1];
    real alpha_min = theta[1];
    real t_max = theta[2];
    real beta_0 = theta[3];
    real e0 = theta[4];
    real i0 = theta[5];
    real r0 = theta[6];
    // real init[6] = {N - (2*i0 + 2*e0 + r0), e0, e0, i0, i0, r0};  // initial values
    // real sum_init = 0;  // variable to store the sum of the elements
    // for (i in 1:6) { sum_init += init[i]; }
    // print("Sum of elements in init array: ", sum_init);
    real S = y[1];
    real E1 = y[2];
    real E2 = y[3];
    real I1 = y[4];
    real I2 = y[5];
    real R = y[6];
    real dS_dt = -beta_0 * 0.5 * ((1 - (alpha_min / alpha_max)) * sin((2 * pi() / 365) * (t - t_max) + pi() / 2) + 1 + (alpha_min / alpha_max)) * (I1 + I2) * S / N + lambda * R;
    real dE1_dt = beta_0 * 0.5 * ((1 - (alpha_min / alpha_max)) * sin((2 * pi() / 365) * (t - t_max) + pi() / 2) + 1 + (alpha_min / alpha_max)) * (I1 + I2) * S / N - 2 * epsilon * E1;
    real dE2_dt = 2 * epsilon * (E1 - E2);
    real dI1_dt = 2 * (epsilon * E2 - mu * I1);
    real dI2_dt = 2 * mu * (I1 - I2);
    real dR_dt = 2 * mu * I2 - lambda * R;
    return {dS_dt, dE1_dt, dE2_dt, dI1_dt, dI2_dt, dR_dt};
  }
}
data {
  int<lower=1> n_days;
  int<lower=1> n_train_days;
  int<lower=1> n_test_days;
  real t0;
  real ts_train[n_train_days];
  real ts_test[n_test_days];
  real mu;
  real epsilon;
  real lambda;
  real alpha_max;
  int N;
  int cases[n_train_days / 7];
}
transformed data {
  // constant (fixed) parameters passed to the ODE solver
  real x_r[4] = {mu, epsilon, lambda, alpha_max};
  int x_i[1] = {N};
}
```
```stan
parameters {
  real<lower=0> alpha_min;
  real<lower=1> t_max;
  real<lower=0> beta_0;
  real<lower=0> e0;  // common prior for e0_1 and e0_2
  real<lower=0> i0;  // common prior for i0_1 and i0_2
  real<lower=0> r0;
  real<lower=0> phi_inv;
  real<lower=0, upper=1> p_reported;  // proportion of infected (symptomatic) people reported
}
transformed parameters {
  real y_train[n_train_days, 6];
  real incidence_train[n_train_days];
  real aggregated_incidence_train[n_train_days / 7];  // weekly aggregated incidence
  real phi = 1. / phi_inv;
  real theta[6] = {alpha_min, t_max, beta_0, e0, i0, r0};
  y_train = integrate_ode_rk45(seeiir, {N - (2*i0 + 2*e0 + r0), e0, e0, i0, i0, r0},
                               t0, ts_train, theta, x_r, x_i);
  for (i in 1:n_train_days) {
    incidence_train[i] = 2 * epsilon * y_train[i, 3] * p_reported;  // the new definition of incidence
  }
  // for (i in 1:n_train_days) { real sum_y = 0.0; for (j in 1:6) sum_y += y_train[i, j]; print("sum_y:", sum_y); }
  // aggregate incidence for every seven days
  for (j in 1:(n_train_days / 7)) {
    int start_index = (j - 1) * 7 + 1;
    int end_index = j * 7;
    aggregated_incidence_train[j] = sum(incidence_train[start_index:end_index]);
  }
}
model {
  alpha_min ~ uniform(0, 1);
  t_max ~ uniform(1, 365);
  beta_0 ~ uniform(0, 1);
  real upper_limit = N;  // upper limit as 10% of the population
  e0 ~ uniform(0, N);  // apply the same prior to e0_1 and e0_2
  i0 ~ uniform(0, N);  // apply the same prior to i0_1 and i0_2
  r0 ~ uniform(0, N);  // flat prior for r0
  phi_inv ~ exponential(5);
  p_reported ~ uniform(0, 1);
  // sampling distribution
  cases[1:(n_train_days / 7)] ~ neg_binomial_2(aggregated_incidence_train, phi);
}
generated quantities {
  real y_test[n_test_days, 6];
  real incidence_test[n_test_days];
  real aggregated_incidence_test[n_test_days / 7];
  real aggregated_incidence[n_days / 7];
  real pred_cases[n_days / 7];
  // continue the ODE from the last training state over the test times
  y_test = integrate_ode_rk45(seeiir, y_train[n_train_days], ts_train[n_train_days],
                              ts_test, theta, x_r, x_i);
  for (i in 1:n_test_days) {
    incidence_test[i] = 2 * epsilon * y_test[i, 3] * p_reported;
  }
  for (j in 1:(n_test_days / 7)) {
    int start_index = (j - 1) * 7 + 1;
    int end_index = j * 7;
    aggregated_incidence_test[j] = sum(incidence_test[start_index:end_index]);
  }
  // copy train aggregates, then test aggregates, into one array
  for (i in 1:size(aggregated_incidence_train)) {
    aggregated_incidence[i] = aggregated_incidence_train[i];
  }
  for (j in 1:size(aggregated_incidence_test)) {
    aggregated_incidence[size(aggregated_incidence_train) + j] = aggregated_incidence_test[j];
  }
  pred_cases = neg_binomial_2_rng(aggregated_incidence, phi);
}
```

1. Does this approach correctly use the generated quantities block to predict future cases based on posterior samples from y_train?
2. Is there a better way to handle y_test predictions for higher accuracy?

Thank you for your input and suggestions!

[edit: escaped code]

Hi, @Raha_ms_sci: @charlesm93 is our expert in these models. But I'll give it a go.

First, just an editorial comment. It helps immensely if you indent programs consistently (no tabs, as who knows how they'll be rendered) and put spaces around operators like * and -.

For question (1), yes, this looks like the right use of integrate_ode_rk45 to predict behavior going forward from the last y_train state. I find the name confusing because y_train isn't given from the outside; it's the expected value of y computed by the ODE in the transformed parameters block. What you are missing is the uncertainty in y_test. There is usually measurement and modeling error included where you take the ODE to generate expectations, then generate the observed data with noise around those expectations. You want to do that with your predictions, too.
I tried to explain why in this section of our User's Guide, where I distinguish the so-called 'aleatoric' uncertainty (from having a noisy process) from the 'sampling' uncertainty (from having a finite training data set). Maybe that's what you're doing with the incidence_test? I'm just used to seeing it as further sampling from the sampling distribution (the thing that generates data randomly given values of the parameters).

Dear @Bob_Carpenter,

Thank you very much for your reply; I really appreciate it! I still have some questions about my prediction setup, though. I'm new to Stan, so I apologize in advance if my questions seem basic. To clarify a bit about my code, I'm working with a flu dataset that reports weekly incidence (number of reported cases). My goal is to fit a double-compartment model (SEEIIR) to this data up to a certain week, and then make predictions for the following 4 weeks. I intend to do this repeatedly: first, we wait until around week 10 to have enough data, then fit the SEEIIR model to the first 6 data points, and predict the following 4 weeks. Starting in week 11, I add a new data point to the training set each week and again predict the next 4 weeks. Since the data is reported weekly, I need to align my simulation with this reporting interval by aggregating the incidence values for each week. I've done this by calculating aggregated weekly incidence values in my model. Given the transmission parameters and initial conditions in the SEEIIR model, each compartment, including the flow from the E2 compartment to the I1 compartment, has a unique solution over time. This flow represents exactly what we aim to capture, which I denote I_ODE(t). To connect this model-derived solution to the observed data, I consider I_obs(t), the number of infected people recorded at each time point. This observed count is a noisy estimate of the actual number of infected individuals.
To model this observed count, we use a count distribution—the Negative Binomial—which allows us to set I_ODE(t) as the expected value while accounting for over-dispersion via the parameter phi:

    I_obs(t) ~ NegBin(I_ODE(t), phi)

As shown here, I add noise to the deterministic solution of the ODE when sampling from the posterior distributions.

I know that in Stan, once the model fitting (sampling) is complete, predictions are typically made in the generated quantities block. By this point, sampling is done, and the generated quantities block performs posterior predictive checks, using the posterior distributions obtained during inference. Using these posterior samples, we can then calculate the quantities of interest, like the weekly number of cases, within the generated quantities block. This is done with

    pred_cases = neg_binomial_2_rng(aggregated_incidence, phi);

which applies a Negative Binomial distribution to incorporate over-dispersion. Finally, we can plot pred_cases along with its confidence interval to evaluate the quality of our inference. This approach makes sense to me.

My problem is somewhat different because I have unseen data (test set). I need to first generate y_ODE for the unseen data, which I have accomplished using the following line:

    y_test = integrate_ode_rk45(seeiir, y_train[n_train_days], ts_train[n_train_days], ts_test, theta, x_r, x_i);

This uses the last point of y_train as the initial condition. Now a question arises for me:

• Since y_test is a deterministic solution from our simulation, we need to add noise to it. How can I do this? Can I pass it directly to the neg_binomial_2 function, similar to what I did in the model block? (I think not, because that's the model's likelihood.) Also, in the end, when I want to plot the aggregated incidence for both the training and test sets along with their confidence intervals, I am passing the whole incidence array to the neg_binomial_2_rng distribution.
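The NegBin(I_ODE(t), phi) observation model can also be sketched outside Stan. The Python snippet below is only an illustration (the incidence values and phi are made up, and the helper name neg_binomial_2_rng is borrowed from Stan); it draws noisy counts around a deterministic expectation using the same mean/over-dispersion parameterization as Stan's neg_binomial_2, i.e. mean mu and variance mu + mu^2/phi:

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_binomial_2_rng(mu, phi, rng):
    """Draw from Stan's neg_binomial_2(mu, phi) parameterization:
    mean = mu, variance = mu + mu**2 / phi."""
    mu = np.asarray(mu, dtype=float)
    # Map (mu, phi) to numpy's (n, p) parameterization of the
    # negative binomial: n = phi, p = phi / (phi + mu).
    return rng.negative_binomial(phi, phi / (phi + mu))

# Made-up deterministic weekly incidence, standing in for the
# aggregated ODE solution, and a made-up over-dispersion phi.
aggregated_incidence = np.array([120.0, 180.0, 240.0, 200.0])
phi = 10.0

# One noisy replicate of the observed weekly case counts
draw = neg_binomial_2_rng(aggregated_incidence, phi, rng)
print(draw)
```

Repeating the draw once per posterior sample of (aggregated_incidence, phi) is what turns this into a posterior predictive distribution.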
I hope my explanation is clear, and I apologize for the lengthy message. I would appreciate your insights on whether this is a correct approach when dealing with unseen data.

I find the lengthy messages easier to answer. My usual first response is to ask for more details!

This is exactly what we were doing with UK Covid data during Covid, but we only had the audacity to predict 2 weeks ahead because Covid had a lot of external driving features. But we didn't use SIR-type models, as they weren't responsive enough to changing conditions and the baseline versions couldn't model heterogeneity of contact or spatial smoothing without breaking down into a bajillion compartments. They're presumably a better fit for flu data if it's more spatially and temporally homogeneous, but I would strongly urge you to apply posterior predictive checks and some time-series cross-validation against simpler models.

That's the right thing to do, but Stan doesn't give you a good way to do that other than completely refitting the model. You can use the estimates from the previous weeks as a warm start (for both step size and mass matrix and also the initial draw).

That's good for capturing unexplained variation, but it can be hard to fit. I've sometimes found it easier to add a random effect (the negative binomial is just a Poisson with a gamma-distributed random effect), even though it blows up the dimensionality of the problem.

The general answer is in this section of the User's Guide (which has come up three times in my last two days of posting): you add the modeled noise (in your case, exactly the negative binomial) in the generated quantities block. The ODE will predict the expected value, then the negative binomial gives you the noise. It's just like a GLM but without the linearity—we are just using an ODE for the regression.

In case you haven't seen it, this is a nice tutorial on SIR-type models in Stan:

Dear @Bob_Carpenter,

Thanks so much for answering my questions and addressing my concerns.
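The advice above is to push each posterior draw's ODE expectation through the sampling distribution. That per-draw loop can be mimicked in Python; in this sketch the arrays expected_incidence and phi_draws are hypothetical stand-ins for posterior draws (random numbers here, not output of any real fit), covering 6 training weeks plus 4 unseen test weeks:

```python
import numpy as np

rng = np.random.default_rng(1)

def neg_binomial_2_rng(mu, phi, rng):
    # Stan's neg_binomial_2(mu, phi): mean mu, variance mu + mu**2 / phi
    mu = np.asarray(mu, dtype=float)
    return rng.negative_binomial(phi, phi / (phi + mu))

# Hypothetical posterior draws: each row pairs an expected weekly
# incidence trajectory (train + test weeks) with a phi draw.
n_draws, n_weeks = 1000, 10
expected_incidence = rng.uniform(50.0, 300.0, size=(n_draws, n_weeks))
phi_draws = rng.uniform(5.0, 20.0, size=n_draws)

# One noisy posterior-predictive replicate per posterior draw:
# the ODE supplies the expectation, the negative binomial the noise.
pred_cases = np.stack([
    neg_binomial_2_rng(expected_incidence[s], phi_draws[s], rng)
    for s in range(n_draws)
])

# 90% predictive interval for each week, including the unseen weeks
lower, upper = np.percentile(pred_cases, [5, 95], axis=0)
```

The resulting intervals include both parameter uncertainty (variation across draws) and observation noise, which is the point of adding the `_rng` step rather than plotting the deterministic expectations.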
I really appreciate your help!
Course Descriptions MA 005 Precalculus Lab (1 credit) Emphasizes problem solving as applied to topics from Precalculus. Class time is spent on computer generated problem sets, workbooks in a question and answer format, and individualized work with the instructor. Topics covered include: Linear equations in one variable, graphing lines, finding equations of lines, functions, function notation, graphing functions, polynomials, exponents, and radicals. Does not satisfy mathematical sciences core requirement. (Satisfactory/Unsatisfactory). MA 103 Mathematics for Elementary Teachers: Algebraic Restricted to elementary education majors. Provides an inquiry-based examination of basic concepts, operations and structures occurring in numbers, number sense and algebraic reasoning. Students develop a deeper understanding of the numeric, arithmetic and algebraic concepts required to teach elementary school mathematics. Does not fulfill mathematics and statistics core requirement. MA 104 Mathematics for Elementary Teachers: Geometric Restricted to elementary education majors. Provides an activity-based exploration of informal geometry in two and three dimensions as well as probability and statistics. Emphasis is on visualization skills, fundamental geometric concepts, the analysis of shapes and patterns and analyzing and displaying data. Students develop a deeper understanding of mathematical concepts required to teach mathematics in elementary school. Does not fulfill mathematics and statistics core requirement. MA 109 Precalculus This is the course for students intending to take Applied Calculus (MA 151) or Calculus I (MA 251), which will allow review of several fundamental elements necessary for Calculus. These reviews include factoring, exponents and radicals; equations and inequalities; functions and relations including algebraic, exponential, logarithmic and trigonometric functions. 
Prerequisites: A score of 56 or better on Part I of the Math Placement Test or a score of 50 or better on ALEKS or a math SAT score of 560 or better or a math ACT score of 24 or better. Students not meeting the prerequisite will take corequisite MA 005 in addition to MA 109. This course does not fulfill the mathematics core requirement. It is offered Fall and Spring Semesters. ST 110 Introduction to Statistical Methods and Data Analysis An introductory statistics course requiring no calculus. Statistical methods are motivated through real data sets. Topics include graphical summaries of data, measures of central tendency and dispersion, chi-squared tests, regression model fitting, normal distributions and sampling. Offered Fall and Spring Semesters. MA 114 Mathematics and Sustainability Focuses on critical thinking and how to support arguments quantitatively in the context of sustainability. Topics include measurement, flow, connectivity, change, risk, and decision making. How to model sustainability at the local, regional, and global level is studied. Closed to students who have credit for MA/ST 200-level courses. MA 115 Introduction to Combinatorics A basic introduction to counting and its relationship to combinatorial structure. Topics may be chosen from: sets, enumeration, permutations and combinations, probability, graph theory, colorability, planarity, trees. Closed to students who have credit for MA/ST 200-level courses. MA 116 Topics in Modern Math: Ciphers and Codes Can you figure out the following message? DOO DUH ZHOFRPH? This message is an example of a cipher. There are a wide variety of different schemes for creating ciphers; in fact, one of the earliest known methods was used by Julius Caesar. The course will focus on those schemes that have a mathematical basis. We will begin with Caesar's method and end with a scheme currently used for security on the Internet. The mathematics used will be elementary and will be developed in the course.
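For the curious, the opening message in the MA 116 description is a Caesar cipher with the classical shift of three. A short Python sketch (not part of the course materials) shows how such a shift is applied and undone:

```python
def caesar_shift(text, shift):
    """Shift each letter by `shift` places in the alphabet,
    wrapping around; non-letters are left unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

# Undo Caesar's traditional shift of 3 to reveal the plaintext
print(caesar_shift("DOO DUH ZHOFRPH", -3))  # ALL ARE WELCOME
```

Shifting the plaintext forward by 3 reproduces the ciphertext, which is why the same function serves for both encryption and decryption.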
MA 117 Mathematics, Numbers and the Real World "This sentence is false." Does this statement make any sense? Why is 1 not a prime number? These questions, and even more interesting ones, will be answered as we examine reasoning and logic (inductive and deductive) in a mathematical setting. We will also look at the nature of numbers, including types of numbers and differences among kinds of numbers. We will examine the uses of numbers in real world applications such as interest, installment buying, amortization, etc. We will also look at the fascinating world of probability. For example, how many people have to be in a room so that the chances of two of them having the same birthday, not counting the year, are 50-50? The philosopher Proclus described mathematics as "the invisible form of the soul." In this course, you will experience mathematics in ways that you never thought possible. We will discover the power and beauty of mathematics by exploring some very intriguing ideas. Simultaneously, we will learn effective strategies for thinking and making decisions in our everyday lives. Some of the topics we will examine are: the beauty of numbers (What does the number of spirals on a pineapple have to do with rabbits?), infinity (Are some infinities larger than others?), modular arithmetic (On what day of the week will your birthday fall in 2057?), and financial management (How much do you need to save each month if you want to have $5000 saved up when you graduate?). Prerequisites: The only prerequisites for this course are an open and curious mind and the willingness to put aside any preconceived prejudices or dislikes for mathematics. MA 118 History of Mathematics This course surveys the development of mathematical ideas throughout history, with emphasis on critical thinking and problem solving from the historical point of view.
Topics include the historical development of numbers, calculations, geometry, algebra, and the concept of infinity in various civilizations with specific emphasis on developments in Europe, Egypt, Mesopotamia, Greece, India, and China. Connections are explored between the history of mathematics and other fields such as natural and applied sciences, social sciences and business. This course is offered sporadically. (Last offered Fall 2018.) MA 151 Applied Calculus for Business and Social Sciences A one-semester calculus course that stresses applications in business and social sciences. Every concept is considered graphically, numerically, algebraically and verbally. Graphing calculators are used to help students learn to think mathematically. This is a terminal course, so if you plan on taking more mathematics and/or minoring in mathematics or statistics, you should take MA 251 instead. Prerequisite: MA 109 or a score of 48 or better on Part II of the Math Placement Test or a score of 65 or higher on ALEKS or one year of high school calculus. Offered Fall and Spring Semesters. MA 200/ST 200 Opportunities in STEM The colloquium focuses on internships, research, and career options available to students in Computer Science, Physics, Mathematics, and Statistics through speaker talks, career center workshops, and field trips to research and industry partners. This course is intended for natural and applied science majors. Written or electronic permission of the instructor. Required for all Hyman Science Scholars in their second year. Does not count toward the 120-credit graduation requirement. Same course as CS200, PH200, ST200. (Pass/Fail) This course is only offered in the Fall Semester. ST 210 Introduction to Statistics A non-calculus-based course covering descriptive statistics, regression model fitting, probability, normal, binomial and sampling distributions, estimation and hypothesis testing.
ST 210 is not open to students who have already taken ST 265, ST/EG 381, PY 292, or EC 220. Prerequisite: MA 109 or a score of 48 or better on Part II of the Math Placement Test or a score of 65 or higher on ALEKS or one year of high school calculus. MA 251 Calculus I Definition, interpretation, and applications of the derivative and definition and interpretation of the integral are studied. Prerequisite: MA 109 or a score of 56 or better on Part II of the Math Placement Test or a score of 76 or higher on ALEKS or one year of high school calculus. Offered Fall and Spring Semesters. MA 252 Calculus II A continuation of Calculus I. Techniques and applications of integration, parametric equations, polar coordinates, sequences and series will be studied. Prerequisite: A grade of C- or better in MA 251. Offered in Fall and Spring Semesters. ST 265 Biostatistics A non-calculus-based course covering descriptive statistics, regression model fitting, probability, distributions, estimation and hypothesis testing. Applications are geared toward research and data analysis in biology and medicine. ST 265 is not open to students who have already taken ST 210, ST/EG 381, PY 292, or EC 220. Prerequisite: MA 109 or a score of 48 or better on Part II of the Math Placement Test or a score of 65 or higher on ALEKS or one year of high school calculus. This course is intended mainly for Biology majors. It is offered only in the Spring Semester. MA 295 Discrete Structures Boolean algebra, combinatorics, inductive and deductive proofs, graphs, functions and relations, recurrence. Prerequisites: CS 151; MA 109 or higher or a score of 56 or better on Part I of the Math Placement Test or a score of 50 or higher on ALEKS or one year of high school calculus. This course is limited to Computer Science Majors and Minors and is also listed as CS 295. It is offered only in the Fall Semester. MA 301 Introduction to Linear Algebra In your video games, what makes Mario jump over the barrel?
Linear Algebra! In the airline industry, what technique helps to optimize the scheduling process? Linear Algebra! In the economic world, what technique helps to minimize costs? Linear Algebra! It is the "bread and butter" of mathematics as much as calculus is. In high school, you saw linear algebra. Remember the old "two equations, two unknowns" problems? That was linear algebra. In the real world, there are 3,000 equations and 5,000 unknowns! This is LINEAR ALGEBRA!! Prerequisite: MA 252. This course is required for both mathematics and statistics majors and is usually taken in the sophomore year. MA 302 Programming in Mathematics The basics of MATLAB programming are covered through the investigation of various mathematical topics, including functions, conditional statements, loops and plotting. Prerequisite: CS 151. Pre/Corequisite: MA 301 MA 303 Discovering Information in Data Students use tools for acquiring, cleaning, analyzing, exploring, and visualizing data. This course teaches students how to make data-driven decisions and effectively communicate results. A major component of this course is learning how to use python-based programming tools to apply methods to real-life datasets including those that arise from physics applications. Written or electronic permission of the instructor. Fulfills the natural science core requirement. Does not count toward the computer science and/or physics minors for mathematics majors. Closed to students who have taken CS403, DS303, or Ph203. Same course as DS303, Ph203. MA 304 Ordinary Differential Equations This is an introductory course in ordinary differential equations (ODEs) and their application in modeling physical phenomena. In particular, the following topics are covered: first and second order ODEs, separable ODEs, existence and uniqueness of solutions and numerical solutions (using software such as MATLAB). 
Modeling plays a crucial role in the course, as do applications to other fields. Prerequisites: MA 351 or MA 252 and written permission of the instructor. Required for the mathematics major. This course is only offered in the Spring Semester. ST 310 Statistical Computing The course reviews a number of statistics topics as a vehicle for introducing students to statistical computing and programming using SAS and R for graphical and statistical analysis of data. Statistics topics include graphical and numerical descriptive statistics, probability distributions, one and two sample tests and confidence intervals, simple and multiple linear regression, and chi-square tests. SAS topics include data management, manipulation, cleaning, macros, and matrix computations. Topics in R include data frames, functions, objects, flow control, input and output, matrix computations, and the use of R packages. Lastly, this course also includes an introduction to the resampling and bootstrap approaches to statistical inference. Prerequisite: ST 210 or ST 265 or EC 220 or written permission of the department chair. ST 315 Intermediate Statistical Methods: Test & Model A non-calculus-based study of Inference for the Mean (Sample Size Determination in Interval Estimation, Type I and II Error, the Power of a Test of Hypotheses); Binary Logistic Regression; and Applied Factor Analysis. Prerequisite: ST 110 or ST 210 or ST 265 or ST 381 or EC 220 or PY 292 must be completed prior to taking this course. MA 351 Calculus III This course is a continuation of MA 252 and covers multivariable calculus. Topics covered: vectors and their geometry, parametric curves, functions of several variables, partial derivatives, multiple integrals. The course climaxes with the big theorems, namely the divergence theorem, Stokes' theorem and Green's theorem. Prerequisite: MA 252. This course is required for both mathematics and statistics majors and is usually taken in the sophomore year.
ST 381 Probability and Statistics Note: This is the same course as EG 381. Random experiments, probability, random variables, probability density functions, expectation, sample statistics, confidence intervals and hypothesis testing. Prerequisite: MA 252. Degree credit will not be given for more than one of ST 210, ST 265, and ST/EG 381. This course is offered only in the Fall Semester. MA 395 Discrete Methods The logic of compound statements, introduction to proof, mathematical induction, set theory, counting arguments, recurrence relations, permutations and combinations. An introduction to graph theory including Euler and Hamiltonian circuits and trees. Applications may include analysis of algorithms and shortest path problems. Problem solving is stressed. Prerequisite: MA 252. This course is required for both mathematics and statistics majors and is usually taken in the sophomore year. MA 421 Analysis I Calculus is an important tool (perhaps the most important) in applied mathematics and in order to apply calculus successfully it is important that a thorough understanding of calculus is achieved. In Analysis I we will explore the definitions and rigorously prove many of the results used in differential and integral Calculus, and thus the course will have a theoretical component. The ideas and methods explored play a fundamental role in many applied mathematical areas such as ordinary differential equations, probability theory (and thus statistics), numerical analysis and complex analysis. Prerequisite: MA 395 This course is required for the major and is usually taken in the junior year. It is offered only in the Fall Semester. MA 422 Analysis II This course is a continuation of Analysis I. We will finish off any unfinished business about functions of 1 variable, including sequences and series of functions. We will then talk about functions from Rn to Rm and what differentiation and integration means in the context of different combinations of values for n and m. 
For example, functions from Rn to R were studied in Calculus III, leading to partial derivatives and multiple integrals. Functions from R to R3 describe curves in space. Functions from Rn to Rm where n and m are both greater than 1 are new and will be discussed. Prerequisites: MA 351 and MA 421 This course is required for the Pure Mathematics Concentrations; it may be used for the Statistics, Secondary Education and General Program Concentrations. This course is offered in the Spring Semester of even numbered years. MA 424 Complex Analysis The subject of Complex Analysis has both pure and applied components. On one hand, one can think of complex functions as natural extensions of real functions. We will study the topology of the complex plane as we define the concepts of complex functions, the derivative and integral. On the other hand, complex functions are often used to solve problems in a wide variety of applied areas such as electrical engineering (e.g., electric circuits) and physics (e.g., air flow around an airplane wing). While applications will be introduced, our main focus will be on understanding the mathematical concepts. Prerequisite: MA 351 MA 427 Numerical Analysis This course, along with MA 428, will emphasize the development of numerical algorithms to provide stable and efficient solutions to common problems in science and engineering. MA 427 topics include direct and iterative methods appearing in linear algebra, root finding methods and interpolation. This is an introductory course in numerical analysis, the study of methods for obtaining approximate values of mathematically-defined quantities. Since most uses of mathematics in modern day society involve numerical approximation, this course is essential for any student who wishes to work in an area which involves applied mathematics. Prerequisites: MA 301, MA 302, or written permission from the instructor. 
MA 428 Computational Mathematics This course, along with MA 427, will emphasize the development of numerical algorithms to provide stable and efficient solutions to common problems in science and engineering. MA 428 topics include numerical differentiation, initial value problems, two point boundary value problems and partial differential equations. Prerequisites: MA 302, MA 304, or written permission from the instructor. MA 431 Geometry A review of Euclidean geometry and an introduction to non-Euclidean geometry. Rigorous deduction and axiom systems are emphasized. Possible techniques include the use of coordinate geometry, linear algebra, and computer geometry systems. Prerequisite: MA 395. This course is offered in the Spring Semester of even numbered years. MA 441 Ring Theory An investigation of the fundamental algebraic systems of integers, rings, polynomials and fields. Topics drawn from homomorphisms, cosets and quotient structures. Prerequisites: MA 301, MA 395 MA 442 Group Theory An investigation of the fundamental algebraic system of groups. Topics include homomorphism, cosets and quotient structures. May include applications, Sylow theory, combinatorics, coding theory, Galois theory, etc. Prerequisites: MA 301, MA 395 This course is offered in the Spring Semester of odd numbered years. MA 445 Advanced Linear Algebra A deeper study of matrices - their properties and uses. Topics include eigenvalues and eigenvectors, special factorizations of matrices and computational algorithms involving matrices. This looks to be a nice blend of theory and applications. Prerequisite: MA 301 This course may be used for the Statistics Concentration. It is frequently offered. MA 447 Number Theory Number Theory deals with properties of whole numbers and is one of the oldest and most fascinating branches of mathematics. Topics include prime numbers and the mystique surrounding them, modular arithmetic and its uses, and equations, solutions of which must be integers. 
Public-key cryptography and integer arithmetic on computers provide some applications. A nice blend of theory and applications. Prerequisite: MA 395. ST 461 Elements of Statistical Theory I: Distributions This course is the first in the two-semester sequence of probability and mathematical statistics. Probability, discrete and continuous distributions, moment generating functions, multivariate distributions, transformations of variables, and order statistics. Prerequisites: ST 210 or ST 265 or ST/EG 381 or EC 220 or PY 292; MA 351. This course is offered in the Fall Semester of even numbered years. ST 462 Elements of Statistical Theory II: Inference A continuation of ST 461. Theory of estimation and hypothesis testing, the central limit theorem, maximum likelihood estimation, Bayesian estimation and the likelihood ratio test. Prerequisite: ST 461. This course is offered in the Spring Semester of odd numbered years. ST 465 Experimental Research Methods Concepts and techniques for experimental research including simple, logistic and multiple regression, analysis of variance, and analysis of categorical data. Prerequisite: ST 210 or ST 265 or ST/EG 381 or EC 220 or PY 292. Corequisite: ST 365 (for statistics majors and statistics minors). This course is offered in the Fall Semester of odd numbered years. ST 466 Experimental Design A continuation of ST 465. The theory of linear models and its relationship to regression, analysis of variance and covariance. Coverage of interaction, blocking, replication and experimental designs: split-plot, nested, and Latin squares. Prerequisites: MA 301; ST 365; ST 465. This course is offered in the Spring Semester of even numbered years. ST 471 Statistical Quality Control Quality has become an integral part of the lives of both the consumer and the producer. Covered topics include the ideas of W.
Edwards Deming; six sigma; Shewhart concepts of process control; control charts for attributes and variables; CUSUM, EWMA, and MA charts; and factorial experimental designs. Prerequisites: ST 210 or ST 265 or ST/EG 381 or EC 220 or PY 292. This course is offered in the Fall Semester of odd numbered years. ST 472 Applied Multivariate Analysis Applications of multivariate statistical methods including: principal components, factor analysis, cluster analysis, discriminant analysis, Hotelling's t-square and multivariate analysis of variance. An applied journal article is read and summarized verbally, in written form and rewritten form. A final course project, based on an original study, is presented verbally, in written form and rewritten form. Prerequisites: Sophomore standing; ST 210 or ST 265 or ST/EG 381 or EC 220 or PY 292 or written permission of the instructor. This course is offered in the Spring Semester of even numbered years. ST 473 Statistical Learning and Big Data Covers foundations and recent advances in statistical learning for complex and massive data. Topics include nonlinear regression, smoothing splines, linear/quadratic discriminant analysis, k-nearest neighbors, regression trees, bagging, random forests, boosting, and support vector machines. Some unsupervised learning methods are discussed: principal components and clustering (k-means and hierarchical). Those methods are performed using statistical software - R and SAS. Prerequisite: (may be taken concurrently): ST 310. This course is required for statistics majors and statistics minors. It is offered only in the Fall Semester of odd numbered years. ST 475 Survival Analysis & Generalized Linear Models The course consists of two parts. The first part provides a survey of the theory and application of survival analysis. Topics include time-to-event data, types of censoring, hazard functions, survival functions, Kaplan-Meier estimators, Nelson-Aalen estimators, and Cox proportional hazards models. 
Parametric methods and various nonparametric alternatives are discussed. The second part introduces the concepts and background of generalized linear models (GLMs). Topics include exponential family distributions, likelihood functions, link functions, simple and multiple linear regression, logistic regression for binary data, and Poisson regression for count data. Those methods are performed using statistical software - R and SAS. Prerequisite: ST 310. This course is offered in the Fall Semester of even numbered years. MA 481 Operations Research This course will investigate mathematical techniques for determining optimal courses of action for decision problems under restrictions of limited resources. These techniques include the simplex algorithm, the traveling salesman algorithm, the branch and bound algorithm, and the shortest route algorithm. Prerequisite: MA 301. This course is offered sporadically. MA/ST 485 Stochastic Processes The fundamental concepts of random phenomena including: Bernoulli processes, Markov chains, Poisson processes, queuing theory, inventory theory and birth-death processes. Applied and theoretical assignments include computer simulation. Prerequisites: ST 210 or ST 265 or ST/EG 381 or EC 220 or PY 292; MA 301. It is offered only in the Spring Semester of odd numbered years. MA 490 Special Topics in Mathematics: Graph Theory The fundamentals of graphs will be discussed. Topics may include graphs, trees, connectivity, Eulerian circuits, Hamiltonian cycles, vertex and edge colorings, planar graphs and extremal problems. Prerequisite: MA 395 or permission of the instructor. MA 490 Special Topics in Mathematics: Cryptology This course will provide an introduction to classical and modern cryptology. We will study methods to encrypt messages to keep the contents secret, methods to attack these encryption schemes and the mathematics underlying these methods.
Prerequisite: MA 301. MA 490 Special Topics in Mathematics: Introduction to Non-Linear Programming Nonlinear programming deals with the problem of optimizing an objective function in the presence of equality and inequality constraints. If all the functions are linear, we have a linear program; otherwise, the problem is called a nonlinear program. In this course, we will study Unconstrained Optimization, Convex Sets and Convex Functions, Convex Programming and the Karush-Kuhn-Tucker conditions. Note: this course is the foundation of nonlinear programming; computer skill is not required. Students may be asked to do one final project using MATLAB. Prerequisites: MA 301 (Linear Algebra) and MA 351 (Multivariable Calculus). MA 490 Special Topics in Mathematics: Topology An introduction to topology. Topics include metric spaces, general topological spaces, open and closed sets, bases of topologies, continuity, connectedness, compactness, product and quotient spaces and Urysohn's metrization theorem. Prerequisite: MA 395 or permission of instructor. MA 490 Special Topics in Mathematics: Pricing Derivative Securities This course will build on mathematical models of stock and bond prices to cover two major areas of mathematical finance that have an enormous impact on the way that modern financial markets operate: the Black-Scholes arbitrage pricing model of options and other derivative securities, and Financial Portfolio Optimization Theory due to Tobin and Markowitz and the Capital Asset Pricing Model. Prerequisite: A very basic familiarity with probability, statistics, and calculus (MA 252 and ST 210). MA 490 Special Topics in Mathematics: The Art of Counting Remember when you first learned 1, 2, 3, . . . Now, years later, you have powerful mathematical tools at your command that will allow you to count more than apples: permutations, partitions, (mathematical) trees, strings, and uncountably more.
We will also look at graphs, cliques, probability, existence and extremal problems as time permits. The course will focus on problem solving. Some of the tools we will study are the pigeon-hole principle, inclusion-exclusion, generating functions and recursion. The material learned in this course, and more importantly the thought processes, can be useful in any mathematical field from the most basic to the graduate level. Combinatorics has applications to Computer Science, Statistics and much more. Prerequisite: MA 395 or permission of instructor. MA 490 Special Topics in Mathematics: Partial Differential Equations We will study a variety of types of partial differential equations and learn techniques to solve them. The emphasis will be on exact techniques, but some discussion of numerical techniques will be included. Applications will be emphasized. MA 499 Mathematics Internship Students gain a better understanding of mathematics through work experience. Interns are required to work in a business or professional environment under the guidance of an on-site supervisor for a minimum of 100 hours. The work conducted during the internship must in some way relate to mathematics or the application of the discipline to the business or professional environment. The location may be in- or out-of-state, on a paid or unpaid basis. Course requirements include a weekly work log, a scheduled performance evaluation signed by the on-site supervisor, and an updated résumé and cover letter. Offered in Fall Semesters. Prerequisite: Permission of the instructor or department chair. ST 499 Statistics Internship Students gain a better understanding of statistics through work experience. Interns are required to work in a business or professional environment under the guidance of an on-site supervisor for a minimum of 100 hours. The work conducted during the internship must in some way relate to statistics or the application of the discipline to the business or professional environment.
The location may be in- or out-of-state, on a paid or unpaid basis. Course requirements include a weekly work log, a scheduled performance evaluation signed by the on-site supervisor, and an updated résumé and cover letter. Offered in Fall Semesters. Prerequisite: Permission of the instructor or department chair.
C Functionality

In Part 1-2, we talked a little about functions in order to understand what main() was. Here we will take a much more detailed tour through functions. Consider the program:

#include <stdio.h>
#include <math.h>

float pythag(float side_a, float side_b);
float square(float x);

int main()
{
    float adjacent, opposite, hypotenuse;

    printf("What lengths are the adjacent and opposite sides? ");
    scanf("%f%f", &adjacent, &opposite);
    hypotenuse = pythag(adjacent, opposite);
    printf("The length of the hypotenuse is %f\n", hypotenuse);
    return 0;
}

float pythag(float side_a, float side_b)
{
    return sqrt(square(side_a) + square(side_b));
}

float square(float x)
{
    return x * x;
}

(We've left out the comments, as we will do in most of the examples in this tutorial.) Looking at the functions for pythag() and square() at the bottom of the program, we see that they look a lot like the definition of main that we've already seen, but with some additional features. (Ignore the pythag and square lines at the beginning of the program for now. We'll see what those are for in Part 2-5.) In particular, they specify parameters and return types. The function:

float square(float x)
{
    return x * x;
}

can be read as the C translation of the English sentence: "The floating-point square of a floating-point number, x, is given by x * x." The first float, as in float square(..., specifies that the function square will return a floating-point number as its result. The float x inside the parentheses specifies that square will take one argument (or parameter) which is a floating-point number and will be called x inside the function. If a function takes more than one parameter, we list the parameters separated by commas, as in the definition of pythag().
Pattern formation & advection

Parabolic | Patterns

Each of the examples on this page is a variation of a previous example incorporating one or more linear advection terms. This introduces a velocity parameter $V$ and, in the unidirectional case, a direction $\theta$.

Gliders swimming upstream
• We start with the glider example from the Gray–Scott model and add an advection term in the $u$ equation to get an example of drifting gliders.
• The boundaries here will destroy the patterns, as mass is lost at boundaries orthogonal to the flow. Decreasing $V$ allows the moving spots to survive longer, whereas increasing it leads to wave-selection.
• As discussed on its own page, this model has a huge range of behaviours, and these are all likely influenced by advection.

Localised Swift–Hohenberg swiftly moving
• We next consider the localised solutions from the Swift–Hohenberg equation, and consider two cases of moving patterns under advection. The first is unidirectional motion at an angle $\theta$, and the second is rotational advection.
• In both cases, changing $V$ affects the velocity of this movement. Note that if $V$ becomes too large in the rotational case, the pattern can generate structures which misbehave at the boundaries (as these interact with advection in odd ways). In particular, the rotating velocity field which is advecting $u$ is itself not periodic.
• Changing $P$ and restarting the simulation allows you to explore how these different localised solutions change their structure under advection.
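The page does not reproduce the full equations here. As a rough sketch only (the diffusion and kinetic parameter names $D_u$, $D_v$, $a$, $b$ are my own assumptions, not taken from VisualPDE), a Gray–Scott system with a unidirectional advection term on $u$ might read:

```latex
\begin{aligned}
\partial_t u &= D_u \nabla^2 u - u v^2 + a(1-u)
               - V\left(\cos\theta\,\partial_x u + \sin\theta\,\partial_y u\right),\\
\partial_t v &= D_v \nabla^2 v + u v^2 - (a+b)\,v.
\end{aligned}
```

The term $V(\cos\theta\,\partial_x u + \sin\theta\,\partial_y u)$ transports $u$ at speed $V$ in direction $\theta$; the rotational case would instead use a position-dependent velocity field of the form $V(-y, x)$, which is why it interacts oddly with non-periodic boundaries.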
Addition method of algebra

Related topics: how to do algebra for school homework | formula for factoring cubic equations | Fluid Mechanics 6th Edition Solution Manual | intermediate algebra | log rules | divide math help | scale factor | math 139, exam 2 review sheet

Blind-Sommet (Reg.: 13.03.2003) posted Thursday 04th of Jan 15:40:
Hello folks! I have a severe issue with math and I was hoping that someone might be able to help me out somehow. I have an algebra test in a few weeks and even though I have been taking math seriously, there are still a few items that cause a lot of stress, such as addition method of algebra and converting fractions especially. Last week I had a session with a private tutor, but many things still remain unclear to me. Can you propose a good way of studying, or a good tutor that you know already?

ameich (Reg.: 21.03.2005) posted Saturday 06th of Jan 09:20:
This is a common problem; don't let it get to you. You will get at ease with addition method of algebra in a couple of days. In the meantime you can use Algebrator to help you with your homework.

Jrahan (Reg.: 19.03.2002) posted Sunday 07th of Jan 09:29:
Algebrator is really a good software program that helps to deal with algebra problems. I remember facing difficulties with factoring polynomials, adding exponents and factoring expressions. Algebrator gave a step-by-step solution to my algebra homework problem on typing it in and simply clicking on Solve. It has helped me through several math classes. I greatly recommend the program.

Admilal`Leker (Reg.: 10.07.2002) posted Monday 08th of Jan 09:49:
I am a regular user of Algebrator. It not only helps me get my assignments done faster, the detailed explanations given make understanding the concepts easier. I strongly advise using it to help improve problem solving skills.
Design Like A Pro

Multiplication And Division Of Radicals Worksheet

Multiplying radicals is very simple if the index on all the radicals matches. When multiplying radical expressions with the same index, we use the product rule for radicals: given real numbers $\sqrt[n]{a}$ and $\sqrt[n]{b}$,

$\sqrt[n]{a} \cdot \sqrt[n]{b} = \sqrt[n]{a \cdot b}$,

and more generally

$a\sqrt[m]{b} \cdot c\sqrt[m]{d} = ac\sqrt[m]{bd}$.

When multiplying an expression containing radicals by a sum or difference of radicals, use this rule along with the normal procedures of algebraic multiplication. If the indices differ, first obtain their LCM; we can then rewrite each radicand with the matching power and combine the product as one radical. For division, use the quotient property of radicals to rewrite the quotient under a single radical, then reduce. Express answers in simplest radical form.

The worksheets cover adding, subtracting, multiplying, dividing, and rationalizing radical expressions, including simplifying the square of a sum or difference of radicals. There are 12 questions on the front where the student needs to add or subtract; four questions involve simplifying the radicals before adding or subtracting, and one question involves adding similar cube roots. Sample problems:

Add/subtract: 1) $-5\sqrt{3} - 3\sqrt{3}$  2) $2\sqrt{8} - \sqrt{8}$  3) $-4\sqrt{6} - \sqrt{6}$  4) $-3\sqrt{5} + 2\sqrt{5}$
Multiply: 1) $\sqrt{3} \cdot \sqrt{3}$  2) $\sqrt{10} \cdot (-3\sqrt{10})$  3) $\sqrt{8} \cdot \sqrt{8}$  4) $2\sqrt{12} \cdot 4\sqrt{15}$

Free worksheets (PDF) with answer keys are available: 25 scaffolded questions that start relatively easy and end with some real challenges, plus model problems explained step by step. You can create your own worksheets like these with Infinite Algebra 2 (free trial available at kutasoftware.com). Examples, solutions, videos, and worksheets help grade 7 and grade 8 students learn how to multiply and divide radical expressions. Division-of-radicals (online-learning-themed) worksheets are also available.
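As a worked example of the product and quotient rules (using one of the sample multiplication problems):

```latex
2\sqrt{12} \cdot 4\sqrt{15}
  = (2 \cdot 4)\sqrt{12 \cdot 15}
  = 8\sqrt{180}
  = 8\sqrt{36 \cdot 5}
  = 8 \cdot 6\sqrt{5}
  = 48\sqrt{5},
\qquad
\frac{\sqrt{50}}{\sqrt{2}} = \sqrt{\frac{50}{2}} = \sqrt{25} = 5.
```

Note that the second step in each case is exactly the rule stated above: combine under one radical, then simplify by extracting the largest perfect-square factor.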
Is consciousness to be found in quantum processes in microtubules?

"Electrochemistry, guided by life's energy, is the essence of the mathematics. Mathematics cannot create anything physical in and of itself."

I never claimed that. My claim is that mathematics comprises the rules by which creation and evolution (via natural selection) progress. To build a house you first need a blueprint. Then you need a list of materials required by the blueprint. Once those materials are available, the building can proceed in accordance with the rules set forth in the blueprint. Miss some of the instructions, and that part of the house will collapse by the mathematical laws of natural selection. The order that exists in nature is due to the deterministic mathematics that allow relational interactions to succeed or cause them to fail. It is the logical, quasi-intelligent essence of the Universe. IMO, it is also the reason there are so many religions ascribing a motivated intelligence to the creative agency. There is an illusion of intelligent design, but in the end it is just simple mathematical functions: (value) input --> (mathematical) function --> (value) output --> (observable) patterns. My claim is that only logical (mathematical) processes are necessary for the creative and evolutionary chronologies to unfold from the interaction of inherent potential values enfolded in all things.

"Electrochemistry, guided by life's energy, is the essence of the mathematics. Mathematics cannot create anything physical in and of itself. Your turn."

What in the world is "life's energy"? The "élan vital" that has been shown to be "overthinking" the creative problem? Like religion?

"For now, there is no mathematics here as the essence of the microtubules."

And that is where I believe you are missing the point. Microtubules are "data processors" and as such obey EM laws, which are mathematical in essence.
Note that microtubules are dipolar spiral coils, which means they are microprocessors. Moreover, their variable length makes them into "potentiometers", and that is an active control mechanism, not just a passive conductor. Now imagine a network of a trillion actively controlling data processors, add a few billion years of evolutionary processes producing a more finely tuned and sensitive communication network with each generation, and note that microtubules themselves evolved from prokaryotic fibrils, the common microtubule precursor.

Origin and Evolution of the Self-Organizing Cytoskeleton in the Network of Eukaryotic Organelles, Gáspár Jékely:
"The eukaryotic cytoskeleton evolved from prokaryotic cytomotive filaments. Prokaryotic filament systems show bewildering structural and dynamic complexity and, in many aspects, prefigure the self-organizing properties of the eukaryotic cytoskeleton. Here, the dynamic properties of the prokaryotic and eukaryotic cytoskeleton are compared, and how these relate to function and evolution of organellar networks is discussed. The evolution of new aspects of filament dynamics in eukaryotes, including severing and branching, and the advent of molecular motors converted the eukaryotic cytoskeleton into a self-organizing "active gel," the dynamics of which can only be described with computational models."

And IMO, this is the "beginning" of evolving emergent consciousness.

"Advances in modeling and comparative genomics hold promise of a better understanding of the evolution of the self-organizing cytoskeleton in early eukaryotes, and its role in the evolution of novel eukaryotic functions, such as amoeboid motility, mitosis, and ciliary swimming."

See what you're discussing here: information about any physical system can be as coarse-grained or fine-grained as you can manage to make it.
Evolution though is a process; this process inputs (some form of) information and outputs different information--it might only be slightly different. Difference is what entropy (in particular information entropy) is based on.

"See what you're discussing here: information about any physical system can be as coarse-grained or fine-grained as you can manage to make it."

And that means it cannot be valid either way? I know what I am discussing and at what level. So far, every general conclusion I have arrived at from available information has been confirmed by the ongoing research in this important field of inquiry.

"Evolution though is a process; this process inputs (some form of) information and outputs different information--it might only be slightly different."

I believe that's exactly what I posited, with supporting scientific links. If you are referring to the genetic mutation of chromosome 2, that has to do with "intelligence", not consciousness.

"Difference is what entropy (in particular information entropy) is based on."

What difference are you talking about? Consciousness is based on information entropy?

"If you are referring to the genetic mutation of chromosome 2, that has to do with "intelligence", not consciousness."

Nope, I didn't mention any chromosomes. I didn't mention intelligence or consciousness either.

"What difference are you talking about? Consciousness is based on information entropy?"

I was only referring to evolution making changes, nothing else. I didn't even say anything about how it does this (but in bacteria and viruses, it happens quite quickly, mostly because of transcription errors; it's how the covid variants arose over the last two years, for example).

"Nope, I didn't mention any chromosomes. I didn't mention intelligence or consciousness either."

Well, they are both products of evolutionary processes, and their functional prototypes can be measured very early in the cytoplasm and cytoskeleton of even single-celled organisms.
The evolutionary refinement and complexity of intelligence and emergent consciousness is demonstrated by all the surviving levels of species adapted to their environment: from extremophiles that require an environment deadly to all other species, to the mayfly, which spends most of its life in the larval stage and, once hatched, has just 24 hours to live and mate, with the female needing to die in water because any eggs laid on land dry out and never hatch to produce a next generation. Natural selection solved that problem by having the female produce up to 3,000 eggs and seek water while spreading her pheromones on the wind, which males some distance away can detect and follow towards her. The number of successful reproductive processes is astounding. Natural selection is extremely effective over long periods of time. This is why man has adopted the process in his quest to breed varieties of species that are purely ornamental.

"I was only referring to evolution making changes, nothing else."

Yes, and that is why the proper definition of evolution is "evolution via natural selection", where natural selection becomes the arbiter of survivability and ultimately benefits the species' gene pool.

"I didn't even say anything about how it does this (but in bacteria and viruses, it happens quite quickly, mostly because of transcription errors; it's how the covid variants arose over the last two years, for example)."

Yes, and despite the body's defenses, any cellular mutation in a nanoscale organism is relatively large and usually affects its virulence positively or negatively, because that is the only dynamic attribute viruses have. OTOH, cellular mutations in large organisms usually create relatively small variations that may or may not be beneficial to the organism's ability to survive in its environment and pass the survivability test of natural selection.
Hence the enormous variety of adapted species, as well as the unmeasurable number of variations that did not survive the test of natural selection.

Information and its input from an external environment--if, say, we decide the brain and all its neurophysical extensions throughout the body are separate, in some logical sense, from the rest of the body--is, for that brain, quite a limited set. I think that means we are conscious of the external world because we don't get the chance to input a whole lot of information from it. Our senses and our sense of a flow of time are only as good as evolution "decided". Or, evolution doesn't and hasn't given us any greater perceptive sense (information input capacity or bandwidth) than we needed. Evolution is parsimonious (yes, that's one of those big words).
Humans seem to be an anomaly and possess intelligence and conscious awareness far beyond natural necessity due to a major beneficial mutation (fusion of 2 chromosomes into 1 larger chromosome). It is what allowed us to migrate, invade, and conquer most of the solid land. It may also be our demise! We have become too successful and are now on a par with other invasive species that kill their host. Or, evolution doesn't and hasn't given us any greater perceptive sense (information input capacity or bandwidth) than we needed. Evolution is parsimonious (yes, that's one of those big words). True, almost all animals exceed humans in some type and form of sensory ability. But the human brain seems to be the most complex in the animal world and capable of deep analytical powers over and above adaptive necessities. One of the excellent qualities of the human brain is the ability to recognize the mathematical essence of natural phenomena and how this mathematical essence can be symbolized and used to artificially imitate natural phenomena. Many other animals have a sense of quantity and/or quality (more (good) here and less (bad) there). Even bees can decide which patch has more flowering plants than another and communicate that information to the rest of the hive. Many predatory animals use triangulation to calculate distance and trajectory. The variety of survival skills is endless in scope and subtleties. Humans have almost all of them and where we don't we make instruments that inform us where our natural senses fail. Our survival skill is in tool making and imitating natural processes themselves. Humans are the living gods of this planet. Last edited: These living gods: why do they enjoy listening to music? What's your theory? Please try to use physical thoughts in your answer (not so subtle dig at James R) Please stop trolling, arfa. These living gods: why do they enjoy listening to music? 
Because we can reproduce and imitate the natural affinity and symmetry of self-ordering wave functions and natural harmonics with instruments tuned to wave-lengths audible to the human senses. Personally, I played music for a living for 7 years . What's your theory? Please try to use physical thoughts in your answer Harmonics. It has been proven that harmonics can have beneficial or destructive influence on physical objects. Positive harmonics can bring balance and symmetry (order) from chaos. Disharmony can be causal to symmetry breaking and the disordering into chaos. When wave functions have symmetry reality is ordered and in balance. When wave functions are asymmetrical we have disorder and imbalance. These states of order (comfort) and disorder (discomfort) are experienced by all physical objects. In conscious orgnisms these states translate in physical experiences of comfort (in harmony) or discomfort (out of sorts) with reality. Music is the purposeful ordering of harmonic soundwaves to elicit emotional experiences, not only in humans but in many other animals. Because we can reproduce and imitate the natural affinity and symmetry of self-ordering wave functions and natural harmonics with instruments tuned to wave-lengths audible to the human senses. Personally, I played music for a living for 7 years . Harmonics. It has been proven that harmonics can have beneficial or destructive influence on physical objects. Positive harmonics can bring balance and symmetry (order) from chaos. Disharmony can be causal to symmetry breaking and the disordering into chaos. When wave functions have symmetry reality is ordered and in balance. When wave functions are asymmetrical we have disorder and imbalance. These states of order (comfort) and disorder (discomfort) are experienced by all physical objects. In conscious orgnisms these states translate in physical experiences of comfort (in harmony) or discomfort (out of sorts) with reality. 
Music is the purposeful ordering of harmonic soundwaves to elicit emotional experiences, not only in humans but in many other animals. All fermions have an symmetric wave function, which is why the Pauli Exclusion Principle applies to them. That includes all the common particles of matter (proton, neutrons, electrons). Bosons, e.g. photons, have symmetric wave functions. This may describe the phenomena of symmetry and order more formally. A Semi-Harmonic Frequency Pattern Organizes Local and Non-Local States by Quantum Entanglement in both EPR-Studies and Life Systems Hans J. H. Geesink Dirk K. F. Meijer A novel biophysical principle: the GM-model was revealed, describing an algorithm for coherent and non-coherent electromagnetic (EM) frequencies that either sustain or deteriorate life conditions. The particular frequency bands could be mathematically positioned on a Pythagorean scale, based on information distribution according to ratios of 2:3 in 1:2. The particular scale exhibits a core pattern of twelve eigenfrequency functions with adjacent self-similar patterns, according to octave hierarchy. In view of the current interest in coherency and entanglement in quantum biology, in the present paper, we report on a meta-analysis of 60 papers in physics that deal with the influence of electromagnetic frequencies on the promotion of entangled states in, so called, EPR experiments. Einstein, Podolsky and Rosen originated the EPR-correlation thought experiment for quantum-entangled particles, in which particles are supposed to react as one body. The meta-analyses of the EPR-experiments learned that entanglement, achieved in the experiments is real, and applied frequencies are located at discrete coherent configurations. Strikingly, all analysed EPR-data of the independent studies fit precisely in the derived scale of coherent frequency data and turned out to be virtually congruent with the above mentioned semi-harmonic EM-scale for living organisms. 
"This implies that the same discrete coherent frequency pattern of EM quantum waves that determines local and non-local states is also applicable to biological order, and that quantum entanglement is a prerequisite for life. The study may indicate that the implicate order of the pilot-wave steering system, earlier postulated by David Bohm, is composed of discrete entangled EM wave modalities, related to a pervading zero-point energy information field."

"Music is the purposeful ordering of harmonic soundwaves to elicit emotional experiences, not only in humans but in many other animals."

And sound in music is classical waves. Humans aren't equipped to hear quantum probabilities, right?

"And sound in music is classical waves. Humans aren't equipped to hear quantum probabilities, right?"

Roger Penrose seems to think the brain is able to process quantum data. That's what Orch OR is all about. And there is evidence that supports that concept. It is proposed that microtubules are not only able to handle EM and EC data, they may be able to process qubits as well. Considering the astounding versatility of microtubules, I would not dismiss this notion out of hand. Don't forget that microtubules themselves are nano-scale bi-directional processors, and that is the domain where quantum becomes real, no?

"Don't forget that microtubules themselves are nano-scale bi-directional processors..."

What exactly do they process? A process involves operating on an input to produce an output. What is the input, output and processing of a microtubule?

"What exactly do they process? A process involves operating on an input to produce an output. What is the input, output and processing of a microtubule?"
IOW the autonomous pumping function of the heart is driven by microtubules!!! How do they do that? Perhaps it is the same as how a single celled paramecium swims without a neural network. Apparently, microtubules are able to function autonomously and that is astounding! Microtubules’ role in heart cell contraction revealed An organized network called the cytoskeleton helps cells maintain their shape and organization. Microtubules (MTs) are a major component of this cellular support structure. MTs can transmit mechanical signals and, like rods or struts, resist compression in contracting heart cells. How they perform these roles has been unclear. Microtubules play a role in a host of other critical life sustaining functions via the neural network. What Is the Autonomic Nervous System? The autonomic nervous system regulates a variety of body process that takes place without conscious effort. The autonomic system is the part of the peripheral nervous system that is responsible for regulating involuntary body functions, such as heartbeat, blood flow, breathing, and digestion. Note that the interior of every axon that connects all neural cells, consists of arrays of microtubules. Axons may contain as many as 100 bundles of microtubules It is estimated that the entire human body may contain: 27 varieties of MT and perhaps more than 100 trillion MT, all of them active in data processing of one kind or another. Detail showing microtubules at axon hillock and initial segment Last edited: Of the three types of protein fibers in the cytoskeleton, microfilaments are the narrowest. They have a diameter of about 7 nm and are made up of many linked monomers of a protein called actin, combined in a structure that resembles a double helix. Because they are made of actin monomers, microfilaments are also known as actin filaments. Actin filaments have directionality, meaning that they have two structurally different ends. Actin filaments have a number of important roles in the cell. 
For one, they serve as tracks for the movement of a motor protein called myosin, which can also form filaments. Because of its relationship to myosin, actin is involved in many cellular events requiring motion. Intermediate filaments Intermediate filaments are a type of cytoskeletal element made of multiple strands of fibrous proteins wound together. As their name suggests, intermediate filaments have an average diameter of 8 to 10 nm, in between that of microfilaments and microtubules (discussed below). Unlike actin filaments, which can grow and disassemble quickly, intermediate filaments are more permanent and play an essentially structural role in the cell. They are specialized to bear tension, and their jobs include maintaining the shape of the cell and anchoring the nucleus and other organelles in place. Despite the “micro” in their name, microtubules are the largest of the three types of cytoskeletal fibers, with a diameter of about 25 nm. A microtubule is made up of tubulin proteins arranged to form a hollow, straw-like tube, and each tubulin protein consists of two subunits, α-tubulin and β-tubulin. Microtubules, like actin filaments, are dynamic structures: they can grow and shrink quickly by the addition or removal of tubulin proteins. Also similar to actin filaments, microtubules have directionality, meaning that they have two ends that are structurally different from one another. In a cell, microtubules play an important structural role, helping the cell resist compression. Left: 3D model of a microtubule, showing that it is a hollow cylinder of proteins. Right: Cartoon diagram of a microtubule, showing that it is made of two different types of subunits (alpha and beta). The subunits form dimers, and the dimers are connected in a spiral pattern to form the hollow tube of the microtubule. Image credit: OpenStax Biology. In addition to providing structural support, microtubules play a variety of more specialized roles in a cell.
For instance, they provide tracks for motor proteins called kinesins and dyneins, which transport vesicles and other cargoes around the interior of the cell^4. During cell division, microtubules assemble into a structure called the spindle, which pulls the chromosomes apart. Flagella, cilia, and centrosomes Microtubules are also key components of three more specialized eukaryotic cell structures: flagella, cilia and centrosomes. You may remember that our friends the prokaryotes also have flagella, which they use to move. Don't get confused—the eukaryotic flagella we're about to discuss have pretty much the same role, but a very different structure. Flagella (singular, flagellum) are long, hair-like structures that extend from the cell surface and are used to move an entire cell, such as a sperm. If a cell has any flagella, it usually has one or just a few. Motile cilia (singular, cilium) are similar, but are shorter and usually appear in large numbers on the cell surface. When cells with motile cilia form tissues, the beating helps move materials across the surface of the tissue. For example, the cilia of cells in your upper respiratory system help move dust and particles out towards your nostrils. Despite their difference in length and number, flagella and motile cilia share a common structural pattern. In most flagella and motile cilia, there are 9 pairs of microtubules arranged in a circle, along with an additional two microtubules in the center of the ring. This arrangement is called a 9 + 2 array. You can see the 9 + 2 array in the electron microscopy image at left, which shows two flagella in cross-section. Upper: Transmission electron micrograph of flagella in cross-section, showing the 9+2 microtubule array organization.
Lower: Cartoon diagram of a motile cilium, showing the singlet microtubules in the center, the outer doublet microtubules arranged in a circle around the singlet microtubules, and the dyneins attached to the doublet microtubules. The whole structure is surrounded by plasma membrane. At the base of the cilium lies a basal body, which is also made up of microtubules. _Image credits: upper panel, "The cytoskeleton: Figure 5," by OpenStax College, Biology (CC BY 3.0). Modification of work by Dartmouth Electron Microscope Facility, Dartmouth College; scale-bar data from Matt Russell. Lower panel, modification of "Eukaryotic cilium diagram," by Mariana Ruiz Villareal (public domain)._ In flagella and motile cilia, motor proteins called dyneins move along the microtubules, generating a force that causes the flagellum or cilium to beat. The structural connections between the microtubule pairs and the coordination of dynein movement allow the activity of the motors to produce a pattern of regular beating^{5,6}. You may notice another feature in the diagram above: the cilium or flagellum has a basal body located at its base. The basal body is made of microtubules and plays a key role in assembly of the cilium or flagellum. Once the structure has been assembled, it also regulates which proteins can enter or exit^7. The basal body is actually just a modified centriole^7. A centriole is a cylinder of nine triplets of microtubules, held together by supporting proteins. Centrioles are best known for their role in centrosomes, structures that act as microtubule organizing centers in animal cells. A centrosome consists of two centrioles oriented at right angles to each other, surrounded by a mass of "pericentriolar material," which provides anchoring sites for microtubules^8. Image of a centrosome.
The centrosome contains two centrioles positioned at right angles to each other. Image credit: modification of "Centriole," by Kelvinsong (CC BY 3.0) The centrosome is duplicated before a cell divides, and the paired centrosomes seem to play a role in organizing the microtubules that separate chromosomes during cell division. However, the exact function of the centrioles in this process still isn’t clear. Cells with their centrosome removed can still divide, and plant cells, which lack centrosomes, divide just fine.
Lesson 6 Using Equations to Solve Problems 6.1: Number Talk: Quotients with Decimal Points (5 minutes) The purpose of this Number Talk is to elicit strategies and understandings students have for determining how the size of a quotient changes when the decimal point in the divisor or dividend moves. These understandings help students develop fluency and will be helpful later in this lesson when students will need to be able to check the reasonableness of their answers. Although the first question contains four expressions, it may not be possible to share answers for all of them. Arrange students in groups of 2. Display the first question. Give students 2 minutes of quiet think time and ask them to give a signal when they have an answer and reasoning to support it. Follow with a whole-class discussion. Display the second question and give students 1 minute of quiet think time. Representation: Internalize Comprehension. To support working memory, provide students with sticky notes or mini whiteboards. Supports accessibility for: Memory; Organization Student Facing Without calculating, order the quotients of these expressions from least to greatest. \(42.6 \div 0.07\) \(42.6 \div 70\) \(42.6 \div 0.7\) \(426 \div 70\) Place the decimal point in the appropriate location in the quotient: \(42.6 \div 7 = 608571\) Use this answer to find the quotient of one of the previous expressions. Activity Synthesis Ask students to share where they placed the decimal point in the second question and their reasoning. After students share, ask the class if they agree or disagree. Ask selected students, who chose different problems to solve, to share quotients for the problems in the first question. Record and display their responses for all to see.
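The place-value reasoning the Number Talk targets can be spot-checked numerically. This is a quick sketch for the teacher (not part of the student materials): all four quotients share the digit sequence 6.08571..., shifted by powers of ten.

```python
# Moving the decimal point in the divisor (or dividend) scales the quotient
# by a power of 10, so all four quotients have the same digit sequence.
quotients = {
    "42.6 / 0.07": 42.6 / 0.07,  # divisor shrunk 100x -> quotient 100x larger
    "42.6 / 70":   42.6 / 70,    # divisor grown 10x -> quotient 10x smaller
    "42.6 / 0.7":  42.6 / 0.7,
    "426 / 70":    426 / 70,
}

# Order the expressions from least to greatest quotient.
ordered = sorted(quotients, key=quotients.get)
print(ordered)
```

Sorting confirms the intended order: 42.6 ÷ 70 is least, then 426 ÷ 70, then 42.6 ÷ 0.7, and 42.6 ÷ 0.07 is greatest, and 42.6 ÷ 7 ≈ 6.08571 fixes where the decimal point goes in each.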
To involve more students in the conversation, consider • “Who can restate ___’s reasoning in a different way?” • “Did anyone have the same strategy but would explain it differently?” • “Did anyone solve the problem in a different way?” • “Does anyone want to add on to _____’s strategy?” • “Do you agree or disagree? Why?” Emphasize student reasoning based in place value that involves looking at the relationship between the dividend and divisor to determine the size of the quotient. Speaking: MLR8 Discussion Supports. Display sentence frames to support students when they explain their strategy. For example, "First, I _____ because . . ." or "I noticed _____ so I . . . ." Some students may benefit from the opportunity to rehearse what they will say with a partner before they share with the whole class. Design Principle(s): Optimize output (for explanation) 6.2: Concert Ticket Sales (15 minutes) This activity requires students to work with larger numbers, which is intended to encourage students to use an equation and notice the efficiencies of doing so. It also emphasizes the interpretation of the constant of proportionality in the context. In this case, the constant represents the cost of a single ticket, and makes it easy to identify which singer would make more money for similar ticket sales in a concert series. Note that asking students to give the revenues for different ticket sales encourages looking for and expressing regularity in repeated reasoning (MP8). The last set of questions asks students to interpret the constant of proportionality as represented in an equation in terms of the context (MP2). Monitor for students who solve the problems using the following strategies and invite them to share during the whole-class discussion. • writing many calculations, without any organization • creating a table to organize their results • writing an equation to encapsulate repeated reasoning Provide access to calculators.
Consider using the names of actual performers to make the task more interesting to students. Representation: Internalize Comprehension. Activate or supply background knowledge. Allow students to use calculators to ensure inclusive participation in the activity. Supports accessibility for: Memory; Conceptual processing Student Facing A performer expects to sell 5,000 tickets for an upcoming concert. They want to make a total of $311,000 in sales from these tickets. 1. Assuming that all tickets have the same price, what is the price for one ticket? 2. How much will they make if they sell 7,000 tickets? 3. How much will they make if they sell 10,000 tickets? 50,000? 120,000? a million? \(x\) tickets? 4. If they make $404,300, how many tickets have they sold? 5. How many tickets will they have to sell to make $5,000,000? Activity Synthesis Select student responses to be shared with the whole class in discussion. Sequence their explanations from less efficient and organized to more efficient and organized. Discuss how the solutions are the same and different, and the advantages and disadvantages of each method. An important part of this discussion is correspondences and connections between different approaches. Speaking, Listening: MLR8 Discussion Supports. Use this routine to support whole-class discussion. For each explanation that is shared, ask students to restate what they heard using precise mathematical language. Consider providing students time to restate what they heard to a partner before selecting one or two students to share with the class. Ask the original speaker if their peer was accurately able to restate their thinking. Call students' attention to any words or phrases that helped to clarify the original statement. This provides more students with an opportunity to produce language as they interpret the reasoning of others. 
Design Principle(s): Support sense-making 6.3: Recycling (15 minutes) This activity is intended to further develop students’ ability to write equations to represent proportional relationships. It involves work with decimals and asks for equations that represent proportional relationships of different pairs of quantities, which increases the challenge of the task. Students may solve the first two problems in different ways. Monitor for different solution approaches such as: using computations, using tables, finding the constant of proportionality, and writing equations. Arrange students in groups of 2. Provide access to calculators. Give 5 minutes of quiet work time followed by sharing work with a partner. Representation: Internalize Comprehension. Represent the same information through different modalities by using tables. If students are unsure where to begin, suggest that they draw a table to help organize the information provided. Supports accessibility for: Conceptual processing; Visual-spatial processing Reading: MLR6 Three Reads. Use this routine to support reading comprehension, without solving, for students. In the first read, students read the problem with the goal of comprehending the situation (e.g., The situation involves weight of cans and the amount of money made from recycling). In the second read, ask students to look for quantities without focusing on specific values. Listen for, and amplify, the quantities that vary in relation to each other in this situation: number of aluminum cans; total weight of aluminum cans, in kilograms; money earned, in dollars. In the third read, ask students to brainstorm possible strategies to calculate the weight of aluminum in one can and the amount of money earned from one can. This helps students connect the language in the word problem and the reasoning needed to solve the problem, keeping the intended level of cognitive demand in the task.
Design Principle(s): Support sense-making Student Facing Aluminum cans can be recycled instead of being thrown in the garbage. The weight of 10 aluminum cans is 0.16 kilograms. The aluminum in 10 cans that are recycled has a value of $0.14. 1. If a family threw away 2.4 kg of aluminum in a month, how many cans did they throw away? Explain or show your reasoning. 2. What would be the recycled value of those same cans? Explain or show your reasoning. 3. Write an equation to represent the number of cans \(c\) given their weight \(w\). 4. Write an equation to represent the recycled value \(r\) of \(c\) cans. 5. Write an equation to represent the recycled value \(r\) of \(w\) kilograms of aluminum. Student Facing Are you ready for more? The EPA estimated that in 2013, the average amount of garbage produced in the United States was 4.4 pounds per person per day. At that rate, how long would it take your family to produce a ton of garbage? (A ton is 2,000 pounds.) Anticipated Misconceptions If students have trouble getting started, encourage them to create representations of the relationships, like a diagram or a table. If they are still stuck, suggest that they first find the weight and dollar value of 1 can. Activity Synthesis Select students to share their methods: using computations, using tables, finding the constant of proportionality, writing equations. If students did not use equations to solve the first two problems, ask them how they can use the equations they found later in the activity to answer the first two questions. If time permits, highlight connections between the equations generated, illustrated by the sequence of equations below. \(\displaystyle r = 0.014c\) \(\displaystyle r = 0.014(62.5w)\) \(\displaystyle r = 0.875w\) Lesson Synthesis The activities in this lesson removed some scaffolds used in previous lessons (e.g., presenting a table) and included features (e.g., large numbers) intended to motivate use of equations. 
Remind students that throughout this lesson, they considered problem situations and created organized ways to get answers. Whether the numbers in the problem are whole numbers, large numbers, or decimals, if there is a proportional relationship between two quantities, their relationship can be represented by an equation of the form \(y = kx\). The situations provided demonstrate the efficiency of equations for certain types of problems. Finding how many tickets should be sold in order to earn \$5 million and finding the relationship between number of cans and weight and recycled value are more elegantly and efficiently handled by equations than by calculations or tables. • What were some helpful ways we organized information? • What were some equations we found in this lesson? • In each equation, what did the letters represent? What did the number mean? \(y=62.2x\), \(c = 62.5w\), \(r = 0.014c\), \(r = 0.875w\) 6.4: Cool-down - Granola (5 minutes) Student Facing Remember that if there is a proportional relationship between two quantities, their relationship can be represented by an equation of the form \(y = k x\). Sometimes writing an equation is the easiest way to solve a problem. For example, we know that Denali, the highest mountain peak in North America, is 20,310 feet above sea level. How many miles is that? There are 5,280 feet in 1 mile. This relationship can be represented by the equation \(\displaystyle f=5,\!280 m\) where \(f\) represents a distance measured in feet and \(m\) represents the same distance measured in miles. Since we know Denali is 20,310 feet above sea level, we can write \(\displaystyle 20,\!310=5,\!280 m\) So \(m = \frac{20,310}{5,280}\), which is approximately 3.85 miles.
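The constants of proportionality and worked answers in this lesson can be verified with a short script. This is a teacher-side check, not part of the student materials; variable names mirror the letters used in the lesson.

```python
# Concert tickets: $311,000 from 5,000 tickets gives the constant k = 62.2
ticket_price = 311_000 / 5_000            # y = 62.2 x
tickets_for_5M = 5_000_000 / ticket_price  # about 80,386 tickets

# Recycling: 10 cans weigh 0.16 kg and are worth $0.14
cans_per_kg = 10 / 0.16                    # c = 62.5 w
value_per_can = 0.14 / 10                  # r = 0.014 c
value_per_kg = value_per_can * cans_per_kg  # r = 0.875 w

cans_thrown_away = cans_per_kg * 2.4       # 150 cans in 2.4 kg
recycled_value = value_per_kg * 2.4        # $2.10

# Cool-down style conversion: Denali's height, f = 5,280 m
denali_miles = 20_310 / 5_280              # about 3.85 miles
```

Each line is one instance of \(y = kx\): once the constant is computed, every question in the activity reduces to a single multiplication or division.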
Hello, my school maths are very rusty and I think this is a good opportunity to take advantage of this community :D I have two points (a line) and a rectangle, and I would like to know how to calculate whether the line intersects the rectangle. My first approach had so many "if" statements that the compiler sent me a link to this site. Thanks for...
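One standard way to avoid the pile of "if" statements is the Liang-Barsky clipping test: treat the segment parametrically and clip it against each side of the axis-aligned rectangle. A sketch in Python (the function and parameter names are illustrative, not from the original question):

```python
def segment_intersects_rect(p0, p1, xmin, ymin, xmax, ymax):
    """True if the segment p0-p1 touches the axis-aligned rectangle."""
    x0, y0 = p0
    x1, y1 = p1
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0  # parameter interval of the segment still inside

    # One (p, q) pair per rectangle edge: left, right, bottom, top.
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return False  # segment parallel to this edge and outside it
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)  # entering the half-plane
            else:
                t1 = min(t1, t)  # leaving the half-plane
            if t0 > t1:
                return False     # interval became empty: no intersection
    return True
```

Each edge either tightens the interval [t0, t1] or rules the segment out; the segment intersects the rectangle exactly when the interval stays non-empty. This also handles a segment lying entirely inside the rectangle, with no special cases.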
Konstantin Borovkov Elements Of Stochastic Modelling (Third Edition) (Hardback) Title: Elements Of Stochastic Modelling (Third Edition). Publisher: World Scientific Publishing Co Pte Ltd. Genre: Science Nature & Math. Description: This is a thoroughly revised and expanded third edition of a successful university textbook that provides a broad introduction to key areas of stochastic modelling. The previous edition was developed from lecture notes for two one-semester courses for third-year science and actuarial students at the University of Melbourne. This book reviews the basics of probability theory and presents topics on Markov chains, Markov decision processes, jump Markov processes, elements of queueing theory, basic renewal theory, elements of time series and simulation. It also features elements of stochastic calculus and introductory mathematical finance. It is in this aspect that the present, third edition differs from the second one: the included background material and argument sketches have been extended, made more graphical and informative. The whole text was reviewed and streamlined wherever possible to make the book more attractive and useful for students. Where appropriate, the book includes references to more specialised texts on respective topics that contain both complete proofs and more advanced material.
Perfect 2-error-correcting codes over arbitrary finite alphabets. Conjecture Does there exist a nontrivial perfect 2-error-correcting code over any finite alphabet, other than the ternary Golay code? Very few perfect codes are known to exist over any alphabet. The trivial examples are codes with 1 or 2 codewords, or q-ary (n, M, d) codes with all of the q^n vectors being codewords. Other than this, we have an infinite family of perfect 1-error-correcting Hamming codes, and two unique Golay codes: the binary one, which corrects 3 errors, and the ternary one, which corrects 2 errors. Recent research activity has discovered a large number of previously unknown perfect 1-error-correcting codes which are not isomorphic to the Hamming codes. It is well known (see Van Lint) that the answer is negative for codes over alphabets of size equal to a power of a prime number. Further results (see Hong, Best) establish that, beyond these known examples, there are no perfect t-error-correcting codes for any t > 2 over any finite alphabet, which establishes the fact that 2 is the largest number of errors which any new perfect code could possibly correct. Lloyd's theorem plays a key role in ruling out t > 2, but provides less information than needed in the t = 2 case. Establishing the result in the negative would likely require an ad-hoc combinatorial argument, while establishing it in the positive could be done by any clever construction.
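The sphere-packing condition behind "perfect" can be checked directly: a q-ary code of length n with M codewords correcting t errors is perfect exactly when the Hamming balls of radius t around the codewords tile the whole space. A small illustrative sketch (not from the article):

```python
from math import comb

def ball_size(n, q, t):
    """Number of length-n q-ary words within Hamming distance t of a fixed word."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

def is_perfect(n, q, M, t):
    """Sphere-packing equality: the balls around the M codewords fill q**n exactly."""
    return M * ball_size(n, q, t) == q ** n

# Ternary Golay code: length 11, 3**6 codewords, corrects t = 2 errors.
print(is_perfect(11, 3, 3 ** 6, 2))  # True: 729 * 243 == 3**11
# A binary Hamming code: length 7, 2**4 codewords, corrects t = 1 error.
print(is_perfect(7, 2, 2 ** 4, 1))   # True: 16 * 8 == 2**7
```

Note that satisfying the equality is necessary but not sufficient: parameter sets can pass the arithmetic test while no actual code exists with those parameters, which is exactly the gap the conjecture asks about for t = 2.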
Math Quirks:...Anybody (else) experienced strange Date coincidences? I very often have a peek at the clock and catch 12:12, it must be a Final Destiny thing. edit: "relevant results"? You do realize that the expansion of pi contains every possible subsequence? Quote: "I very often have a peek at the clock and catch 12:12, it must be a Final Destiny thing." I always look at the clock when it says 11:34. It's really annoying. And no, it's not OCD. It always happens when i haven't looked at a clock in hours. Are you sure? Take pi to any arbitrary length. Change one digit. A subsequence that does not occur in that expansion. The problem with infinity. Pi may have an infinitely long string of digits, but there will be a bigger infinity of substrings that are not in that string. Quote: "Take pi to any arbitrary length. Change one digit. A subsequence that does not occur in that expansion." One thing you have to be careful of is arguments that prove more than intended, in other words overgeneral arguments. Since you invited me to, I take "arbitrary length" to be 1. I. Pi to 1 digit is 3. II. "Change one digit": okay, now I have 4. III. "A subsequence that does not occur in that expansion": while it's true that the first digit of pi does not contain 4, it is irrelevant to the topic because 4 is contained someplace else within the expansion, which is all that I claimed (in fact, infinitely many places). And the same is true for substrings of any length, not just 1. The property of containing every possible sequence is called normality: the uniform distribution of digits. It has been known for a long time that nearly all real numbers are normal, but proving that specific numbers are normal is difficult. The known decimal digits of pi are very uniform, and those digits are known to around 10 trillion places, but uniformity of all digits is unproven.
We can, however, easily construct normal irrational numbers, such as Champernowne's constant. From my early math studies (ca. 1965), I remember an example where the reader could group the digits in the decimal expansion of pi into discrete groups of 5 digits and compare the probabilities to poker hands played with a corresponding deck of only 10 cards. The number theoreticians just speak of uniform distributions of digits (not "randomness") because it has nothing to do with random probabilities. Every digit of pi is calculable from simple formulas: it has zero entropy. Fun exercise: can you find irrational numbers that do not have a uniform distribution of their decimals? Interestingly, binary (and quaternary, octal, hexadecimal, etc.) digits of Pi can be calculated directly. Yup: $$F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}, \quad \text{where} \quad \varphi = \frac{1 + \sqrt{5}}{2} \quad \text{and} \quad \psi = \frac{1 - \sqrt{5}}{2}$$ and \(\varphi \approx 1.61803\) is the golden ratio, and \(\psi = 1 - \varphi = \frac{-1}{\varphi} \approx -0.61803\) is its conjugate. Funky!
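The "calculated directly" remark refers to digit-extraction formulas such as Bailey-Borwein-Plouffe (BBP), which produce the n-th hexadecimal digit of pi without computing any of the earlier digits. A rough Python sketch (floating-point precision limits it to modest n; since each hex digit is four bits, the same routine covers binary, quaternary, and octal too):

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (n = 0 gives the first), via BBP."""
    def S(j):
        # Fractional part of sum_{k>=0} 16**(n-k) / (8k + j). Three-argument
        # pow does modular exponentiation so the head terms never overflow.
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while True:  # convergent tail with negative powers of 16
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s

    x = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return int(16 * x)

# pi = 3.243F6A88... in hexadecimal
print([pi_hex_digit(i) for i in range(4)])  # → [2, 4, 3, 15]
```

The formula itself is exact; only this float-based evaluation of it is approximate, which is why large n needs more careful arithmetic.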
University Physics Volume 1 5 Newton’s Laws of Motion Learning Objectives By the end of the section, you will be able to: • State Newton’s third law of motion • Identify the action and reaction forces in different situations • Apply Newton’s third law to define systems and solve problems of motion We have thus far considered force as a push or a pull; however, if you think about it, you realize that no push or pull ever occurs by itself. When you push on a wall, the wall pushes back on you. This brings us to Newton’s third law. Newton’s Third Law of Motion Whenever one body exerts a force on a second body, the first body experiences a force that is equal in magnitude and opposite in direction to the force that it exerts. Mathematically, if a body A exerts a force [latex]\mathbf{\overset{\to }{F}}[/latex] on body B, then B simultaneously exerts a force [latex]\text{−}\mathbf{\overset{\to }{F}}[/latex] on A, or in vector equation form, [latex]{\mathbf{\overset{\to }{F}}}_{\text{AB}}=\text{−}{\mathbf{\overset{\to }{F}}}_{\text{BA}}.[/latex] Newton’s third law represents a certain symmetry in nature: Forces always occur in pairs, and one body cannot exert a force on another without experiencing a force itself. We sometimes refer to this law loosely as “action-reaction,” where the force exerted is the action and the force experienced as a consequence is the reaction. Newton’s third law has practical uses in analyzing the origin of forces and understanding which forces are external to a system. We can readily see Newton’s third law at work by taking a look at how people move about. Consider a swimmer pushing off the side of a pool (Figure). She pushes against the wall of the pool with her feet and accelerates in the direction opposite that of her push. The wall has exerted an equal and opposite force on the swimmer. You might think that two equal and opposite forces would cancel, but they do not because they act on different systems. 
In this case, there are two systems that we could investigate: the swimmer and the wall. If we select the swimmer to be the system of interest, as in the figure, then [latex]{F}_{\text{wall on feet}}[/latex] is an external force on this system and affects its motion. The swimmer moves in the direction of this force. In contrast, the force [latex]{F}_{\text{feet on wall}}[/latex] acts on the wall, not on our system of interest. Thus, [latex]{F}_{\text{feet on wall}}[/latex] does not directly affect the motion of the system and does not cancel [latex]{F}_{\text{wall on feet}}.[/latex] The swimmer pushes in the direction opposite that in which she wishes to move. The reaction to her push is thus in the desired direction. In a free-body diagram, such as the one shown in Figure, we never include both forces of an action-reaction pair; in this case, we only use [latex]{F}_{\text{wall on feet}}[/latex], not [latex]{F}_{\text {feet on wall}}[/latex]. Other examples of Newton’s third law are easy to find: • As a professor paces in front of a whiteboard, he exerts a force backward on the floor. The floor exerts a reaction force forward on the professor that causes him to accelerate forward. • A car accelerates forward because the ground pushes forward on the drive wheels, in reaction to the drive wheels pushing backward on the ground. You can see evidence of the wheels pushing backward when tires spin on a gravel road and throw the rocks backward. • Rockets move forward by expelling gas backward at high velocity. This means the rocket exerts a large backward force on the gas in the rocket combustion chamber; therefore, the gas exerts a large reaction force forward on the rocket. This reaction force, which pushes a body forward in response to a backward force, is called thrust. It is a common misconception that rockets propel themselves by pushing on the ground or on the air behind them. 
They actually work better in a vacuum, where they can more readily expel the exhaust gases. • Helicopters create lift by pushing air down, thereby experiencing an upward reaction force. • Birds and airplanes also fly by exerting force on the air in a direction opposite that of whatever force they need. For example, the wings of a bird force air downward and backward to get lift and move forward. • An octopus propels itself in the water by ejecting water through a funnel from its body, similar to a jet ski. • When a person pulls down on a vertical rope, the rope pulls up on the person (Figure). There are two important features of Newton’s third law. First, the forces exerted (the action and reaction) are always equal in magnitude but opposite in direction. Second, these forces are acting on different bodies or systems: A’s force acts on B and B’s force acts on A. In other words, the two forces are distinct forces that do not act on the same body. Thus, they do not cancel each other. For the situation shown in Figure, the third law indicates that because the chair is pushing upward on the boy with force [latex]\mathbf{\overset{\to }{C}},[/latex] he is pushing downward on the chair with force [latex]\text{−}\mathbf{\overset{\to }{C}}.[/latex] Similarly, he is pushing downward with forces [latex]\text{−}\mathbf{\overset{\to }{F}}[/latex] and [latex]\text{−}\mathbf{\overset {\to }{T}}[/latex] on the floor and table, respectively. Finally, since Earth pulls downward on the boy with force [latex]\mathbf{\overset{\to }{w}},[/latex] he pulls upward on Earth with force [latex]\text{−}\mathbf{\overset{\to }{w}}[/latex]. If that student were to angrily pound the table in frustration, he would quickly learn the painful lesson (avoidable by studying Newton’s laws) that the table hits back just as hard. A person who is walking or running applies Newton’s third law instinctively. For example, the runner in Figure pushes backward on the ground so that it pushes him forward. 
Forces on a Stationary Object The package in Figure is sitting on a scale. The forces on the package are [latex]\mathbf{\overset{\to }{S}},[/latex] which is due to the scale, and [latex]\text{−}\mathbf{\overset{\to }{w}},[/latex] which is due to Earth’s gravitational field. The reaction forces that the package exerts are [latex]\text{−}\mathbf{\overset{\to }{S}}[/latex] on the scale and [latex]\mathbf{\overset{\to }{w}}[/latex] on Earth. Because the package is not accelerating, application of the second law yields [latex]\mathbf{\overset{\to }{S}}-\mathbf{\overset{\to }{w}}=m\mathbf{\overset{\to }{a}}=\mathbf{\overset{\to }{0}},[/latex] [latex]\mathbf{\overset{\to }{S}}=\mathbf{\overset{\to }{w}}.[/latex] Thus, the scale reading gives the magnitude of the package’s weight. However, the scale does not measure the weight of the package; it measures the force [latex]\text{−}\mathbf{\overset{\to }{S}}[/latex] on its surface. If the system is accelerating, [latex]\mathbf{\overset{\to }{S}}[/latex] and [latex]\text{−}\mathbf{\overset{\to }{w}}[/latex] would not be equal, as explained in Applications of Newton’s Laws. Getting Up to Speed: Choosing the Correct System A physics professor pushes a cart of demonstration equipment to a lecture hall (Figure). Her mass is 65.0 kg, the cart’s mass is 12.0 kg, and the equipment’s mass is 7.0 kg. Calculate the acceleration produced when the professor exerts a backward force of 150 N on the floor. All forces opposing the motion, such as friction on the cart’s wheels and air resistance, total 24.0 N. Since they accelerate as a unit, we define the system to be the professor, cart, and equipment. This is System 1 in Figure. The professor pushes backward with a force [latex]{F}_{\text{foot}}[/latex] of 150 N. According to Newton’s third law, the floor exerts a forward reaction force [latex]{F}_{\text{floor}}[/latex] of 150 N on System 1. Because all motion is horizontal, we can assume there is no net force in the vertical direction.
Therefore, the problem is one-dimensional along the horizontal direction. As noted, friction f opposes the motion and is thus in the opposite direction of [latex]{F}_{\text{floor}}.[/latex] We do not include the forces [latex]{F}_{\text{prof}}[/latex] or [latex]{F}_{\text{cart}}[/latex] because these are internal forces, and we do not include [latex]{F}_{\text{foot}}[/latex] because it acts on the floor, not on the system. There are no other significant forces acting on System 1. If the net external force can be found from all this information, we can use Newton’s second law to find the acceleration as requested. See the free-body diagram in the figure. Newton’s second law is given by

[latex]a=\frac{{F}_{\text{net}}}{m}.[/latex]

The net external force on System 1 is deduced from Figure and the preceding discussion to be

[latex]{F}_{\text{net}}={F}_{\text{floor}}-f=150\,\text{N}-24.0\,\text{N}=126\,\text{N}.[/latex]

The mass of System 1 is

[latex]m=(65.0+12.0+7.0)\,\text{kg}=84\,\text{kg}.[/latex]

These values of [latex]{F}_{\text{net}}[/latex] and m produce an acceleration of

[latex]a=\frac{{F}_{\text{net}}}{m}=\frac{126\,\text{N}}{84\,\text{kg}}=1.5\,{\text{m/s}}^{2}.[/latex]

None of the forces between components of System 1, such as between the professor’s hands and the cart, contribute to the net external force because they are internal to System 1. Another way to look at this is that forces between components of a system cancel because they are equal in magnitude and opposite in direction. For example, the force exerted by the professor on the cart results in an equal and opposite force back on the professor. In this case, both forces act on the same system and therefore cancel. Thus, internal forces (between components of a system) cancel. Choosing System 1 was crucial to solving this problem.

Force on the Cart: Choosing a New System

Calculate the force the professor exerts on the cart in Figure, using data from the previous example if needed. If we define the system of interest as the cart plus the equipment (System 2 in Figure), then the net external force on System 2 is the force the professor exerts on the cart minus friction. The force she exerts on the cart, [latex]{F}_{\text{prof}}[/latex], is an external force acting on System 2.
[latex]{F}_{\text{prof}}[/latex] was internal to System 1, but it is external to System 2 and thus enters Newton’s second law for this system. Newton’s second law can be used to find [latex]{F}_{\text{prof}}.[/latex] We start with

[latex]a=\frac{{F}_{\text{net}}}{m}.[/latex]

The magnitude of the net external force on System 2 is

[latex]{F}_{\text{net}}={F}_{\text{prof}}-f.[/latex]

We solve for [latex]{F}_{\text{prof}}[/latex], the desired quantity:

[latex]{F}_{\text{prof}}={F}_{\text{net}}+f.[/latex]

The value of f is given, so we must calculate [latex]{F}_{\text{net}}.[/latex] That can be done because both the acceleration and the mass of System 2 are known. Using Newton’s second law, we see

[latex]{F}_{\text{net}}=ma,[/latex]

where the mass of System 2 is 19.0 kg ([latex]m=12.0\,\text{kg}+7.0\,\text{kg}[/latex]) and its acceleration was found to be [latex]a=1.5\,{\text{m/s}}^{2}[/latex] in the previous example. Thus,

[latex]{F}_{\text{net}}=(19.0\,\text{kg})(1.5\,{\text{m/s}}^{2})=28.5\,\text{N}.[/latex]

Now we can find the desired force:

[latex]{F}_{\text{prof}}={F}_{\text{net}}+f=28.5\,\text{N}+24.0\,\text{N}=52.5\,\text{N}.[/latex]

This force is significantly less than the 150-N force the professor exerted backward on the floor. Not all of that 150-N force is transmitted to the cart; some of it accelerates the professor. The choice of a system is an important analytical step both in solving problems and in thoroughly understanding the physics of the situation (which are not necessarily the same things).

Check Your Understanding

Two blocks are at rest and in contact on a frictionless surface as shown below, with [latex]{m}_{1}=2.0\,\text{kg},[/latex] [latex]{m}_{2}=6.0\,\text{kg},[/latex] and applied force 24 N. (a) Find the acceleration of the system of blocks. (b) Suppose that the blocks are later separated. What force will give the second block, with the mass of 6.0 kg, the same acceleration as the system of blocks?

Show Solution

a. [latex]3.0\,\text{m}\text{/}{\text{s}}^{2}[/latex]; b. 18 N

View this video to watch examples of action and reaction. View this video to watch examples of Newton’s laws and internal and external forces.

• Newton’s third law of motion represents a basic symmetry in nature, with an experienced force equal in magnitude and opposite in direction to an exerted force.
• Two equal and opposite forces do not cancel because they act on different systems.
• Action-reaction pairs include a swimmer pushing off a wall, helicopters creating lift by pushing air down, and an octopus propelling itself forward by ejecting water from its body. Rockets, airplanes, and cars are pushed forward by a thrust reaction force.
• Choosing a system is an important analytical step in understanding the physics of a problem and solving it.

Conceptual Questions

Identify the action and reaction forces in the following situations: (a) Earth attracts the Moon, (b) a boy kicks a football, (c) a rocket accelerates upward, (d) a car accelerates forward, (e) a high jumper leaps, and (f) a bullet is shot from a gun.

Show Solution

a. action: Earth pulls on the Moon, reaction: Moon pulls on Earth; b. action: foot applies force to ball, reaction: ball applies force to foot; c. action: rocket pushes on gas, reaction: gas pushes back on rocket; d. action: car tires push backward on road, reaction: road pushes forward on tires; e. action: jumper pushes down on ground, reaction: ground pushes up on jumper; f. action: gun pushes forward on bullet, reaction: bullet pushes backward on gun.

Suppose that you are holding a cup of coffee in your hand. Identify all forces on the cup and the reaction to each force.

(a) Why does an ordinary rifle recoil (kick backward) when fired? (b) The barrel of a recoilless rifle is open at both ends. Describe how Newton’s third law applies when one is fired. (c) Can you safely stand close behind one when it is fired?

Show Solution

a. The rifle (the shell supported by the rifle) exerts a force to expel the bullet; the reaction to this force is the force that the bullet exerts on the rifle (shell) in the opposite direction. b. In a recoilless rifle, the shell is not secured in the rifle; hence, as the bullet is pushed to move forward, the shell is pushed to eject from the opposite end of the barrel. c.
It is not safe to stand behind a recoilless rifle.

(a) What net external force is exerted on a 1100.0-kg artillery shell fired from a battleship if the shell is accelerated at [latex]2.40\times {10}^{4}\,{\text{m/s}}^{2}?[/latex] (b) What is the magnitude of the force exerted on the ship by the artillery shell, and why?

Show Solution

a. [latex]{F}_{\text{net}}=2.64\times {10}^{7}\,\text{N;}[/latex] b. The force exerted on the ship is also [latex]2.64\times {10}^{7}\,\text{N}[/latex] because it is opposite the shell’s direction of motion (Newton’s third law).

A brave but inadequate rugby player is being pushed backward by an opposing player who is exerting a force of 800.0 N on him. The mass of the losing player plus equipment is 90.0 kg, and he is accelerating backward at [latex]1.20\,{\text{m/s}}^{2}[/latex]. (a) What is the force of friction between the losing player’s feet and the grass? (b) What force does the winning player exert on the ground to move forward if his mass plus equipment is 110.0 kg?

A history book is lying on top of a physics book on a desk, as shown below; a free-body diagram is also shown. The history and physics books weigh 14 N and 18 N, respectively. Identify each force on each book with a double subscript notation (for instance, the contact force of the history book pressing against the physics book can be described as [latex]{\mathbf{\overset{\to }{F}}}_{\text{HP}}[/latex]), and determine the value of each of these forces, explaining the process used.

Show Solution

Because the weight of the history book is the force exerted by Earth on the history book, we represent it as [latex]{\mathbf{\overset{\to }{F}}}_{\text{EH}}=-14\mathbf{\hat{j}}\,\text{N}\text{.}[/latex] Aside from this, the history book interacts only with the physics book.
Because the acceleration of the history book is zero, the net force on it is zero by Newton’s second law: [latex]{\mathbf{\overset{\to }{F}}}_{\text{PH}}+{\mathbf{\overset{\to }{F}}}_{\text{EH}}=\mathbf{\overset{\to }{0}},[/latex] where [latex]{\mathbf{\overset{\to }{F}}}_{\text{PH}}[/latex] is the force exerted by the physics book on the history book. Thus, [latex]{\mathbf{\overset{\to }{F}}}_{\text{PH}}=\text{−}{\mathbf{\overset{\to }{F}}}_{\text{EH}}=\text{−}(-14\mathbf{\hat{j}})\,\text{N}=14\mathbf{\hat{j}}\,\text{N}\text{.}[/latex] We find that the physics book exerts an upward force of magnitude 14 N on the history book. The physics book has three forces exerted on it: [latex]{\mathbf{\overset{\to }{F}}}_{\text{EP}}[/latex] due to Earth, [latex]{\mathbf{\overset{\to }{F}}}_{\text{HP}}[/latex] due to the history book, and [latex]{\mathbf{\overset{\to }{F}}}_{\text{DP}}[/latex] due to the desktop. Since the physics book weighs 18 N, [latex]{\mathbf{\overset{\to }{F}}}_{\text{EP}}=-18\mathbf{\hat{j}}\,\text{N}\text{.}[/latex] From Newton’s third law, [latex]{\mathbf{\overset{\to }{F}}}_{\text{HP}}=\text{−}{\mathbf{\overset{\to }{F}}}_{\text{PH}},[/latex] so [latex]{\mathbf{\overset{\to }{F}}}_{\text{HP}}=-14\mathbf{\hat{j}}\,\text{N}\text{.}[/latex] Newton’s second law applied to the physics book gives [latex]\sum \mathbf{\overset{\to }{F}}=\mathbf{\overset{\to }{0}},[/latex] or [latex]{\mathbf{\overset{\to }{F}}}_{\text{DP}}+{\mathbf{\overset{\to }{F}}}_{\text{EP}}+{\mathbf{\overset{\to }{F}}}_{\text{HP}}=\mathbf{\overset{\to }{0}},[/latex] so [latex]{\mathbf{\overset{\to }{F}}}_{\text{DP}}=\text{−}(-18\mathbf{\hat{j}})-(-14\mathbf{\hat{j}})=32\mathbf{\hat{j}}\,\text{N}\text{.}[/latex] The desk exerts an upward force of 32 N on the physics book. To arrive at this solution, we apply Newton’s second law twice and Newton’s third law once.
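The arithmetic in this solution is simple enough to check in a few lines. Here is a plain-Python sketch (the variable names are ours, not from the text), taking the upward direction as positive:

```python
# Forces on the stacked books, in newtons; up (+j) is positive.
w_history = 14.0   # weight of the history book
w_physics = 18.0   # weight of the physics book

f_ph = w_history                 # physics book pushes up on the history book
f_hp = -f_ph                     # Newton's third law: history book pushes down on physics book
f_dp = w_physics + w_history     # desk must support the weight of both books

print(f_ph, f_hp, f_dp)  # 14.0 -14.0 32.0
```

Running this reproduces the values derived above: 14 N up on the history book, 14 N down on the physics book, and 32 N up from the desk.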
A truck collides with a car, and during the collision, the net force on each vehicle is essentially the force exerted by the other. Suppose the mass of the car is 550 kg, the mass of the truck is 2200 kg, and the magnitude of the truck’s acceleration is [latex]10\,{\text{m/s}}^{2}[/latex]. Find the magnitude of the car’s acceleration.

Glossary

Newton’s third law of motion
whenever one body exerts a force on a second body, the first body experiences a force that is equal in magnitude and opposite in direction to the force that it exerts

thrust
reaction force that pushes a body forward in response to a backward force
RF Range Demystified

Once the initial euphoria of the first FPV flight has passed, one of the first questions that enters an FPV newcomer’s mind is ‘how far can I fly‘. In the FPV world, the answer to that question boils down to the performance of two RF links, the uplink (control), and the downlink (video). This document will attempt to demystify RF link range calculations, boiling them down into some easily understandable blocks.

Where to start… let’s start here, with the major blocks to consider in an RF link:

1. Transmitter power, generally specified in mW (milliWatts), but more useful in dBm^(1)
2. Transmitter antenna gain
3. Free space loss
4. Receiver antenna gain
5. Receiver sensitivity

NOTE: The knowledgeable reader will have realized that I left out connector and cable losses, but these are generally not a big factor in the FPV world, so we will quietly ignore them.

In Simple Terms… We transmit some RF energy, using an antenna with a certain amount of gain, lose a bunch of energy in free space, pick it up with an antenna with some more gain, and then feed it to a receiver. The receiver must be sensitive enough to identify the transmitted signal above the ‘noise’.

Let’s start with some simple numbers to explain how it works. We’ll dive into more detail a little later. So I have a transmitter emitting 500mW (at least that is what the datasheet says). The math gets a lot easier if we work in terms of ‘dBm’, so let’s convert 500mW into dBm (any of the online tools will make this easy). We arrive at 27dBm (or 27dB relative to a milliWatt (that’s the ‘m’)). Now, for the transmitter and receiver antenna gains, let’s start with a simple omni-directional dipole, which comes in at about 2dBi of gain. Next thing is to figure out this ‘free space loss’ thing. This is how much your transmitted RF signal will get attenuated for a given distance (at a given frequency). Let’s start with 1 km as a nice round number.
Doing a little math (or cheating with the calculator below) we arrive at a free space loss of 108 dB (at 5.8GHz). So now the math gets easy. The signal received at the receiver’s input connector is the following:

27 + 2 – 108 + 2 = -77 (dBm)

(27dBm transmitted, plus 2dB for the Tx antenna gain, minus 108dB for free space loss, plus 2dB for the Rx antenna gain). Now, let’s take a typical receiver sensitivity figure of -94dBm. The received signal in this case will be 17dB above the receiver’s sensitivity, a fairly healthy margin (known as the Link Margin).

Confused? Amazed at how simple it is? Let’s work through another simple example, something common in the FPV world. We have a 5.8GHz 600mW (28dBm) transmitter, with a SpiroNET Omni antenna attached (approx. 2dBi). At the receiver we have a SpiroNET Patch for 5.8GHz, with a gain of 13dBi, hooked to a Uno5800 Receiver. We want a link that will take us 5km out. So, start with the hard part: look up the free space loss at 5.8GHz for 5km, which is 122dB. Now, the primary-school math:

28 + 2 – 122 + 13 = -79 (dBm)

So for a 5km link using this equipment, we have a signal at the receiver’s SMA connector of -79dBm, or 15dB above the sensitivity level of the receiver. This is a fairly reasonable link margin, and shouldn’t cause too many surprises.

Online Calculators

A bit of a work-in-progress, but we have put together some Online Calculators to simplify the math.

The Other Factors…

So the description above explains one of the major factors to consider in the estimation of RF range. There are some others, which can seriously ‘skew’ the numbers:

1. Antenna Radiation Pattern

Antennas are never perfect; even an ‘omni’ directional antenna is not truly omni-directional (more like a doughnut). Flying directly above the pilot, with an omni-directional antenna on the plane, and on the ground, is never a fun experience. Patch antennas are directional, some of them highly directional, with radiation patterns like a flashlight beam.
Keep the model in the beam, and all is well; drop out of the sides of the beam, and the fun factor decreases rapidly.

2. Multi-pathing

When two (or more) RF signals arrive at a receiver ‘out of phase’, the resulting received signal is always attenuated. RF signals propagate like waves in the ocean. Take two waves which arrive half a wavelength apart, and the result is a calm ocean. In RF terms, take a direct ‘line of sight’ wave, and one reflected from the ground, and the effect is the same.

Multipathing is reduced significantly when using Circular Polarized antennas. After a single reflection (or equally an odd number of reflections), circularly polarized waves reverse their polarization, and arrive at the receiver antenna with a polarization opposite to that of the antenna.

3. Other Stuff

In addition to these factors there are a whole slew of other effects which can skew the range calculation, including things like the amount of water floating around in the air. For these, at least in the FPV world, it is best to keep a ‘link margin’ of around 10-12dB.

^(1) Power in dB (a logarithmic unit) is as simple as addition and subtraction; for example, a 27dBm transmitter, followed by a 6dB attenuator, results in 21dBm (27 – 6).
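The link-budget arithmetic from the worked examples above is easy to automate. Here is a minimal Python sketch (the function names are ours, not from any library); it uses the standard free-space-loss formula with distance in km and frequency in MHz:

```python
import math

def mw_to_dbm(p_mw):
    """Convert transmitter power in milliwatts to dBm."""
    return 10 * math.log10(p_mw)

def fspl_db(distance_km, freq_mhz):
    """Free space loss in dB for a distance in km at a frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

def link_margin_db(tx_mw, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz, rx_sens_dbm):
    """Received power at the receiver connector minus receiver sensitivity."""
    rx_power = (mw_to_dbm(tx_mw) + tx_gain_dbi
                - fspl_db(distance_km, freq_mhz) + rx_gain_dbi)
    return rx_power - rx_sens_dbm

# First example: 500mW, two 2dBi omnis, 1km at 5.8GHz, -94dBm receiver
print(round(link_margin_db(500, 2, 2, 1.0, 5800, -94), 1))   # ~17dB margin

# Second example: 600mW, 2dBi omni to 13dBi patch, 5km at 5.8GHz
print(round(link_margin_db(600, 2, 13, 5.0, 5800, -94), 1))  # ~15dB margin
```

Both results match the hand calculations above to within rounding of the dB figures.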
Few-Shot Learning & Meta-Learning | Tutorial - RBC Borealis

From traditional Machine Learning models like linear regression to Deep Learning models like transformers, machine learning techniques usually require thousands to millions of examples to learn a new concept. However, in some cases, it may only take a human a couple of examples to gain the same level of knowledge. Is it possible to develop ML algorithms that achieve similar behaviour? In this tutorial, you will learn about the progress researchers have made in the few-shot and meta-learning area and how it helps close the gap mentioned above.

What is Few-shot Learning?

Few-shot learning refers to the ability to learn new concepts by training machine learning models with only a few examples. It can be very helpful in cases where:

• One wants to avoid data hunger due to the high resource and computation cost of training a model with a large amount of data.
• It’s nearly impossible to have access to large amounts of labeled data due to the nature of the problem or privacy concerns.
• The solution will be used in scenarios where the model needs to adapt quickly to new tasks or domains before a large amount of labeled data is collected.

Few-shot Learning Problem Setup

Few-shot learning is usually studied using N-way-K-shot classification, which aims to discriminate between N classes with K examples of each. A typical problem size might be to discriminate between N = 10 classes with only K = 5 samples of each to train from.

How to achieve Few-shot Learning?

We cannot train a classifier using conventional methods here; any modern classification algorithm will depend on far more parameters than there are training examples and will generalize poorly. If the data is insufficient to constrain the problem, then one possible solution is to gain experience from other similar problems. To this end, most approaches to achieve few-shot learning fall in the meta-learning area.
What is Meta-learning? Meta-learning, or learning to learn, performs the learning through multiple training episodes. During this process, it learns how to improve the learning algorithm itself. Hence, it has demonstrated better performance at generalization, especially when a limited amount of data is given. The meta-learning framework for few-shot learning In the classical learning framework, we learn how to classify from training data and evaluate the results using test data. In the meta-learning framework, we learn how to learn to classify given a set of training tasks and evaluate using a set of test tasks (figure 1); In other words, we use one set of classification problems to help solve other unrelated sets. Figure 1. Meta-learning framework. An algorithm is trained using a series of training tasks. Here, each task is a 3-way-2-shot classification problem because each training task contains a support set with three different classes and two examples of each. During training the cost function assesses performance on the query set for each task in turn given the respective support set. At test time, we use a completely different set of tasks, and evaluate performance on the query set, given the support set. Note that there is no overlap between the classes in the two training tasks {cat, lamb, pig}, {dog, shark, lion} and between those in the test task {duck, dolphin, hen}, so the algorithm must learn to classify image classes in general rather than any particular set. Here, each task mimics the few-shot scenario, so for N-way-K-shot classification, each task includes $N$ classes with $K$ examples of each. These are known as the support set for the task and are used for learning how to solve this task. In addition, there are further examples of the same classes, known as a query set, which are used to evaluate the performance on this task. Each task can be completely non-overlapping; we may never see the classes from one task in any of the others. 
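To make the episode structure concrete, here is a minimal Python sketch of an N-way-K-shot task sampler (the function and toy data are illustrative, not from any framework):

```python
import random

def sample_episode(data_by_class, n_way, k_shot, q_queries):
    """Sample one N-way-K-shot task: a support set and a query set.

    data_by_class maps each class label to a list of examples. Returns
    (support, query) as lists of (example, episode_label) pairs, where
    episode_label is an index in [0, n_way) local to this task.
    """
    classes = random.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(data_by_class[cls], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# A 3-way-2-shot episode with one query example per class, on toy data
data = {c: [f"{c}_{i}" for i in range(10)]
        for c in ["cat", "lamb", "pig", "dog", "shark", "lion"]}
support, query = sample_episode(data, n_way=3, k_shot=2, q_queries=1)
print(len(support), len(query))  # 6 3
```

Each call draws a fresh task; meta-training loops over such episodes, computing the loss on the query set given knowledge of the support set.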
The idea is that the system repeatedly sees instances (tasks) during training that match the structure of the final few-shot task but contain different classes. At each step of meta-learning, we update the model parameters based on a randomly selected training task. The loss function is determined by the classification performance on the query set of this training task, based on knowledge gained from its support set. Since the network is presented with a different task at each time step, it must learn how to discriminate data classes in general rather than a particular subset of classes. To evaluate few-shot performance, we use a set of test tasks. Each contains only unseen classes that were not in any of the training tasks. For each, we measure performance on the query set based on knowledge of their support set.

Approaches to meta-learning

Approaches to meta-learning are diverse, and there is no consensus on the best approach. However, there are three distinct families, each of which exploits a different type of prior knowledge:

Prior knowledge about similarity: We learn embeddings in training tasks that tend to separate different classes even when they are unseen.

Prior knowledge about learning: We use prior knowledge to constrain the learning algorithm to choose parameters that generalize well from few examples.

Prior knowledge of data: We exploit prior knowledge about the structure and variability of the data, and this allows us to learn viable models from few examples.

An overview of these methods can be seen in figure 2. In this review, we will consider each family of methods in turn. Figure 2. Few-shot learning methods can be divided into three families. The first family learns prior knowledge about the similarity and dissimilarity of classes (in the form of embeddings) from training tasks. The second family exploits prior knowledge about how to learn that it has garnered from training tasks.
The third family exploits prior knowledge about the data and its likely variation that it has learned from training tasks.

Prior knowledge of similarity

Figure 3. Pairwise comparators. a) Siamese networks take two examples $\mathbf{x}_{a}$ and $\mathbf{x}_{b}$ and return the probability $Pr(y_{a}=y_{b})$ that they are the same class. They do this by passing each example through an identical network (hence Siamese) and then using the pairwise difference between the embeddings as the basis of the decision. b) Triplet networks take two examples of the same class $\mathbf{x}_{a}$ and $\mathbf{x}_{+}$ and one of a different class $\mathbf{x}_{-}$ and pass all three through identical networks to create three embeddings. The triplet loss encourages the embeddings of examples from the same class to be closer together than those from different classes. c) In the test phase for triplet networks, we pass two examples $\mathbf{x}_{a}$ and $\mathbf{x}_{b}$ through the same network and judge whether they come from the same class or not based on the distance.

This family of algorithms aims to learn compact representations (embeddings) in which the data vector is mostly unaffected by intra-class variations but retains information about class membership. Early work focused on pairwise comparators, which aim to judge whether two data examples are from the same or different classes, even though the system may not have seen these classes before. Subsequent research focused on multi-class comparators, which allow the assignment of new examples to one of several classes.

Pairwise comparators

Pairwise comparators take two examples and classify them as either belonging to the same or different classes. This differs from the standard N-way-K-shot configuration and does not obviously map onto the above description of meta-learning, although as we will see later, there is, in fact, a close relationship.

Siamese networks

Koch et al.
(2015) trained a model that outputs the probability $Pr(y_a=y_{b})$ that two data examples $\mathbf{x}_{a}$ and $\mathbf{x}_{b}$ belong to the same class (figure 3a). The two examples are passed through identical multi-layer neural networks (hence Siamese) to create two embeddings. The component-wise absolute distance between the embeddings is computed and passed to a subsequent comparison network that reduces this distance vector to a single number. This is passed through a sigmoidal output for classification as being the same or different, with a cross-entropy loss. During training, each pair of examples is randomly drawn from a super-set of training classes. Hence, the system learns to discriminate between classes in general rather than two classes in particular. In testing, completely different classes are used. Although this does not have the formal structure of the N-way-K-shot task, the spirit is similar.

Triplet networks

Triplet networks (Hoffer & Ailon 2015) consist of three identical networks that are trained by triplets $\{\mathbf{x}_{+},\mathbf{x}_{a},\mathbf{x}_{-}\}$ of the form (positive, anchor, negative). The positive and anchor samples are from the same class, whereas the negative sample is from a different class. The learning criterion is the triplet loss, which encourages the anchor to be closer to the positive example than it is to the negative example in the embedding space (figure 3b). Hence it is based on two pairwise comparisons. After training, the system can take two examples and establish whether they are from the same or different classes by thresholding the distance in the learned embedding space. This was employed in the context of face verification by Schroff et al. (2015). This line of work is part of a greater literature on learning distance metrics (see Suarez et al. 2018 for an overview).
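The triplet loss itself is only a line or two of code. A plain-Python sketch (using squared Euclidean distance; the helper names are ours):

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: zero once the positive example is closer
    to the anchor than the negative example by at least `margin`."""
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# Well-separated embeddings incur no loss...
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [3.0, 0.0]))  # 0.0
# ...while equidistant positive and negative embeddings pay the full margin
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [1.0, 0.0]))  # 1.0
```

Minimizing this loss over many triplets pulls same-class embeddings together and pushes different-class embeddings apart, which is exactly the property thresholded at test time.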
Multi-class comparators

Pairwise comparators can be adapted to the N-way-K-shot setting by assigning the class for an example in the query set based on its maximum similarity to one of the examples in the support set. However, multi-class comparators attempt to do the same thing in a more principled way; here the representation and final classification are learned in an end-to-end fashion. In this section, we’ll use the notation $\mathbf{x}_{nk}$ to denote the $k$th support example from the $n$th class in the N-Way-K-Shot classification task, and $y_{nk}$ to denote the corresponding label. For simplicity, we’ll assume there is a single query example $\hat{\mathbf{x}}$ and the goal is to predict the associated label $\hat{y}$.

Matching Networks

Matching networks (Vinyals et al. 2016) predict the one-hot encoded query-set label $\hat{\mathbf{y}}$ as a weighted sum of all of the one-hot encoded support-set labels $\{\mathbf{y}_{nk}\}_{n,k=1}^{N,K}$. The weight is based on a computed similarity $a[\mathbf{x}_{nk},\hat{\mathbf{x}}]$ between the query-set data $\hat{\mathbf{x}}$ and each training example $\{\mathbf{x}_{nk}\}_{n,k=1}^{N,K}$.

$$\hat{\mathbf{y}} = \sum_{n=1}^{N}\sum_{k=1}^{K} a[\mathbf{x}_{nk},\hat{\mathbf{x}}]\mathbf{y}_{nk} \tag{1.1}$$

where the similarities have been constrained to be positive and sum to one. To compute the similarity $a[\mathbf{x}_{nk},\hat{\mathbf{x}}]$, they pass each support example $\mathbf{x}_{nk}$ through a network $\mbox{ f}[\bullet]$ to produce an embedding and pass the query example $\hat{\mathbf{x}}$ through a different network $\mbox{ g}[\bullet]$ to produce a different embedding.
They then compute the cosine similarity between these embeddings (figure 5a)

$$d[\mathbf{x}_{nk}, \hat{\mathbf{x}}] = \frac{\mbox{ f}[\mathbf{x}_{nk}]^{T}\mbox{ g}[\hat{\mathbf{x}}]}{||\mbox{ f}[\mathbf{x}_{nk}]||\cdot||\mbox{ g}[\hat{\mathbf{x}}]||}, \tag{1.2}$$

and normalize using a softmax function:

$$a[\mathbf{x}_{nk},\hat{\mathbf{x}}] = \frac{\exp[d[\mathbf{x}_{nk},\hat{\mathbf{x}}]]}{\sum_{n=1}^{N}\sum_{k=1}^{K}\exp[d[\mathbf{x}_{nk},\hat{\mathbf{x}}]]}. \tag{1.3}$$

to produce positive similarities that sum to one. This system can be trained end to end for the N-way-K-shot learning task. At each learning iteration, the system is presented with a training task; the predicted labels are computed for the query set (the calculation is based on the support set) and the loss function is the cross entropy of the ground truth and predicted labels. Matching networks compute similarities between the embeddings of each support example and the query example. This has the disadvantage that the algorithm is not robust to data imbalance; if there are more support examples for some classes than others (i.e., we have departed from the N-way-K-shot scenario), the ones with more frequent training data may dominate.

Prototypical Networks

Prototypical networks (Snell et al. 2017) are robust to data imbalance by construction; they average the embeddings $\{\mathbf{z}_{nk}\}_{k=1}^{K}$ of the examples for class $n$ to compute their mean embedding or prototype $\mathbf{p}_{n}$. They then use the similarity between each prototype and the query embedding (figures 4 and 5b) as a basis for classification.

Figure 4. Prototypical networks. The support examples $\mathbf{x}_{nk}$ are all mapped to the embedding space to create embeddings $\mathbf{z}_{nk}$ (coloured circles). All of the embeddings for class $n$ are averaged to create a prototype $\mathbf{p}_{n}$.
To classify a query example $\hat{\mathbf{x}}$, we first compute its embedding $\hat{\mathbf{z}}$ and then base the decision on the relative distance to the prototypes.

The similarity is computed as a negative multiple of the Euclidean distance (so that larger distances now give smaller numbers). They pass these similarities to a softmax function to give a probability over classes. This model effectively learns a metric space where the average of a few examples of a class is a good representation of that class, and class membership can be assigned based on distance. They noted that (i) the choice of distance function is vital, as squared Euclidean distance outperformed cosine distance, (ii) having a higher number of classes in the support set helps to achieve better performance, and (iii) the system works best when the support size of each class is matched in the training and test tasks. Ren et al. (2018) extended this system to take advantage of additional unlabeled data, which might be from the test task classes or from other distractor classes. Oreshkin et al. (2018) extended this approach by learning a task-dependent metric on the feature space so that the distance metric changes from place to place in the embedding space.

Relation Networks

Matching networks and prototypical networks both focus on learning the embedding and comparing examples using a pre-defined metric (cosine and Euclidean distance, respectively). Relation networks (Sung et al. 2018) also learn a metric for comparison of the embeddings (figure 5c). Similarly to prototypical networks, the relation network averages the embeddings of each class in the support set together to form a single prototype. Each prototype is then concatenated with the query embedding and passed to a relation module. This is a learnable non-linear operator that produces a similarity score between 0 and 1, where 1 indicates that the query example belongs to this class prototype.
This approach is clean and elegant and can be trained end-to-end.

Holistic view of pairwise comparators and multi-class comparators

All of the pairwise and multi-class comparators are closely related to one another. Each learns an embedding space for data examples. In matching networks, there are different embeddings for support and query examples, but in the other models, they are the same. For prototypical networks and relation networks, multiple embeddings from the same class are averaged to form prototypes. Distances between support-set embeddings/prototypes and query-set embeddings are computed using either pre-determined distance functions such as Euclidean or cosine distance (triplet networks, matching networks, prototypical networks) or by learning a distance metric (Siamese networks and relation networks).

Figure 5. Multi-class comparators. a) Matching networks compute separate embeddings for support examples (here $\mathbf{x}_{11},\mathbf{x}_{12},\mathbf{x}_{21},\mathbf{x}_{22}$) and the query example $\hat{\mathbf{x}}$. Here $\mathbf{x}_{nk}$ is the $k$th example from the $n$th class. They compute the cosine similarity between each support embedding and the query embedding, and then use these similarities to choose the class. This has the disadvantage that if there are many more examples of one class than the others, the relatively abundant class may be chosen too frequently. b) Prototypical networks embed the query and support examples using the same network, but average together support embeddings to make prototypes for each class, so it doesn’t matter if the numbers are unbalanced. The Euclidean distance between query embeddings and prototypes is used to support classification. c) Relation networks replace this Euclidean distance with a learned non-linear distance metric.

The multi-class networks have the advantage that they can be trained end-to-end for the N-way-K-shot classification task.
This is not true for the pairwise comparators, which are trained to produce a similarity or distance between pairs of data examples (which could itself subsequently be used to support multi-class classification). Although it is not obvious how the pairwise comparators map to the meta-learning framework, it is possible to consider their data as consisting of minimal training and test tasks. For Siamese networks, each pair of examples is a training task consisting of one support example and one query example, where their classes may not necessarily match. For triplet networks, there are two support examples (from different classes) and one query example (from one of the classes).

Recent advancements in meta-learning using prior knowledge of similarity

Beyond the classical works above, many new algorithms have been proposed to improve few-shot learning based on prior knowledge of similarity. Below, we list a few of these works: TADAM: Task dependent adaptive metric for improved few-shot learning (Oreshkin et al. 2018) – Introduced learnable parameters for metric scaling to replace static similarity metrics like Euclidean distance and cosine similarity. It also added a task embedding network and auxiliary co-learning tasks on top of prototypical networks to improve learning performance. RelationNet2: Deep Comparison Columns for Few-Shot Learning (Zhang et al. 2018) – Improves on RelationNet by learning relations based on several levels of feature representation simultaneously, instead of just the final layer of embedding. Also, instead of learning discriminative features, it learns a distribution with parameterized Gaussian noise to help with model robustness. Cross attention network for few-shot classification (Hou et al.
2019) – Different from existing methods, after obtaining the embedding representations of the support set and query example independently, they introduced a Cross Attention Module (CAM) to model the semantic relevance between the two before sending them for downstream similarity-based classification. Self-supervised learning for few-shot image classification (Chen et al. 2019) – Argued that the embedding learned through supervised learning is the bottleneck of few-shot learning methods due to the small size of the support set. They propose to train a more generalized embedding network with self-supervised learning, which provides a more robust representation for downstream tasks by learning from the data itself.

Recent surveys comparing few-shot learning models

Comprehensive studies and performance comparisons of a range of few-shot learning models can be found in recent surveys: [1] Wang, Yaqing, et al. "Generalizing from a few examples: A survey on few-shot learning." ACM Computing Surveys (CSUR) 53.3 (2020): 1-34. [2] Parnami, Archit, and Minwoo Lee. "Learning from few examples: A summary of approaches to few-shot learning." arXiv preprint arXiv:2203.04291 (2022).

Key learnings and future reading

In part I of this tutorial, we have described few-shot learning and meta-learning and introduced a taxonomy of methods, including prior knowledge about similarity, prior knowledge about the algorithm, and prior knowledge about the data. We discussed pairwise comparators, including Siamese networks and triplet networks. We also introduced multi-class comparators, including matching networks, prototypical networks, and relation networks. Both types of comparators use a series of training tasks to learn prior knowledge about the similarity and dissimilarity of classes that can be exploited for future few-shot tasks. This knowledge takes the form of data embeddings that reduce within-class variance relative to between-class variance and hence make it easier to learn from just a few data points.
Interested in further expanding your knowledge of few-shot learning and meta-learning? In part II of this tutorial series, we dive deeper into few-shot learning and meta-learning. We’ll discuss methods that incorporate prior knowledge about how to learn models and that incorporate prior knowledge about the data itself. Work with us! Impressed by the work of the team? RBC Borealis is looking to hire for various roles across different teams. Visit our career page now to find the right role for you and join our team!
some technical steganography

• Subject: some technical steganography
• From: [email protected] (Eric Hughes)
• Date: Sun, 6 Mar 94 18:28:40 -0800
• In-Reply-To: Jim Miller's message of Sun, 6 Mar 94 18:12:27 -0600 <[email protected]>
• Sender: [email protected]

>How many different "notions of randomness"
>are there?

Notions of randomness fall into two basic categories, probabilistic and statistical. The dividing line between the two of them is whether you are doing inference forward or reverse. In both cases the randomness means evenly distributed. Probabilistic randomness is inference forward. One assumes a distribution of states before, the priors, and calculates the expected distribution of states after, the posteriors. Quantum mechanical randomness is probabilistic randomness, since quantum randomness is held to be inherent in nature, and from that predictions can be made about the future. The analysis of gambling strategies is probabilistic, since one assumes something random, like dice rolls or deck shuffles, and infers what the likely outcomes might be. Statistical randomness is inference backward. One takes an observed set of posteriors and tries to deduce whatever is available about the priors. Cryptographic randomness is of this nature, since one is presented with ciphertext and asked to figure out the plaintext. Two major questions about statistical randomness are decidability, "Can I see a pattern in it?", and compressibility, "Can I make a smaller representation of it?" Something is statistically random if one cannot answer questions about it more accurately than by guessing. There are various sorts of statistical randomness, depending on what analytical tools are available. If you allow any Turing machine, you get algorithmic complexity concepts like Kolmogorov-Chaitin randomness. There is randomness which is incompressibility to a particular coder.
There is randomness with respect to statistical measures; one can take the difference of an observed posterior distribution and a probabilistically calculated posterior distribution and apply standard statistical tests. How far is this distribution from expected, and what is the likelihood of this difference?

>I prefer random bit
>sequences. Or perhaps I should say - bit sequences with no apparent

Your clarification makes a difference. Randomness as lack of structure can be quantified by looking for conditional probabilities. E.g. P( x_0 = 1 | x_3 = 0 ) is the conditional probability that x_0 is 1 in the case that x_3 = 0. If this probability is not 1/2 exactly, then you have a correlation. Conditional probabilities in general get hairy fast, even when the predicates, i.e. the events, are limited to particular bits equalling zero or one, and the standard propositional connectives "and", "or", & "not". There are questions of independence whose resolution requires a detour into predicate logic. E.g. P( x = 0 | x = 1 ) = 0, clearly, because the two events are logically dependent. One of the ways of measuring these probabilities in the aggregate is with entropy measures. The entropy of a probability distribution is the expected value of the negative logarithm. If you can determine an entropy which is not maximal, then you've found a correlation, even if exploiting the correlation might not be obvious. This maximality must be exact, and not approximate. For example, in the example I gave with 16 zero bits prepended to a random message, the bit entropy deviates ever so slightly from maximal, but that indicates a correlation. The problem is that that entropy is a probabilistic entropy, not a statistical one. Had we measured the same entropy value, it would not have allowed us to conclude anything, if all we had was the entropy. We could have also just looked at the first few bits.
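A short script makes both points concrete: single-bit entropy can look maximal while a conditional probability far from 1/2 exposes the structure. This is my own sketch, not from the original message; the function names and the non-overlapping windowing scheme are illustrative assumptions:

```python
import math
from collections import Counter

def bit_entropy(bits, block=1):
    """Empirical Shannon entropy (bits per block) over non-overlapping blocks."""
    blocks = [tuple(bits[i:i + block])
              for i in range(0, len(bits) - block + 1, block)]
    n = len(blocks)
    counts = Counter(blocks)
    # Entropy = expected value of the negative log probability
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def conditional_prob(bits, i, j, vj, width):
    """Estimate P(x_i = 1 | x_j = vj) over non-overlapping windows of `width` bits."""
    windows = [bits[k:k + width] for k in range(0, len(bits) - width + 1, width)]
    matching = [w for w in windows if w[j] == vj]
    return sum(w[i] for w in matching) / len(matching) if matching else None
```

For the alternating sequence 0101..., the single-bit entropy is maximal (1 bit per bit), yet the two-bit block entropy is zero and P(x_1 = 1 | x_0 = 0) = 1, a perfect correlation that single-bit statistics alone would never reveal.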
Anyway, since entropies are expected values on probabilities, one can also have conditional entropies as well. The criterion for non-recognizability is that all conditional entropies are maximal. This, again, is a probabilistic notion, since the calculation of all conditional entropies for a particular message is an exponential time
Machine Learning – Dimensionality Reduction Cognitive Class Exam Quiz Answers

Module 1: Data Series Quiz Answers – Cognitive Class

Question 1: Which of the following techniques can be used to reduce the dimensions of the population?
• Exploratory Data Analysis
• Principal Component Analysis
• Exploratory Factor Analysis
• Cluster Analysis

Question 2: Cluster Analysis partitions the columns of the data, whereas principal component and exploratory factor analyses partition the rows of the data. True or false?

Question 3: Which of the following options are true? Select all that apply.
• PCA explains the total variance
• EFA explains the common variance
• EFA identifies measures that are sufficiently similar to each other to justify combination
• PCA captures latent constructs that are assumed to cause variance

Module 2: Data Refinement Quiz Answers – Cognitive Class

Question 1: Which of the following options is true?
• A matrix of correlations describes all possible pairwise relationships
• Eigenvalues are the principal components
• Correlation does not explain the covariation between two vectors
• Eigenvectors are a measure of total variance, as explained by the principal components

Question 2: PCA is a method to reduce your data to the fewest 'principal components' while maximizing the variance explained. True or false?

Question 3: Which of the following techniques was NOT covered in this lesson?
• Parallel analysis
• Percentage of Common Variance
• Scree Test
• Kaiser-Guttman Rule

Module 3: Exploring Data Quiz Answers – Cognitive Class

Question 1: EFA is commonly used in which of the following applications? Select all that apply.
• Customer satisfaction surveys
• Personality tests
• Performance evaluations
• Image analysis

Question 2: Which of the following options is an example of an Oblique Rotation?
• Regmax
• Varimax
• Softmax
• Promax

Question 3: An Orthogonal Rotation assumes that factors are correlated with each other. True or false?

Machine Learning – Dimensionality Reduction Final Exam Answers – Cognitive Class

Question 1: Why might you use cluster analysis as an analytic strategy?
• To identify higher-order dimensions
• To identify outliers
• To reduce the number of variables
• To segment the market
• None of the above

Question 2: Suppose you have 100,000 individuals in a dataset, and each individual varies along 60 dimensions. On average, the dimensions are correlated at r = .45. You want to group the variables together, so you decide to run principal component analysis. How many meaningful, higher-order components can you extract?
• 60
• 3
• 20
• 24
• The answer cannot be determined

Question 3: What technique should you use to identify the dimensions that hang together?
• Principal axis factoring
• Confirmatory factor analysis
• Exploratory factor analysis
• Two of the above
• None of the above

Question 4: What are loadings?
• Covariance between the two factors
• Correlations between each variable and its factor
• Correlations between each variable and its component
• Two of the above
• None of the above

Question 5: When would you use PCA over EFA?
• When you want to use an orthogonal rotation
• When you are interested in explaining the total variance in a variance-covariance matrix
• When you have too many variables
• When you are interested in a latent construct
• None of the above

Question 6: What is uniqueness?
• A measure of replicability of the factor
• The amount of variance not explained by the factor structure
• The amount of variance explained by the factor structure
• The amount of variance explained by the factor
• None of the above

Question 7: Suppose you are looking to extract the major dimensions of a parrot's personality. Which technique would you use?
• Maximum likelihood
• Principal component analysis
• Cluster analysis
• Factor analysis
• None of the above

Question 8: Suppose you have 60 variables in a dataset, and you know that 2 components explain the data very well. How many components can you extract?
• 45
• 5
• 60
• 2
• None of the above

Question 9: When would you use an orthogonal rotation?
• When correlations between the variables are large
• When you observe small correlations between the variables in the dataset
• When you think that the factors are uncorrelated
• All of the above
• None of the above

Question 10: When would you use confirmatory factor analysis?
• When you want to validate the factor solution
• When you want to explain the variance in the matrix accounting for the measurement error
• When you want to identify the factors
• Two of the above
• None of the above

Question 11: Which of the following is NOT a rule when deciding on the number of factors?
• Newman-Frank Test
• Percentage of common variance explained
• Scree test
• Kaiser-Guttman
• None of the above

Question 12: What is one assumption of factor analysis?
• A number of factors can be determined via the Scree test
• Factor analysis will extract only unique factors
• A latent variable causes the variance in observed variables
• There is no measurement error
• None of the above

Question 13: What is an eigenvector?
• The proportion of the variance explained in the matrix
• A higher-order dimension that subsumes all of the lower-order errors
• A higher-order dimension that subsumes similar lower-order dimensions
• A higher-order dimension that subsumes all lower-order dimensions
• None of the above

Question 14: What is a promax rotation?
• A rotation method that minimizes the square loadings on each factor
• A rotation method that maximizes the variance explained
• A rotation method that maximizes the square loadings on each factor
• A rotation method that minimizes the variance explained
• None of the above

Question 15: What is the cut-off point for the Common Variance Explained rule?
• 80% of variance explained
• 50% of variance explained
• 3 variables
• 1 unit
• None of the above

Question 16: Why would you try to reduce dimensions?
• Individuals need to be placed into groups
• Variables are highly-correlated
• Many variables are likely assessing the same thing
• Two of the above
• All of the above

Question 17: If you have 20 variables in a dataset, how many dimensions are there?
• At most 20
• At least 20
• As many as the number of factors you can extract
• Not enough information
• None of the above

Question 18: What term describes the amount of variance of each variable explained by the factor structure?
• Eigenvector
• Commonality
• Similarity
• Communality
• None of the above

Question 19: What package contains the necessary functions to perform PCA and EFA?
• ggplot2
• FA
• psych
• factAnalis
• None of the above

Question 20: What is the best method for identifying the number of factors to extract?
• Parallel Analysis
• Scree test
• Newman-Frank Test
• Percentage of common variance explained
• All of the above

Introduction to Machine Learning – Dimensionality Reduction

Dimensionality reduction is a technique in machine learning and statistics that involves reducing the number of input variables or features in a dataset.
The goal is to simplify the dataset while retaining its essential information. This can be particularly useful in scenarios where the original dataset has a large number of features, potentially leading to increased computational complexity, the curse of dimensionality, and overfitting. Two common methods for dimensionality reduction are Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE).

Principal Component Analysis (PCA):

1. Overview:
□ PCA is a linear technique that transforms the original features into a new set of uncorrelated features called principal components.
□ The first principal component explains the maximum variance in the data, followed by the second, and so on.

2. Steps:
□ Standardize the data (subtract the mean and divide by the standard deviation).
□ Compute the covariance matrix of the standardized data.
□ Calculate the eigenvectors and eigenvalues of the covariance matrix.
□ Sort the eigenvalues in descending order and choose the top k eigenvectors, forming the new feature space.
□ Project the original data into the new feature space.

3. Use Cases:
□ Dimensionality reduction for visualization.
□ Feature engineering to reduce the number of features while retaining most of the information.
□ Noise reduction.

t-Distributed Stochastic Neighbor Embedding (t-SNE):

1. Overview:
□ t-SNE is a non-linear technique for dimensionality reduction that focuses on preserving the pairwise similarities between data points.
□ It is particularly effective at revealing the local structure of the data.

2. Steps:
□ Define pairwise similarities between data points in the high-dimensional space.
□ Construct a probability distribution over pairs of high-dimensional points that reflects the pairwise similarities.
□ Construct a corresponding distribution in the low-dimensional space.
□ Minimize the divergence between the high-dimensional and low-dimensional probability distributions.

3.
Use Cases:
□ Visualization of high-dimensional data in two or three dimensions.
□ Clustering analysis to identify groups of similar data points.
□ Exploration of the local structure of the data.

Considerations for Dimensionality Reduction:

1. Loss of Information:
□ Dimensionality reduction involves a trade-off between simplifying the dataset and losing some information. It's important to assess the impact on model performance.

2. Choice of Method:
□ The choice between linear methods like PCA and non-linear methods like t-SNE depends on the nature of the data and the goals of dimensionality reduction.

3. Parameter Tuning:
□ Some methods, like t-SNE, have hyperparameters that need to be tuned. Experimentation and validation are essential for finding the optimal settings.

4. Data Scaling:
□ Scaling or standardizing the data is often crucial, especially for methods like PCA, which are sensitive to the scale of the features.

5. Application to Specific Problems:
□ Different dimensionality reduction techniques may be more suitable for specific types of problems. Understanding the characteristics of your data and the requirements of your task is essential.

In summary, dimensionality reduction is a valuable technique in machine learning for handling high-dimensional datasets. The choice of method depends on the nature of the data, the desired outcome, and computational considerations. It's important to experiment with different techniques and evaluate their impact on model performance in the context of your specific problem.
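The five PCA steps listed above (standardize, covariance, eigen-decomposition, sort, project) map directly onto a few lines of NumPy. A minimal sketch, where the function name and return values are my own choices:

```python
import numpy as np

def pca(X, k):
    """PCA via the covariance matrix.

    X: (n_samples, n_features) data matrix.
    Returns the data projected onto the top-k components and the
    explained-variance ratio of every component (descending order).
    """
    # 1. Standardize the data (zero mean, unit variance per feature)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # 2. Covariance matrix of the standardized data
    C = np.cov(Xs, rowvar=False)
    # 3. Eigenvectors and eigenvalues (eigh: C is symmetric)
    vals, vecs = np.linalg.eigh(C)
    # 4. Sort eigenvalues in descending order, keep the top-k eigenvectors
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # 5. Project the data into the new feature space
    return Xs @ vecs[:, :k], vals / vals.sum()
```

On a toy dataset of two nearly identical features, the first component explains almost all of the variance, as expected.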
Ball Mills Size And Rating Calculate Ball Mill Grinding Capacity The sizing of ball mills and ball milling circuits from laboratory grinding tests is largely a question of applying empirical equations or factors based on accumulated experience. Different manufacturers use different methods, and it is difficult to check the validity of the sizing estimates when estimates from different sources are widely divergent. It is especially difficult to teach mill ... Table 1:table Of Ball Mill Sizes And Horsepower … 4000 hp ball mill motors. The mill used for this comparison is a 44-meter diameter by 136 meter long ball mill with a 5000 hp drive motor it is designed for approximately 90 ston per hour this type two-compartment mill is a state- of-the-art shell supported cement finish mill... Ball Mill Design/Power Calculation The basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are; material to be ground, characteristics, Bond Work Index, bulk density, specific density, desired mill tonnage capacity DTPH, operating % solids or pulp density, feed size as F80 and maximum ‘chunk size’, product size as P80 and maximum and finally the type of circuit open/closed ... raymond ball mills capacity and size Ball mills give a controlled final grind and produce flotation feed of a uniform size. Ball mills tumble iron or steel balls with the ore. The balls are initially 5–10 cm diameter but gradually wear away as grinding of the ore proceeds. The feed to ball mills (dry basis) is. Get Price . small roller mills calculation of cement ball indonesia … size of roller cement mill - ansambel-uzmah.eu ... mineral processing ball mill feed size Ball Mill DesignPower Calculation. 
The basic parameters used in ball mill design power calculations rod mill or any tumbling mill sizing are material to be ground characteristics Bond Work Index bulk density specific density desired mill tonnage capacity DTPH operating solids or pulp density feed size as F80 and maximum ‘chunk size’ product size as P80 and maximum and finally the type of ... The Largest Size Of A Ball Mill - appartementhaus … Ball Mill Can Grinidng The Largest Mineral Size. Ball mill an overview sciencedirect topics a ball mill is grinder equipment used in the pharmacy to reduce the particle size of in general ball mills can be operated either wet or dry and are capable of today the largest ball mill in operation is 853 m diameter and 1341 m long with live chat Ball Mill: Operating principles, components, Uses ... A ball mill also known as pebble mill or tumbling mill is a milling machine that consists of a hallow cylinder containing balls; mounted on a metallic frame such that it can be rotated along its longitudinal axis. The balls which could be of different diameter occupy 30 – 50 % of the mill volume and its size depends on the feed and mill size. The large balls tend to break down the coarse ... Types Of Ball Mill Manufacturer With Power Rating crushing ball mill rated power. jaw crusher and ball mill for stone stone size and power rating of jaw crusher www ball mill shanghai Zhongbo Machinery is professional manufacturer in mining machine such as stone crusherball millJaw CrusherImpact CrusherCone Crusher Grinding MillStone Crusher MachineSand making Get price. Get Price Ball mill - Wikipedia Planetary ball mills are smaller than common ball mills and mainly used in laboratories for grinding sample material down to very small sizes. A planetary ball mill consists of at least one grinding jar which is arranged eccentrically on a so-called sun wheel. The direction of movement of the sun wheel is opposite to that of the grinding jars (ratio: 1:−2 or 1:−1). 
The grinding balls in the grinding jars are … Mill (grinding) - Wikipedia A VSI mill throws rock or ore particles against a wear plate by slinging them from a spinning center that rotates on a vertical shaft. This type of mill uses the same principle as a VSI crusher.. Tower mill. Tower mills, often called vertical mills, stirred mills or regrind mills, are a more efficient means of grinding material at smaller particle sizes, and can be used after ball mills in a machine components of wet ball mill machine machine components of wet ball mill machine. A ball mill also known as pebble mill or tumbling mill is a milling machine that consists of a hallow cylinder containing balls mounted on a metallic frame such that it can be rotated along its longitudinal axis The balls which could be of different diameter occupy 30 50 of the mill volume and its size depends on the feed and mill size Optimization of mill performance by using Ball mills are usually the largest consumers of energy within a mineral concentrator. Comminution is responsible for 50% of the total mineral processing cost. In today’s global markets, expanding mining groups are trying to optimize mill performances. Since comminution is concerned with liberating valuable minerals for recovery in the separation process, it is crucial to run the mills at the nano size material by ball milling nano size material by ball milling,In our research we use the highenergy ball milling technique to synthesize various nanometer powders with an average particle size down to several nm including nanosized a Fe 2 O 3 based solid solutions mixed with varied mole percentages of SnO 2 ZrO 2 and TiO 2 separately for ethanol gas sensing application stabilized ZrO 2 based and TiO 2 based solid ... small discharge size ball mill for mineral processing Ball mill - Wikipedia. 
A ball mill is a type of grinder used to grind, blend and sometimes for mixing of materials for use in mineral dressing processes, paints, pyrotechnics, ceramics, and selective laser sintering.It works on the principle of impact and attrition: size reduction is done by impact as the balls drop from near the top of the shell. size nassetti ball mills - grill-restaurant-zagreb.de Size reduction with Planetary Ball Mills. PLANETARY BALL MILLS 5 Planetary Ball Mills PM 100, PM 200 and PM 400 RETSCH Planetary Ball Mills are used wherever the highest degree of fine-ness is required. Apart from the clas-sical mixing and size reduction pro-cesses, the mills also meet all the. Get Price . Ball mills - liming. With more than 100 years of experience in ball mill technology Radial Ball Bearings - Life and Load Ratings | AST … Dynamic load ratings are determined by bearing geometry, number and size of balls, bearing pitch diameter, and ring and ball material. This load rating is used in conjunction with the actual applied radial load to calculate bearing fatigue life. The static load rating relates to limiting loads applied to non-rotating bearings. The static load rating depends on the maximum contact stress ... Modeling of power consumption of ball mill Comparison of the stirred ball mill has been made with a conventional ball mill at the same energy input per ton of material and at three pulp densities. The comparison was made at optimum ... Bond Ball Mill Index Test | JKTech A Bond Ball Mill Work Index may also be used in the simulation and optimisation of existing mill(s) and the associated grinding circuit(s). Sample Requirements: A minimum of 8 kg of material crushed to nominally minus 10 mm is preferred. JKTech would stage crush the sample to minus 3.35 mm, as required for the Bond Ball Mill Work Index test ... 
Charts & Calculators - Destiny Tool As a result, a small cusp of material, called a scallop, will remain between these cuts on any surrounding walls or on the machined surface if a ball end mill is used. The size of the step-over distance and the tool diameter will determine the scallop height between each step. ball mill in size reduction - grill-restaurant … ball mill for size reduction - Blogger Cucina. Mar 25, 2016· BALL MILL Principle: The ball mill works on the impact between the rapidly moving ball and the powder material, both enclosed in a hollow cylinder. Thus, in the ball mill, impact or attrition or both are responsible for the size reduction. Fig: Ball mill. Get Price . Ball Mill - saVRee. Closed circuits return a certain amount of the
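The claim above that the step-over distance and the ball-end tool diameter determine the scallop height can be made precise with circle geometry: on a flat surface, h = r - sqrt(r^2 - (s/2)^2) for tool radius r and step-over s. A small sketch (this is the standard flat-surface approximation, not taken from the page above; names are illustrative):

```python
import math

def scallop_height(tool_diameter, stepover):
    """Scallop height left between adjacent ball-end mill passes on a flat surface.

    h = r - sqrt(r^2 - (s/2)^2), where r is the tool radius and s the step-over.
    """
    r = tool_diameter / 2.0
    if not 0 <= stepover < tool_diameter:
        raise ValueError("step-over must be non-negative and smaller than the tool diameter")
    return r - math.sqrt(r * r - (stepover / 2.0) ** 2)
```

For example, a 10 mm ball-nose cutter with a 2 mm step-over leaves a scallop of roughly 0.10 mm.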
Y-Intercept - Meaning, Examples | Y-Intercept Formula

Y-Intercept - Meaning, Examples

As a student, you are always trying to keep up in class to avoid getting overwhelmed by new topics. As a parent, you are constantly researching how to help your kids succeed in school. Keeping up is especially important in math, because its concepts continually build on one another: if you don't grasp a particular topic, it may hurt you in later lessons. Understanding y-intercepts is a perfect example of a topic that you will revisit in mathematics repeatedly. Let's look at the basics of the y-intercept and walk through some tips and tricks for working with it. Whether you're a mathematical whiz or just starting out, this introduction will give you the information and tools you need to dive into linear equations. Let's dive right in!

What Is the Y-intercept?

To fully understand the y-intercept, let's picture a coordinate plane. In a coordinate plane, two straight lines intersect at a point known as the origin. This point is where the x-axis and y-axis meet. At the origin, the y value is 0 and the x value is 0, so its coordinates are written as (0,0). The x-axis is the horizontal line running across the plane, and the y-axis is the vertical line running up and down. Each axis is numbered so that we can locate points along it. The values on the x-axis grow as we move to the right of the origin, and the values on the y-axis grow as we move up from the origin. Now that we have reviewed the coordinate plane, we can define the y-intercept.

Meaning of the Y-Intercept

The y-intercept can be thought of as the starting point of a linear equation. It is the y-coordinate at which the graph of the equation crosses the y-axis. Simply put, it is the value that y takes when x equals zero. Next, we will illustrate this with a real-life example.
Example of the Y-Intercept

Let's imagine you are driving down a long stretch of road with a single lane going in each direction. If your position starts at 0 (the spot where you are sitting in your car right now), then your y-intercept is 0, since you haven't moved yet! As you begin driving down the road and gaining speed, the distance you have traveled grows until you reach the end of the road or stop to make a turn. So although the y-intercept may not seem particularly relevant at first glance, it can provide insight into how quantities change over time and space as we move through the world. So, if you're ever stuck trying to understand this concept, remember that just about everything starts somewhere, even your trip down that straight road!

How to Find the Y-intercept of a Line

Now let's consider how we can find this value. To help with the process, we will outline the steps and then provide some examples to demonstrate them.

Steps to Find the y-intercept

The steps to find where a line crosses the y-axis are as follows:

1. Write the equation of the line in slope-intercept form (we will expand on this later in this article), which looks like this: y = mx + b
2. Substitute 0 for x
3. Solve for y

Now that we have gone through the steps, let's see how the procedure works with an example equation.

Example 1

Find the y-intercept of the line described by the equation: y = 2x + 3

Here, we substitute 0 for x and solve for y to find that the y-intercept is 3. Thus, the line crosses the y-axis at the point (0,3).

Example 2

As another example, consider the equation y = -5x + 2. If we again substitute 0 for x and solve for y, we find that the y-intercept is 2.
Therefore, the line crosses the y-axis at the point (0,2). What Is the Slope-Intercept Form? The slope-intercept form is a way of writing linear equations. It is the most common form used to express a straight line in mathematical and scientific applications. The slope-intercept form of a line is y = mx + b. In this equation, m is the slope of the line and b is the y-intercept. As we saw in the previous section, the y-intercept is the point where the line crosses the y-axis. The slope is a measure of how steep the line is: the rate of change in y with respect to x, or how much y changes for every unit that x changes. Now that we have reviewed the slope-intercept form, let's see how to use it to find the y-intercept of a line or a graph. Find the y-intercept of the line described by the equation: y = -2x + 5 Here we can see that m = -2 and b = 5, so the y-intercept is 5. Thus, the line crosses the y-axis at the point (0,5). We can take this a step further and describe the slope of the line. From the equation, we know the slope is -2. Substitute 1 for x and calculate: y = (-2*1) + 5 y = 3 This tells us that the next point on the line is (1,3): when x changed by 1 unit, y changed by -2 units. Grade Potential Can Support You with the y-intercept You will come back to the XY plane again and again across your math and science studies, and the concepts get more demanding as you advance from solving linear equations to working with quadratic functions. The time to master your understanding of y-intercepts is now, before you fall behind. Grade Potential offers expert tutors who will help you practice finding the y-intercept. Their customized explanations and practice problems can make a real difference in your exam results. Whenever you feel lost or stuck, Grade Potential is here to help!
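The substitution procedure above is mechanical enough to express in a few lines of code. This is an illustrative sketch; the function name is our own, not from the article:

```python
def y_intercept(m, b):
    """Return the y-intercept of y = m*x + b by substituting x = 0."""
    x = 0
    return m * x + b  # the m*x term vanishes, leaving b

# Example 1: y = 2x + 3 crosses the y-axis at (0, 3)
print(y_intercept(2, 3))   # 3

# Example 2: y = -5x + 2 crosses the y-axis at (0, 2)
print(y_intercept(-5, 2))  # 2

# Slope check for y = -2x + 5: stepping x by 1 changes y by -2
print((-2 * 1 + 5) - (-2 * 0 + 5))  # -2
```

The same substitution works for any line in slope-intercept form, which is why step 1 above asks for that form first.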
{"url":"https://www.alamedainhometutors.com/blog/y-intercept-meaning-examples-y-intercept-formula","timestamp":"2024-11-01T22:38:23Z","content_type":"text/html","content_length":"75557","record_id":"<urn:uuid:d50934af-4607-492a-a18e-d2e189a7b154>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00640.warc.gz"}
Introduction to Linear Algebra Linear algebra and calculus are the two most important foundational pillars on which modern mathematics is built. They are studied by almost all mathematics students at university, though typically labelled as different subjects and taught in parallel. Over time, students discover that linear algebra and calculus are inseparable (but not identical) twins that interlock to form the backbone of almost all applications of mathematics to physical and biological sciences, engineering and computer science. It is recommended that participants in the MOOC Introduction to Linear Algebra have already taken, or take in parallel, the MOOC Introduction to Calculus.
{"url":"https://www.coursera.org/learn/introduction-to-linear-algebra","timestamp":"2024-11-05T10:31:07Z","content_type":"text/html","content_length":"1031058","record_id":"<urn:uuid:7f627eab-7b05-41e6-b9ac-ee5ffa783089>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00174.warc.gz"}
Glossary - electowiki Voting theory contains many unique terms and symbols. Symbols from set theory, mathematics, and more are used very frequently throughout. Some articles with further glossary information are Pairwise counting#Terminology and binary relations theory. □ Minimal pairwise dominant set: Also known as the Smith set, it is the smallest dominating set, which is any group of candidates who beat all candidates not in the group. The pairwise champion will always be the only member of this set when they exist. ☆ Note that the terms dominating/dominant are often used as shorter versions of pairwise-dominant. □ pairwise — evaluating two candidates at a time. The following terms are often used when discussing pairwise preferences. □ Pairwise matchup: Also known as a head-to-head matchup, it is when voters are asked to indicate their preference between two candidates or winner sets, with the one that voters prefer (i.e. give more votes to) winning. It is usually decided by majority rule (i.e. if more voters prefer one candidate over the other than hold the opposing preference, then the candidate preferred by more voters wins the matchup) using choose-one voting, though see the Strength of preference section for alternative approaches. Pairwise matchups can be simulated from ranked or rated ballots and then assembled into a table to show all of the matchups simultaneously. □ Pairwise win/beat and pairwise lose/defeated: When one candidate receives more votes in a pairwise matchup/comparison against another candidate, the former candidate "pairwise beats" the latter candidate (is "pairwise preferred" to the latter candidate), and the latter candidate "pairwise loses."
Often this is represented by writing "Pairwise winner>Pairwise loser"; this can be extended to show a beatpath by showing, for example, "A>B>C>D", which means A pairwise beats B, B pairwise beats C, and C pairwise beats D (though it may or may not be the case, depending on the context, that, for example, A pairwise beats C). □ Pairwise winner and pairwise loser: The candidate who pairwise wins a matchup is the pairwise winner of the matchup (not to be confused with the pairwise champion; see the definition two spots below). The other candidate is the pairwise loser of the matchup. (Note that sometimes "pairwise loser" is also used to refer to a Condorcet loser, which is a candidate who is pairwise defeated in all of their matchups). □ Pairwise tie: Occurs when two candidates receive the same number of votes in their pairwise matchup. (Note that sometimes it is also called a tie when there is pairwise cycling, though this is different; see the definition two spots below.) Note that some cycles can be symmetrical ties i.e. you can swap the candidates' names without changing the result. (See the Condorcet paradox article for an example, and the neutrality criterion and tie for more information). □ Pairwise champion: Also known as a beats-all winner or Condorcet winner, it is a candidate who pairwise beats every other candidate. Due to pairwise ties (see above) and pairwise cycling (see below), there is not always a pairwise champion. □ Pairwise cycling: Also known as a Condorcet cycle, it is when within a set of candidates, each candidate has at least one pairwise defeat (when looking only at the matchups between the candidates in the set). □ Pairwise order/ranking: Also known as a Condorcet ranking, it is a ranking of candidates such that each candidate is ranked above all candidates they pairwise beat. Sometimes such a ranking does not exist due to the Condorcet paradox. 
As a related concept, there is always a Smith ranking that applies to groups of candidates, and which reduces to the Condorcet ranking when one exists.
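The pairwise notions above can be made concrete with a small sketch that tallies pairwise matchups from ranked ballots and looks for a pairwise champion (Condorcet winner). Function and variable names are ours, not electowiki's, and the sketch assumes every ballot ranks all candidates:

```python
from itertools import combinations

def pairwise_tally(ballots, candidates):
    """wins[(a, b)] = number of voters ranking a above b."""
    wins = {}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for r in ballots if r.index(a) < r.index(b))
        wins[(a, b)] = a_over_b
        wins[(b, a)] = len(ballots) - a_over_b
    return wins

def pairwise_champion(ballots, candidates):
    """Return the candidate who pairwise beats every other, or None
    (pairwise ties and Condorcet cycles mean a champion need not exist)."""
    wins = pairwise_tally(ballots, candidates)
    for c in candidates:
        if all(wins[(c, o)] > wins[(o, c)] for o in candidates if o != c):
            return c
    return None

# B beats A 2-1 and beats C 3-0, so B is the pairwise champion.
ballots = [["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]
print(pairwise_champion(ballots, ["A", "B", "C"]))  # B

# Classic Condorcet cycle A>B>C>A: no pairwise champion exists.
cycle = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
print(pairwise_champion(cycle, ["A", "B", "C"]))  # None
```

The second example is exactly the Condorcet paradox from the glossary: each candidate has at least one pairwise defeat, so the `all(...)` test fails for every candidate.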
{"url":"https://electowiki.org/wiki/Vocabulary","timestamp":"2024-11-08T04:21:56Z","content_type":"text/html","content_length":"47930","record_id":"<urn:uuid:49b086e8-68b8-4199-84ba-27817fd23ecc>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00224.warc.gz"}
Cite as Ting-Yu Kuo, Yu-Han Chen, Andrea Frosini, Sun-Yuan Hsieh, Shi-Chun Tsai, and Mong-Jen Kao. On Min-Max Graph Balancing with Strict Negative Correlation Constraints. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 283, pp. 50:1-50:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)
author = {Kuo, Ting-Yu and Chen, Yu-Han and Frosini, Andrea and Hsieh, Sun-Yuan and Tsai, Shi-Chun and Kao, Mong-Jen},
title = {{On Min-Max Graph Balancing with Strict Negative Correlation Constraints}},
booktitle = {34th International Symposium on Algorithms and Computation (ISAAC 2023)},
pages = {50:1--50:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-289-1},
ISSN = {1868-8969},
year = {2023},
volume = {283},
editor = {Iwata, Satoru and Kakimura, Naonori},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2023.50},
URN = {urn:nbn:de:0030-drops-193524},
doi = {10.4230/LIPIcs.ISAAC.2023.50},
annote = {Keywords: Unrelated Scheduling, Graph Balancing, Strict Correlation Constraints}
{"url":"https://drops.dagstuhl.de/search/documents?author=Tsai,%20Shi-Chun","timestamp":"2024-11-04T23:25:39Z","content_type":"text/html","content_length":"64054","record_id":"<urn:uuid:99551027-f106-4cce-94f7-e5a477dc07f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00717.warc.gz"}
T1: Coding Theory (2) Reminder: This post contains 1314 words · 4 min read · by Xianbin This post considers the following channel coding problem in the point-to-point system. A sender wants to reliably send a message \(M\) at a rate \(R\) bits per transmission to a receiver over a noisy communication channel. To this end, the sender first encodes the message into a codeword \(X^n\) and transmits it over the channel. Once the decoder receives the noisy sequence \(Y^n\), it decodes it into \(\hat M\). The goal is to find the channel capacity, i.e. the highest rate \(R\) such that the probability of decoding error can be made to decay to 0 asymptotically with the code block length \(n\). Discrete Memoryless Channels (DMC) A DMC consists of three parts: the finite input set \(\mathcal{X}\), the finite output set \(\mathcal{Y}\), and a collection of conditional probability mass functions \(p(y \mid x)\) on \(\mathcal{Y}\) for every \(x \in \mathcal{X}\). \(\textbf{Theorem 1}\) (Channel Coding Theorem). The capacity of the DMC \(p(y \mid x)\) is given by the information capacity formula \[C = \max_{p(x)}I(X;Y)\] Memoryless means that \[p(Y^n \mid X^n) = \prod^n_{i=1}p(y_i\mid x_i)\] \[W \to \textup{Encoder} \to X^n \to \textup{Channel } p(y\mid x)\to Y^n \to \textup{Decoder} \to \hat W\] Shannon’s second theorem shows that the information channel capacity is equal to the operational channel capacity, i.e. the highest rate (in bits per channel use) at which information can be sent with arbitrarily low error. Key Questions 1. How fast can we transmit information over a channel? 2. Can the probability of error be made arbitrarily close to 0?
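Theorem 1 turns capacity into an optimization over the input distribution \(p(x)\), which can be solved numerically. Below is a minimal sketch of the standard Blahut-Arimoto iteration for a DMC (pure Python, our own function names; the binary symmetric channel in the demo is our choice of example, not from the post):

```python
import math

def blahut_arimoto(W, iters=300):
    """Capacity (bits per channel use) of a DMC with transition matrix
    W[x][y] = p(y|x), computed by the Blahut-Arimoto iteration."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx                      # start from the uniform input
    for _ in range(iters):
        # Backward channel: q[y][x] proportional to p[x] * W[x][y]
        q = []
        for y in range(ny):
            col = [p[x] * W[x][y] for x in range(nx)]
            s = sum(col)
            q.append([c / s for c in col])
        # Re-estimate the input distribution: p[x] ~ exp(sum_y W log q)
        r = [math.exp(sum(W[x][y] * math.log(q[y][x])
                          for y in range(ny) if W[x][y] > 0))
             for x in range(nx)]
        z = sum(r)
        p = [v / z for v in r]
    # Evaluate I(X;Y) at the final input distribution
    py = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
    return sum(p[x] * W[x][y] * math.log2(W[x][y] / py[y])
               for x in range(nx) for y in range(ny) if W[x][y] > 0)

# Binary symmetric channel with crossover 0.1: C = 1 - H2(0.1) ≈ 0.531
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(round(blahut_arimoto(bsc), 3))  # 0.531
```

For the symmetric channel the uniform input is already optimal, so the iteration is a fixed point and the result matches the closed form \(1 - H_2(0.1)\).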
{"url":"https://blog-aaronzhu.site/coding-theory-2/","timestamp":"2024-11-05T23:14:17Z","content_type":"text/html","content_length":"4223","record_id":"<urn:uuid:957b9395-e7d7-46bd-a67e-203b2ba1c8bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00817.warc.gz"}
Melanie Schöllhammer strategy and design portfolio - Workshop "Digital Communication", Designschule München The aim of the workshop was to give design teachers an overview of current trends in digital communication, and to discuss the wider implications of digitization for the role of designers and for society as a whole. In the creative practice module, teachers were asked to empathize with their "user group", i.e. their students, and think of a common problem that students might encounter on a day-to-day basis. Then the teachers were asked to come up with a digital solution for this problem and to visualize their idea. They were given three questions to base their prototype on: Why is the product relevant? What is its specific use for the audience? How does it work? In the weeks following the workshop, the teachers implemented ideas, creative techniques and tools from the workshop in their lectures and student projects. Introduction, case studies and discussion Creative task: think of a common problem your students have
{"url":"https://melanieschoellhammer.de/workshop-digital-communnication-designschule-muenchen","timestamp":"2024-11-05T09:26:00Z","content_type":"text/html","content_length":"33771","record_id":"<urn:uuid:b3f39b1d-7937-4dd9-bc32-731207658341>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00569.warc.gz"}
A Force Prediction Model for the Plough Introducing its Geometrical Characteristics and its Comparison with Gorjachkin and Gee Clough Models. Volume 02, Issue 11 (November 2013) DOI: 10.17577/IJERTV2IS110250 Cite this Publication: Amara Mahfoud, Feddal Mohamed Amine, 2013, A Force Prediction Model for the Plough Introducing its Geometrical Characteristics and its Comparison with Gorjachkin and Gee Clough Models., INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 02, Issue 11 (November 2013). • Open Access • Total Downloads: 341 • Authors: Amara Mahfoud, Feddal Mohamed Amine • Paper ID: IJERTV2IS110250 • Volume & Issue: Volume 02, Issue 11 (November 2013) • Published (First Online): 02-12-2013 • ISSN (Online): 2278-0181 • Publisher Name: IJERT • License: This work is licensed under a Creative Commons Attribution 4.0 International License Amara Mahfoud, Feddal Mohamed Amine Laboratoire de Machinisme Agricole Génie Rural - ENSA To calculate the draught force of agricultural tools for ploughing, several mathematical models have been proposed. These models generally disregard the geometrical characteristics of the active surfaces of the working parts. For this reason, tests on a traction channel were carried out to check the validity of two very frequently used models, namely those of Gorjachkin and Gee Clough. The results showed that for the same form and under identical working conditions, the predicted efforts differed markedly from one model to the other. Tests were also carried out on two forms of active surface: the effort calculated using either of the two models alone is the same for the two different surfaces.
Whereas the values determined on the channel are completely different from one form to another. Hence the interest in proposing a more universal model relating the effort to the state of the soil and, above all, to the geometrical characteristics of the active surfaces. The model established by the modelling method (Buckingham-Vachy) has the form:

Ft = μ·R·ρd·g·b³·(v²/(g·b))^0.15 · E^4.13 · γ^5.94 · ε^-16.01 · k^0.98 · k1^12.98 · k2^-2.74

This model was then checked and compared with the Gorjachkin and Gee Clough models on two forms of active surfaces of mouldboard ploughs made in Algeria by the companies ENPMA (farming form) and SACRA (cylindrical form). The efforts calculated using this model are closer to the values measured on the channel than those calculated with the Gorjachkin and Gee Clough models. Key words: Energy, Effort, Speed, ploughing, geometrical characteristic, modelling, Width, depth
1. Introduction In the last few years several mathematical models have been developed for evaluating the effort that the soil opposes to the advance of the working parts. These models are of two types: the first, two-dimensional, concerns tools known as simple, such as blades and ploughshares; the second, three-dimensional, concerns tools with complex active surfaces like those of plough bodies. In what follows the interest is in this second type. The models usually used for the effort determination are listed chronologically below:
Gorjachkin and Soehne model (1960): Ft = f·G + K·a·b + ε·a·b·v²
Larson et al. model (1968): C 1.50 v 2 F .b3 0.42 1.53.tg .0.23. 0.42 1.53.tg .0.035.
Binesse model (1970): Ft = S·C·(0.85 + sin …)/cos …
Gee Clough et al. model (1972): Ft = a·b·(13.30·γ·a + 3.06·γ·v²/g)
Kuczewski model (1978): Ft = F_XY + F_ZY + F_XZ
Oskoui et al. model (1982): v 2 G K1 □ K 2 . .(1 cos ).
Grisso et al. model (1983): Ft = a·b·(γ·b·Nγ + c·Nc + Ad·Na)
Qiong et al. model (1986): Ft = γ·a·(b1 + b2·v²)
The analysis of these models shows that, in a general way, these models introduce the depth of work, its width and the speed, as well as physical and soil mechanics characteristics, like cohesion and density.
However, the geometrical characteristics of the active surfaces, such as the working angles and the characteristic dimensions of the surfaces, are not taken into account. Among the models quoted above, the model suggested by Gorjachkin introduces a coefficient characterizing the shape of the plough used. Considering the complexity of the shapes of the many existing ploughs, the determination of this coefficient is very difficult; its values lie between 1500 and 2000 N·s²/m⁴. According to Ros V. (1993), where the angles and dimensions of active plough surfaces have been studied, it was practically always within the framework of describing these working parts, or of assessing their effects on the qualitative indices of soil work, but not of calculating the effort. The work of Nichols and Kummer (1932), Doner and Nichols (1934), and GaoQiong et al. (1986) was carried out to describe the active surface of the plough, to classify the forces produced during ploughing, and to relate these forces to the properties of the soil. We will also note that several of the models predicting the effort for the plough, like those of Larson et al. (1968) and Gee Clough et al. (1972), were developed on the basis of dimensional analysis. Lastly, the choice of one of these models for a precise evaluation of the effort is often delicate. Indeed, if we consider for example the models of Gee Clough (1972) and Gorjachkin (1960), we notice that for the same form of active surface and under the same soil and operating conditions, the values obtained are very different. Which, then, is the most reliable model for a precise evaluation of energy consumption? To answer this need, the objective of this work is to propose a mathematical model giving the effort with more precision and taking account of the form of the active surfaces of the plough.
The geometrical characteristics selected for the mathematical modelling are respectively: the angle of penetration γ; the angle of attack α; the angle of inclination of the active surface ε; the ratio k = a/b; the ratio k1 = L1/h; the ratio k2 = d1/d3. The k1 and k2 ratios were selected in order to differentiate the two studied forms of active surface. In addition to these parameters, the speed (v) and the dry density of the soil (ρd) were taken into account, considering their unquestionable effects on the effort. 2. Materials and Method After geometrical characterization of the two shapes of active surfaces studied (cylindrical and farming forms), three small-scale models (scales 1/4, 1/3 and 1/2) of each form were built (figures 5, 6 and 7). These small-scale models were used to determine the effort (Ft) on a traction channel (figure 4). The use of the channel allowed control of the working conditions and a proper analysis of the effect of the geometrical characteristics of the active surfaces on the effort (Ft). The results obtained allowed the establishment of a mathematical model of the effort Ft taking account of the geometrical characteristics of the active surface. The established model was then checked and compared with the Gorjachkin and Gee Clough models. 1. Geometrical characterization of the two active surfaces Our tests concerned two plough bodies (figures 1 and 2) most commonly used on Algerian farms. figure 1: ENPMA form (farming form) figure 2: SACRA form (cylindrical form) The main geometrical features of the two plough shapes are given in table 1:

Table 1: Constructive characteristics of the ploughs used
Body of plough                ENPMA (farming)   SACRA (cylindrical)
Height of the body h (mm)     440               425
Projected length l (mm)       940               740
Width b (mm)                  350               310
Angle of penetration γ (°)    29                17
Angle of attack α (°)         38                39
Angle of inclination ε (°)    35                33

figure 3: Dimensional specifications of a plough used to determine k1 and k2 2.
Effort analysis required by these two forms The effort analysis was carried out on a traction channel (figure 4) with plough models at three reduced scales, 1/4, 1/3 and 1/2 (figures 5, 6 and 7). figure 4: Small-scale plough models figure 5: Models reduced to scale 1/4 mounted on the traction channel figure 6: Models reduced to scale 1/3 figure 7: Models reduced to scale 1/2 3. Modelling of the effort The principal stages of the establishment of the mathematical model are respectively: ☆ Establishment of the general equation: the required general equation is of the form Ft = f(E, ρd, v, γ, ε, k, k1, k2, g). ☆ Definition and characterization of all the parameters of the equation. The various parameters of this equation are defined in table 2:

Table 2: Definition and characterization of the parameters of the equation
Parameter                                                 Symbol  Unit    Dimensions
Dependent parameter:
Effort                                                    Ft      daN     [M·L·T⁻²]
Independent parameters (working conditions):
Speed                                                     v       m/s     [L·T⁻¹]
Dry density of the soil                                   ρd      g/cm³   [M·L⁻³]
Scale                                                     E       –       –
Constructive angles:
Angle of penetration                                      γ       rad     –
Angle of attack                                           α       rad     –
Angle of inclination                                      ε       rad     –
Length ratios:
Depth/width of work                                       k       –       –
Maximum length of the plough / maximum height             k1      –       –
Back width of the plough / width at the point of
maximum curvature of the plough                           k2      –       –
Acceleration due to gravity                               g       m/s²    [L·T⁻²]

■ Correlation between the dependent parameter (Ft) and the independent ones (v, ρd, E, γ, ε, k, k1, k2): the interest of this analysis is to confirm the significant effect of these various parameters on the effort and to decide whether or not to retain them in the final equation.
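Before the π-terms are formed in the next step, it is worth checking that the two dimensional groups used in the analysis, Ft/(ρd·g·b³) and v²/(g·b), are in fact dimensionless. A small sketch in an [M, L, T] exponent basis (our own encoding of the Table 2 dimensions):

```python
# Dimensions as (M, L, T) exponent triples, following Table 2
FT  = (1, 1, -2)   # effort: a force, M·L·T^-2
RHO = (1, -3, 0)   # dry density rho_d: M·L^-3
G   = (0, 1, -2)   # gravitational acceleration: L·T^-2
B   = (0, 1, 0)    # working width: L
V   = (0, 1, -1)   # speed: L·T^-1

def combine(*terms):
    """Sum the dimension exponents of a product of (dims, power) factors."""
    return tuple(sum(d[i] * p for d, p in terms) for i in range(3))

# pi_1 = Ft / (rho_d * g * b^3) must be dimensionless
pi1 = combine((FT, 1), (RHO, -1), (G, -1), (B, -3))
# pi_2 = v^2 / (g * b) must be dimensionless
pi2 = combine((V, 2), (G, -1), (B, -1))
print(pi1, pi2)  # (0, 0, 0) (0, 0, 0)
```

Both groups cancel to zero exponents in M, L and T, which is exactly what the Buckingham π-theorem requires of the terms entering the power-product model.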
As regards the angle of attack α, being indirectly considered in the ratio k = a/b (sin α = b / length of the share edge of the plough), it is not introduced into the model. The equation obtained by polynomial regression is the following:

Ft = -39.71 + 54.86·E + 32.83·ρd + 13.36·v + 84.33·γ - 222.45·ε + 30.75·k + 21.84·k1 + 13.95·k2

The analysis of this relation makes it possible to rank the studied parameters by the importance of their effect on the effort. We notice that the coefficient of the angle of inclination ε has the greatest absolute value (222.45); this shows the importance of the effect of the form of the active surface on the effort Ft. ■ Determination of the adimensional parameters (π-terms). The adimensional parameters defined are respectively: π1 = Ft/(ρd·g·b³); π2 = v²/(g·b); π3 = k; π4 = k1; π5 = k2; π6 = γ; π7 = ε. Taking account of the Buckingham-Vachy theorem (in Langhaar H.L. 1954), the final relation will be of the form:

Ft/(ρd·g·b³) = f(v²/(g·b), k, k1, k2, γ, ε)

And according to Kuczewski (1982) this equation can be written as a product of powers:

Ft/(ρd·g·b³) = Cste · (v²/(g·b))^a · k^b · k1^c · k2^d · γ^e · ε^f

The problem thus amounts to determining the values of the powers a, b, c, d, e, f and the constant Cste; for that, the use of the properties of logarithms is necessary. Tests on the traction channel were carried out in order to determine the effect of the various parameters on the effort. The final model giving the effort Ft in relation to the geometrical characteristics of the active surfaces is:

Ft = μ·R·ρd·g·b³·(v²/(g·b))^0.15 · E^4.13 · γ^5.94 · ε^-16.01 · k^0.98 · k1^12.98 · k2^-2.74

The values of R (proportionality factor) are respectively R = 1.931 for the cylindrical form and 1.976 for the farming form, for the small-scale model at scale 1/2.
The values of μ (coefficient of correction) are respectively: μ = 1000 for the SACRA form (cylindrical plough) and μ = 10 for the ENPMA form (farming plough). The two values allotted to μ show that the form of the active surface has an important effect on the effort. The units of the various parameters of this model are: effort Ft (daN); speed v (m/s); angles γ and ε (radians); apparent density ρd (kg/m³); gravitational acceleration g (m/s²); width of work b (m); form characteristics k, k1 and k2 (dimensionless). 4. Application of the model Applying this relation under the real working conditions quoted below gives the following results. Real working conditions: ☆ Ploughing speed: v = 1.5 m/s (5.4 km/h). ☆ Soil density: ρd = 1.29 g/cm³ (1290 kg/m³); this conversion is necessary for the application of the Gorjachkin and Gee Clough models. ☆ Ploughing width: b = 0.31 m for the SACRA form and b = 0.35 m for the ENPMA form. ☆ Ploughing depth: a = 0.25 m. ☆ Ratio k = a/b: k = 0.806 for the SACRA form and k = 0.714 for the ENPMA form. Geometrical characteristics of the two plough active surfaces: k1 = L1/h = 1.714 for the SACRA form and k1 = 2.136 for the ENPMA form; k2 = d1/d3 = 1.290 for the SACRA form and k2 = 1.464 for the ENPMA form. SACRA form: γ = 17 degrees = 0.297 rad; ε = 33 degrees = 0.576 rad. ENPMA form: γ = 29 degrees = 0.506 rad; ε = 35 degrees = 0.611 rad. Using these values in the suggested relation, for a real plough size, we obtain:

Table 3: Effort Ft calculated using the suggested model
Speed (m/s)              0.23     0.29     0.43     0.87     1.50
Ft (SACRA form) (daN)    104.50   112.03   126.08   155.76   183.42
Ft (ENPMA form) (daN)    304.38   326.30   367.23   453.69   534.23

3. Comparison between the established model and those of Gorjachkin and Gee Clough In order to check the reliability of the established model, a comparative analysis with the frequently used Gorjachkin and Gee Clough models was carried out. For that, certain parameters used in these last two models must first be defined.
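These operating values plug directly into the Gorjachkin and Gee Clough relations used in the comparison that follows. A sketch, with two hedges: the Gorjachkin weight-friction term f·G is dropped (the comparison tables appear to neglect it), and g = 10 m/s² is assumed in the Gee Clough term, since that reproduces the published numbers; the function names are ours:

```python
def gorjachkin_draft(K, a, b, eps_form, v):
    """Gorjachkin draft with the f*G term omitted (assumed negligible):
    Ft = K*a*b + eps_form*a*b*v**2, in daN."""
    return K * a * b + eps_form * a * b * v**2

def gee_clough_draft(gamma, a, b, v, g=10.0):
    """Gee Clough draft: Ft = a*b*(13.30*gamma*a + 3.06*gamma*v**2/g).
    gamma is the soil density (1290 here); g = 10 m/s^2 is an assumption
    that matches the tabulated values."""
    return a * b * gamma * (13.30 * a + 3.06 * v**2 / g)

speeds = [0.23, 0.29, 0.43, 0.87, 1.50]
# SACRA: b = 0.31 m, form coefficient 200; ENPMA: b = 0.35 m, coefficient 150
sacra_gor = [gorjachkin_draft(3500, 0.25, 0.31, 200, v) for v in speeds]
enpma_gee = [gee_clough_draft(1290, 0.25, 0.35, v) for v in speeds]
print([round(x, 2) for x in sacra_gor])  # close to Table 4's SACRA row
print([round(x, 2) for x in enpma_gee])  # close to Table 5's ENPMA row
```

Under these assumptions the sketch reproduces the SACRA row of Table 4 and both rows of Table 5 to within rounding, which supports the reading of the two garbled formulas given in the comparison section.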
The relations of these two researchers for the effort are respectively:

Gorjachkin: Ft = f·G + K·a·b + ε·a·b·v²
Gee Clough: Ft = a·b·(13.30·γ·a + 3.06·γ·v²/g)

The tests having been carried out on the same type of tilled soil (light soil texture), the value of K is the same; it is 3500 daN/m², which is the upper limit for light soils and the lower limit for so-called average soils. These values are applied in the Gorjachkin model. The values chosen for the form coefficients are respectively 200 daN·s²/m⁴ for the cylindrical form and 150 daN·s²/m⁴ for the farming form. The choice of this parameter is often very delicate, the number of forms of active surfaces being very large. The effort values obtained with these models, for the same speed and soil conditions, are:

Table 4: Effort Ft calculated using the Gorjachkin model
Speed (m/s)               0.23     0.29     0.43     0.87     1.50
Ft (SACRA form) (daN)     272.07   272.55   274.11   282.98   306.12
Ft (ENPMA form) (daN)     306.94   307.35   308.67   316.18   335.78

Table 5: Effort Ft calculated using the Gee Clough model
Speed (m/s)               0.23     0.29     0.43     0.87     1.50
Ft (SACRA form) (daN)     334.03   334.99   338.07   355.57   401.25
Ft (ENPMA form) (daN)     377.13   378.21   381.69   401.45   453.02

The application of these two models for the effort determination confirms the results of our work, namely that the ENPMA farming form is more demanding in energy for carrying out the ploughing. This highlights the importance of the geometrical characteristics of the active plough surface introduced in the established model. These results are illustrated in the following graphs (Fig. 8, a and b). Simplifying the Gorjachkin model to the form Ft = K·a·b, without taking into account the speed and the form of the active surfaces, would give the same value of Ft whatever the form of the active plough surface. a) Comparison of Ft in relation to the chosen model for the cylindrical form
b) Comparison of Ft in relation to the chosen model for the farming form figure 8: Comparative analysis of the efforts Ft as a function of forward speed, for each model and for each form: a) cylindrical and b) farming. Analysis and discussion The analysis of the established model clearly shows the effect of the geometrical characteristics of the active surface on the effort. The angle of inclination ε and the characteristic k1 are the geometrical characteristics with the greatest influence on the effort: when ε increases the effort decreases, whereas when k1 increases the effort increases. The suggested model can be used for the evaluation of the draught force for any form of active plough surface, whereas the Gorjachkin and Gee Clough models are usable only for particular forms: for cylindrical forms the Gee Clough model is used, and for farming forms the Gorjachkin model. Two cases are to be considered for the use of this model. If the model is used by an agronomist to evaluate the effort required for ploughing, some of the parameters of the relation are constants (constructive parameters), such as the angles, the parameter k (depth/width) and the parameters k1 and k2; the agronomist will therefore be interested in choosing the working speed and the soil density so as to select the best working conditions and reduce the energy needs of ploughing. If, on the other hand, the relation is used by a designer of agricultural tools, he will be interested more particularly in the constructive parameters, while obviously taking account of the working conditions and the agrotechnical requirements of ploughing. That will allow the design of ploughs adapted to preset conditions. Binesse M., 1970: Cisaillement et résistance spécifique du sol lors du labour classique. Etudes du CNEEMA, n° 341-342, France. Doner, R.D.
and Nichols, M.L., 1934: The Dynamic Properties of Soil V. Dynamics of Soil on Plow Mouldboard Surfaces Related to Scouring. Journal of ASAE, Vol 15, n° 1: 9-13. GaoQiong; Pitt, R.E.; Ruina, A., 1986: A Model to Predict Soil Forces on the Plough Mouldboard. Journal of Agricultural Engineering Research 35, p. 141-155. Gee Clough, D.G. et al., 1978: The empirical prediction of tractor implement field performance. J. Terramechanics, 15 (2): 81-94. Gorjatchkin, V.P. et Sohene, 1960: Collected Works in Three Volumes. Ed. N. D. Luchinskii. Translated 1972. Jerusalem, Israel: Ketter Press. Grisso et al., 1983: A soil model based on limit equilibrium analysis. Transactions of the ASAE, vol. 26, n° 4, p. 991-995. Kuczewski J., 1978: Eléments Théoriques des Machines Agricoles. Edition Varsovie, Pologne. Langhaar H. L., 1954: Dimensional Analysis and Theory of Models. New York: John Wiley and Sons, Inc. Larson, L. W. et al., 1968: Predicting draft forces using mouldboard plows in agricultural soils. Transactions of the ASAE, 11: 665-668. Nichols, M.L. and Kummer, T. H., 1932: The Dynamic Properties of Soil IV. A Method of Analysis of Plow Moldboard Design Based Upon Dynamic Properties of Soil. Agricultural Engineering 13(11): 279-285. Oskoui K.E. et al., 1982: The Determination of Plough Draught. Part II. The Measurement and Prediction of Plough Draught for Two Mouldboard Shapes in Three Soil Series. Journal of Terramechanics, 19, p. 153-164. Ros V. et al., 1993: Analysis of a Tillage Tool Geometry. ASAE Paper nr 93, St. Joseph, Mich., USA.
{"url":"https://www.ijert.org/a-force-prediction-model-for-the-plough-introducing-its-geometrical-characteristics-and-its-comparison-with-gorjachkin-and-gee-clough-models-2","timestamp":"2024-11-13T07:36:15Z","content_type":"text/html","content_length":"88880","record_id":"<urn:uuid:a15dd5d0-72a2-4c3b-8a9f-57942ab61fa3>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00499.warc.gz"}
ball mill foundation design

The paper presents the results of the modal analysis of a ball mill foundation, an element of the processing technological line in an ore enrichment plant in Poland. The modal analysis was performed in two ways: numerically, using FEM, and experimentally, using OMA. ... The results show a surprisingly good supporting structure design in an era ... WhatsApp: +86 18838072829

This project is to design and fabricate a mini ball mill that can grind various types of materials in the solid state into nanopowder. The cylindrical jar is used as a mill that would rotate the ...

Ball Mill Foundation Drawing Pdf. Hot sale copper ball mill pdf design drawing ... and construction of ... see ball mill assembly diagrams and ...

homemade ball mill plans. By this drawing, it is suggested that a typical homemade laboratory rod mill or ball mill might be fabricated from 20 cm (8 inch) diameter schedule-40 type 316 stainless steel pipe and would be about 38 cm (15 inches) long. The plans show that the stainless steel grinding rods for this size of mill may be a graduated charge from 25 to 10 mm diameter (1 inch to 1/2 inch), but ...

Dun & Bradstreet gathers Foundation, Structure, and Building Exterior Contractors business information from trusted sources to help you understand company performance, growth potential, and competitive pressures. View 87 Foundation, Structure, and Building Exterior Contractors company profiles below.
Design of sole plates; fabrication, machining and installation of your equipment by our skilled, trained and knowledgeable craftsmen. Our skilled millwrights are experts at setting sole plates and grouting them in place. The millwrights are experienced with correct sole plate design. A correctly designed and installed foundation sole plate will prevent failures: a. lower vibration by increasing ...

Proven mill design. Buying a new mill is a huge investment. With over a century of ball mill experience and more than 4000 installations worldwide, rest assured we have the expertise to deliver the right solution for your project. Our ball mill is based on standard modules, and the highly flexible design can be adapted to your requirements.

Foundation of ball mill with GMD: there are a number of ball mills in operation around the world with diameters up to 8 m. The aspect ratio L/D varies for ball mills; L/D > 1 (typically ...).

A ball mill, a type of grinder, is a cylindrical device used in grinding (or mixing) materials like ores, chemicals, ceramic raw materials and paints. Ball mills rotate around a horizontal axis, partially filled with the material to be ground plus the grinding medium. Different materials are used as media, including ceramic balls, flint pebbles ...

ball mill: n. porcelain jar containing rollers spun on larger rollers, used to grind substances, such as homeopathic remedies, to a fine powder.

OBJECTIVE: To design the ball mill machine foundation. SCOPE: To do a thorough study of the ball mill machine foundation; analysis, which includes the calculation of static and dynamic loads acting on the ball mill machine foundation in different conditions; and design of the ball mill machine foundation, which includes the design of the pile group, raft and pedestals.

Introduction.
The ball mill is the key equipment for grinding the minerals after the ore is crushed. With the continuous development of the industrial level, the development of ball mills is also moving towards the trend of large scale [1]. Due to the large shock force generated during the operation of a large ball mill, the foundation of the ball mill will vibrate [5, 6].

ABSTRACT: The dynamic analysis of a ball mill foundation is a typical problem of soil-structure interaction, and the substructure method is used to estimate the structural vibration. ... Ball mill shell-supported design: in the mining industry, ball mills normally operate with an approximate ball charge of 30% and a rotational speed close to 11 rpm.

Combined with the design of a Φ × 13 m three-chamber ball mill, the design process of the ball mill is described in detail. ... supported by the National Science Foundation for Young ...

Quantum Nanostructures (QDs): An Overview. D. Sumanth Kumar, ... Mahesh, in Synthesis of Inorganic Nanomaterials, 2018. Ball Milling: a ball mill is a type of grinder used to grind and blend bulk material into QDs/nanosize using different sized balls. The working principle is simple: impact and attrition size reduction take place as the balls drop from near the top of the rotating ...

Mill Type Overview. Three types of mill design are common.
The Overflow Discharge mill is best suited for fine grinding to 75–106 microns. The Diaphragm or Grate Discharge mill keeps coarse particles within the mill for additional grinding and is typically used for grinds of 150–250 microns. The Center-Periphery Discharge mill has feed reporting from both ends and the product discharges ...

It is highly acknowledged for effective and quick grinding in several industries such as limestone, cement, coal, iron ore, chrome ore and many others. The major highlight of the mill is its fully automatic function with PLC control and instrumentation. We have been a manufacturer and supplier of ball mills since 1980, with 400 installations worldwide ...

Mill, gear and pinion friction multiplier: mill power required at the pinion shaft = (240 x ... x ...) ÷ ... = 5440 HP. Speed reducer efficiency: 98%. 5440 HP ÷ 0.98 ≈ 5550 HP (required minimum motor output power). Therefore select a mill to draw at least 5440 HP at the pinion shaft.

CERAMIC LINED BALL MILL. Ball mills can be supplied with either ceramic or rubber linings for wet or dry grinding, for continuous or batch type operation, in sizes from 15″ x 21″ to 8′ x 12′. High density ceramic linings of uniform hardness make possible thinner linings and greater, more effective grinding volume.

The silo was constructed on a relatively stiff circular raft foundation 25 m in diameter, to withstand a design average maximum bearing pressure of 300 ... Clinker, gypsum, and fly ash are fed to a ball mill by a feed conveyor in a certain proportion. The ball mill has two chambers. In the first chamber, the material is crushed roughly with ...
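The motor-sizing arithmetic above (pinion-shaft power divided by the speed-reducer efficiency to get the minimum motor rating) can be sketched in a few lines. The 5440 HP and 98% figures come from the text; the exact quotient rounds to roughly the 5550 HP the text quotes.

```python
# Minimum motor output power from required pinion-shaft power and
# speed-reducer efficiency, as in the sizing example in the text.
def min_motor_power(pinion_hp, reducer_efficiency):
    """The motor must supply the pinion power plus reducer losses."""
    return pinion_hp / reducer_efficiency

motor_hp = min_motor_power(5440.0, 0.98)
print(f"required minimum motor output: {motor_hp:.0f} HP")  # about 5550 HP
```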
Dry and wet ball mills have the same basic components, but there are some structural differences. Discharging part — discharging port: a dry ball mill needs to be equipped with an air induction device, a dust exhaust pipe and a dust collector. The structure is more complicated, and the discharge port is straight.

Design of a ball mill with a capacity of 440 liters for mixing of alkyd resin, pigment and solvent. Design of mill: assuming length of mill = outside diameter of mill [8], ... differential energy required to produce this lifting effort. Volume = ..., R = 412 mm, D = 824 mm ≈ 825 mm ...

The mill was expected to produce a product of 80% passing 150 μm. The feed rate to the mill was 300 t/h. The ball mill grindability test at 65 mesh showed 12 kWh/t. The internal diameter of the ball mill was ... m and the length-to-diameter ratio was ... The steel balls occupied 18% of the mill. The total load occupied 45% of the mill volume.

iEE has extensive expertise in the design, detailing, and review of mining structures including: the crusher building (supporting the rock breaker, crusher and conveyors); the SAG mill support foundation (a massive structure to support the SAG mills and their vibrations); and the ball mill support foundation (a massive structure to support the ball mills).
Which oil is best for a ball mill trunnion ...

For a small ball mill, with a mill diameter less than ... m and small dynamic loads, the method of free vibration analysis (also called modal analysis) can be used. The natural frequencies of the foundation and piers can be calculated using free vibration analysis to avoid resonance.

The basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are: material to be ground, characteristics, Bond Work Index, bulk density, specific density, desired mill tonnage capacity (DTPH), operating % solids or pulp density, feed size as F80 and maximum chunk size, product size as P80, and finally the type of circuit (open/closed) ...

... hemisphere. It employs the world's largest SAG mill, 40 ft in diameter, and two 22 ft ball mills in the grinding circuit. In that project, the owners considered it necessary to assess the dynamic response of the foundations for the SAG and ball mills due to vibration problems experienced in other installations. Numerical models of the foundations ...

Ball Mill Application and Design. Ball mills are used for the size reduction or milling of hard materials such as minerals, glass, advanced ceramics, metal oxides, solar cell and semiconductor materials, nutraceuticals and pharmaceutical materials down to 1 micron or less. The residence time in ball mills is long enough that all particles get ...
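The sizing parameters listed above (Bond Work Index, feed size F80, product size P80, tonnage) are the inputs to Bond's standard power calculation, W = 10·Wi·(1/√P80 − 1/√F80). As a hedged sketch, the snippet below applies it to the figures quoted earlier in the text (Wi = 12 kWh/t from the grindability test, 80% passing 150 μm, 300 t/h feed rate); the feed size F80 is a hypothetical assumption, since the text does not give it.

```python
import math

def bond_specific_energy(wi_kwh_per_t, f80_um, p80_um):
    """Bond's equation W = 10*Wi*(1/sqrt(P80) - 1/sqrt(F80)); sizes in microns."""
    return 10.0 * wi_kwh_per_t * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

wi = 12.0           # kWh/t, grindability test result from the text
p80 = 150.0         # um, product size from the text
f80 = 2000.0        # um, hypothetical feed size (not given in the text)
throughput = 300.0  # t/h, feed rate from the text

w = bond_specific_energy(wi, f80, p80)
power_kw = w * throughput  # specific energy times tonnage gives mill power
print(f"specific energy = {w:.2f} kWh/t, mill power = {power_kw:.0f} kW")
```

With these assumptions the specific energy comes out a little over 7 kWh/t; in practice the result is then adjusted by Bond's efficiency factors before a mill is selected.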
{"url":"https://agencja-afisz.pl/8897/ball-mill-foundation-design.html","timestamp":"2024-11-12T06:32:57Z","content_type":"application/xhtml+xml","content_length":"30726","record_id":"<urn:uuid:2f79aa90-56d7-4052-8651-827a15a10e84>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00363.warc.gz"}
Rank-Nullity Theorem - (Linear Algebra and Differential Equations) - Vocab, Definition, Explanations | Fiveable

Rank-Nullity Theorem from class: Linear Algebra and Differential Equations

The rank-nullity theorem is a fundamental result in linear algebra that relates the dimensions of the kernel and the image of a linear transformation to the dimension of the domain. Specifically, it states that for a linear transformation from a vector space to another, the sum of the rank (the dimension of the image) and the nullity (the dimension of the kernel) equals the dimension of the domain. This theorem highlights key aspects of linear transformations and provides insights into their structure and properties.

congrats on reading the definition of Rank-Nullity Theorem. now let's actually learn it.

5 Must Know Facts For Your Next Test

1. The rank-nullity theorem can be expressed mathematically as: $$\text{dim}(\text{Ker}(T)) + \text{dim}(\text{Im}(T)) = \text{dim}(V)$$ where $$T$$ is a linear transformation, $$V$$ is its domain, Ker is kernel, and Im is image.

2. This theorem applies to any linear transformation between finite-dimensional vector spaces, providing a powerful tool for understanding their structure.

3. If a linear transformation has rank equal to the dimension of its domain, then its nullity is zero, meaning it is injective or one-to-one.

4. Conversely, if a linear transformation has a nullity greater than zero, it means there are non-trivial solutions to the homogeneous equation associated with it, indicating it's not injective.

5. In practical applications, knowing the rank and nullity helps in solving systems of linear equations by providing information about consistency and the number of free variables.

Review Questions

• How does the rank-nullity theorem relate to solving systems of linear equations? □ The rank-nullity theorem provides insights into solving systems of linear equations by revealing relationships between consistent and inconsistent systems.
If the rank equals the number of variables, then there are unique solutions. However, if the nullity is greater than zero, it indicates free variables exist, leading to infinitely many solutions. This understanding helps determine whether a system can be solved and how many solutions might exist. • Compare and contrast the concepts of rank and nullity in terms of their impact on the properties of linear transformations. □ Rank measures how much information a linear transformation retains by assessing how many dimensions are covered in its image, while nullity measures how many dimensions are lost by identifying vectors that map to zero. A higher rank indicates more preserved information about input vectors, whereas a higher nullity reveals more vectors being collapsed into zero. Together, these concepts illustrate how transformations can alter dimensions and influence whether they are injective or surjective. • Evaluate the implications of applying the rank-nullity theorem in determining the behavior of specific linear transformations within finite-dimensional spaces. □ Applying the rank-nullity theorem allows us to evaluate linear transformations by determining their injectivity or surjectivity based on their ranks and nullities. For instance, if we find that a transformation has full rank, we conclude that it is both injective and surjective, making it an isomorphism. On the other hand, if we observe a high nullity, it may indicate redundancy among inputs, affecting how solutions can be characterized in practical scenarios like computer graphics or engineering systems. © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
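As a quick numerical check of the theorem, here is a short Python sketch (using NumPy, assumed available) that computes the rank and nullity of a matrix representing a linear transformation and verifies that they sum to the dimension of the domain.

```python
import numpy as np

# A linear transformation T: R^4 -> R^3 represented by a 3x4 matrix.
# Its third column equals the sum of the first two, so the kernel is nontrivial.
A = np.array([[1., 0., 1., 2.],
              [0., 1., 1., 0.],
              [1., 1., 2., 1.]])

domain_dim = A.shape[1]          # dim(V) = number of columns = 4
rank = np.linalg.matrix_rank(A)  # dim(Im(T))
nullity = domain_dim - rank      # dim(Ker(T)), by the theorem

print(rank, nullity)             # prints: 3 1
assert rank + nullity == domain_dim  # rank-nullity: rank + nullity = dim(V)
```

Here the nullity of 1 signals one free variable in the homogeneous system Ax = 0, matching the "free variables" interpretation in fact 5 above.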
{"url":"https://library.fiveable.me/key-terms/linear-algebra-and-differential-equations/rank-nullity-theorem","timestamp":"2024-11-12T15:19:45Z","content_type":"text/html","content_length":"157838","record_id":"<urn:uuid:a0bd400f-385d-40eb-af5d-4e0b53af593e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00490.warc.gz"}
Data Science vs. Statistics - One in the Same? - insideAI News I recently ran across a thought-provoking post on the USC Anneberg Innovation Lab blog – “Why Do We Need Data Science when We’ve Had Statistics for Centuries.” With all the debate of late surrounding the relatively new “data science” term, I’ve been thinking a lot about this question, so I thought I’d analyze this notion here on insideAI News by picking apart the article. I’d love to hear your take on this, so feel free to leave a note. Here are some excerpts from the article along with my commentary: Use of the term data science is increasingly common, as is big data … but what does it mean? Is there something unique about it? What skills do data scientists need to be productive in a world deluged by data? What are the implications for scientific inquiry?” This is the big question, how does data science differ from statistics and computer science? I think the answer is related to big data, but not exclusively so. Big data does require the use of a very different technology stack than used previously with statistical analysis. Hadoop represents a paradigm shift to address these needs. So a statistician from 20 years ago, would not be equipped to deal with doing analysis on huge data sets on a time-scale that’s often required by today’s business applications. Does data science involve scientific inquiry? You betcha! I see data science in the same light as say the data analysis phase of an astrophysics or genomics project. You’re applying the scientific method with data collected using scientific principles. In a previous life, I carried out the scientific method with astrophysical data sets on a routine basis. Now that I’m doing business-oriented data science, I don’t really see a difference. … defines data science as being essentially the systematic study of the extraction of knowledge from data. But analyzing data is something people have been doing with statistics and related methods for a while. 
Why then do we need a new term like data science when we have had statistics for centuries? The fact that we now have huge amounts of data should not in and of itself justify the need for a new term."

This is the same observation many people are making these days, and the question is quite valid. As I stated above, the 3 V's of big data definitely contribute to the need for a new science of data. But that's not the end of it. The use of disparate data sets including social media, the speed of analysis, near-real-time deployment requirements, and the advancement of the fields of machine learning and visualization also contribute to the new data science. There really is something new going on!

In short, it's all about the difference between explaining and predicting. Data analysis has been generally used as a way of explaining some phenomenon by extracting interesting patterns from individual data sets with well-formulated queries. Data science, on the other hand, aims to discover and extract actionable knowledge from the data, that is, knowledge that can be used to make decisions and predictions, not just to explain what's going on."

I really like this differentiation. There is definitely an engineering component of modern data science. The work of data scientists ultimately becomes part of production systems; think Amazon's or Netflix's recommender systems. This aspect is relatively new – the actionable part. Many new start-ups are driven by actionable knowledge from machine learning applications. This is light-years beyond yesterday's data analysis.

The raw materials of data science are not independent data sets, no matter how large they are, but heterogeneous, unstructured data sets of all kinds – e.g., text, images, video. The data scientist will not simply analyze the data, but will look at it from many angles, with the hope of discovering new insights."

This is another huge reason why today's data science differs from what was done previously.
The variety of big data goes way beyond the data warehouse that was the common denominator a decade ago. Diversity of data has led data science in a number of new and exciting directions; think sentiment and credibility analysis algorithms.

Most of us are trained to believe theory must originate in the human mind based on prior theory, with data then gathered to demonstrate the validity of the theory. Machine learning turns this process around. Given a large trove of data, the computer taunts us by saying, If only you knew what question to ask me, I would give you some very interesting answers based on the data. Such a capability is powerful since we often do not know what question to ask . . ."

So true! Unsupervised statistical learning, coupled with the processing power to yield insightful clusters, allows us to ask new questions. In days gone by, these questions largely remained unasked.

Data scientists should also have good computer science skills – including data structures, algorithms, systems and scripting languages – as well as a good understanding of correlation, causation and related concepts which are central to modeling exercises involving data."

As I mentioned above, the marriage of statistical methods and computer science is really the crux of the new discipline of data science. It is for this reason that I believe data science is justified as a distinct field of study. Further, I see it evolving quickly, especially in the past couple of years. The next 5 years should be exciting to be a data scientist.

Like computing, one of the most exciting parts of data science is that it can be applied to many domains of knowledge.
But, doing so effectively requires domain expertise to identify the important problems to solve in a given area, the kinds of questions we should be asking and the kinds of answers we should be looking for, as well as how to best present whatever insights are discovered so they can be understood by domain practitioners in their own terms.” This declaration is very well articulated. This is what I love most about data science – it can be applied to any field. A good data scientist has experience working with domain experts to pick their brains on critical parameters of the business. Sure, a data scientist with specific knowledge of say, agriculture, would be ideal but not necessarily. We’re generally pretty quick studies! Daniel – Managing Editor, insideAI News Sign up for the free insideAI News newsletter. A very well written article that has helped me to understand the term Data science by giving simple comparison between Data analysis and Data Science. Hire a statistician when you know the question. Hire a data scientist when you don’t know the question. Data Science is closer to Computer Science and Statistics is closer to mathematics, they both deal with data so they meet in the middle. It appears that formal university training in Data Science evolves as a hybrid between Computer Science and Statistics, with a technical focus towards Big Data technologies. But keep in mind that a Data Scientist will never have the computing knowledge of a computer scientist nor the mathematical knowledge of a statistician.
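The article's point about unsupervised learning surfacing questions we did not know to ask can be made concrete with a toy sketch. Below is a minimal 1-D k-means in plain Python on hypothetical, unlabeled measurements: nobody asked "are there two groups?", yet the clustering reveals them. The data values and initial centroids are illustrative assumptions, not from the article.

```python
# Minimal 1-D k-means on hypothetical unlabeled data with two obvious groups.
data = [1.0, 1.2, 0.8, 1.1, 9.0, 9.5, 8.8, 9.2]

def kmeans_1d(points, c0, c1, iters=10):
    """Two-cluster 1-D k-means. Initial centroids are assumed to be chosen
    so that neither cluster ever becomes empty (true for the data above)."""
    for _ in range(iters):
        # Assign each point to its nearest centroid, ties going to c0.
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        # Move each centroid to the mean of its assigned points.
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    return c0, c1

low, high = kmeans_1d(data, c0=0.0, c1=10.0)
print(low, high)  # the two discovered group centers, near 1 and 9
```

The "new question" — why do the measurements split into two populations? — only exists once the structure is discovered, which is the workflow inversion the quoted passage describes.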
{"url":"https://insideainews.com/2014/05/22/data-science-vs-statistics-one/","timestamp":"2024-11-08T20:53:50Z","content_type":"application/xhtml+xml","content_length":"115936","record_id":"<urn:uuid:4271651f-77fe-4918-b3ca-368c7329907e>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00021.warc.gz"}
Geometry and Mesh

Define a geometry and discretize it using a triangular or tetrahedral mesh.

The unified finite element analysis workflow uses an fegeometry object to define a geometry. You can simply assign a geometry to the Geometry property of femodel. Define a geometry using one of these sources. The general PDE workflow, as well as the domain-specific workflows, use DiscreteGeometry and AnalyticGeometry objects. Typically, you can define these objects using the same sources as for fegeometry objects.

Mesh a geometry using the generateMesh function. The toolbox uses the finite element method (FEM) to solve PDEs. For details about meshing, see Mesh Data. For details about the components of geometries and meshes and the relationships between them, see Geometry and Mesh Components.

Creation and Visualization
importGeometry Import geometry from STL or STEP file
geometryFromMesh Create 2-D or 3-D geometry from mesh
geometryFromEdges Create 2-D geometry from decomposed geometry matrix
decsg Decompose constructive solid 2-D geometry into minimal regions
multicuboid Create geometry formed by several cubic cells
multicylinder Create geometry formed by several cylindrical cells
multisphere Create geometry formed by several spherical cells
triangulation Create triangulation object from fegeometry (Since R2023b)
pdegplot Plot PDE geometry
addCell Combine two geometries by adding one inside a cell of another (Since R2021a)
addFace Fill void regions in 2-D and split cells in 3-D geometry (Since R2020a)
addVertex Add vertex on geometry boundary
addVoid Create void regions inside 3-D geometry (Since R2021a)
extrude Vertically extrude 2-D geometry or specified faces of 3-D geometry (Since R2020b)
mergeCells Merge geometry cells (Since R2023b)
rotate Rotate geometry (Since R2020a)
scale Scale geometry (Since R2020a)
translate Translate geometry (Since R2020a)
cellEdges Find edges belonging to boundaries of specified cells (Since R2021a)
cellFaces Find faces belonging to specified cells (Since R2021a)
faceEdges Find edges belonging to specified faces (Since R2021a) facesAttachedToEdges Find faces attached to specified edges (Since R2021a) nearestEdge Find edges nearest to specified point (Since R2021a) nearestFace Find faces nearest to specified point (Since R2021a) PDE Modeler App pdecirc Draw circle in PDE Modeler app pdeellip Draw ellipse in PDE Modeler app pdepoly Draw polygon in PDE Modeler app pderect Draw rectangle in PDE Modeler app generateMesh Create triangular or tetrahedral mesh meshQuality Evaluate shape quality of mesh elements findElements Find mesh elements in specified region findNodes Find mesh nodes in specified region area Area of 2-D mesh elements volume Volume of 3-D mesh elements pdemesh Plot PDE mesh pdeplot Plot solution or mesh for 2-D problem pdeplot3D Plot solution or surface mesh for 3-D problem pdeviz Create and plot PDE visualization object (Since R2021a) Legacy Functions csgdel Delete boundaries between subdomains pdearcl Represent arc lengths as parametrized curve wgeom Write geometry function to file adaptmesh Create adaptive 2-D mesh and solve PDE initmesh Create initial 2-D mesh meshToPet [p,e,t] representation of FEMesh data jigglemesh (Not recommended) Jiggle internal points of triangular mesh refinemesh Refine triangular mesh fegeometry Geometry object for finite element analysis (Since R2023a) DiscreteGeometry Discrete 2-D or 3-D geometry description AnalyticGeometry Analytic 2-D geometry description FEMesh Mesh object PDEVisualization Properties PDE visualization of mesh and nodal results (Since R2021a) PDE Modeler Create complex 2-D geometries by drawing, overlapping, and rotating basic shapes • Mesh Data Recommended workflow uses FEMesh objects to represent meshes. • Generate Mesh Adjust a mesh by using additional arguments of the generateMesh function. • Find Mesh Elements and Nodes by Location Find mesh elements and nodes by their geometric location or proximity to a particular point or node. 
• Assess Quality of Mesh Elements Evaluate the shape quality of mesh elements.
{"url":"https://nl.mathworks.com/help/pde/geometry-and-mesh.html","timestamp":"2024-11-14T07:48:18Z","content_type":"text/html","content_length":"95073","record_id":"<urn:uuid:83fb6db0-4517-4b60-a8bc-e235e5d6dacd>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00797.warc.gz"}
About the Authors: Theory of Computing: An Open Access Electronic Journal in Theoretical Computer Science

About the Authors

Jack Murtagh is a graduate student at Harvard University, where he is advised by Salil Vadhan. As an undergraduate, Jack studied at Tufts University. Jack is broadly interested in complexity theory and currently works on derandomization and data privacy.

Salil Vadhan
Vicky Joseph Professor of Computer Science and Applied Mathematics
Harvard University, Cambridge, MA
{"url":"https://www.theoryofcomputing.org/articles/v014a008/about.html","timestamp":"2024-11-11T18:20:07Z","content_type":"text/html","content_length":"5620","record_id":"<urn:uuid:ee056078-ff33-4129-ac46-00063ca3181e>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00309.warc.gz"}
Principal Component Analysis and Regression — Lrnr_pca

This learner provides facilities for performing principal components analysis (PCA) to reduce the dimensionality of a data set to a pre-specified value. For further details, consult the documentation of prcomp from the core package stats. This learner object is primarily intended for use with other learners as part of a pre-processing pipeline.

Learner object with methods for training and prediction. See Lrnr_base for documentation on learners.

Parameters

n_comp: A numeric value indicating the number of components to be produced as a result of the PCA dimensionality reduction. For convenience, this defaults to two (2) components.

center: A logical value indicating whether the input data matrix should be centered before performing PCA. This defaults to TRUE since that is the recommended practice. Consider consulting the documentation of prcomp for details.

scale.: A logical value indicating whether the input data matrix should be scaled (to unit variance) before performing PCA. Consider consulting the documentation of prcomp for details.

...: Other optional parameters to be passed to prcomp. Consider consulting the documentation of prcomp for details.

Common Parameters

Individual learners have their own sets of parameters. Below is a list of shared parameters, implemented by Lrnr_base and shared by all learners.

covariates: A character vector of covariates. The learner will use this to subset the covariates for any specified task.

outcome_type: A variable_type object used to control the outcome_type used by the learner. Overrides the task outcome_type if specified.

All other parameters should be handled by the individual learner classes.
See the documentation for the learner class you're instantiating.

See also

Other Learners: Custom_chain, Lrnr_HarmonicReg, Lrnr_arima, Lrnr_bartMachine, Lrnr_base, Lrnr_bayesglm, Lrnr_bilstm, Lrnr_caret, Lrnr_cv_selector, Lrnr_cv, Lrnr_dbarts, Lrnr_define_interactions, Lrnr_density_discretize, Lrnr_density_hse, Lrnr_density_semiparametric, Lrnr_earth, Lrnr_expSmooth, Lrnr_gam, Lrnr_ga, Lrnr_gbm, Lrnr_glm_fast, Lrnr_glm_semiparametric, Lrnr_glmnet, Lrnr_glmtree, Lrnr_glm, Lrnr_grfcate, Lrnr_grf, Lrnr_gru_keras, Lrnr_gts, Lrnr_h2o_grid, Lrnr_hal9001, Lrnr_haldensify, Lrnr_hts, Lrnr_independent_binomial, Lrnr_lightgbm, Lrnr_lstm_keras, Lrnr_mean, Lrnr_multiple_ts, Lrnr_multivariate, Lrnr_nnet, Lrnr_nnls, Lrnr_optim, Lrnr_pkg_SuperLearner, Lrnr_polspline, Lrnr_pooled_hazards, Lrnr_randomForest, Lrnr_ranger, Lrnr_revere_task, Lrnr_rpart, Lrnr_rugarch, Lrnr_screener_augment, Lrnr_screener_coefs, Lrnr_screener_correlation, Lrnr_screener_importance, Lrnr_sl, Lrnr_solnp_density, Lrnr_solnp, Lrnr_stratified, Lrnr_subset_covariates, Lrnr_svm, Lrnr_tsDyn, Lrnr_ts_weights, Lrnr_xgboost, Pipeline, Stack, define_h2o_X(), undocumented_learner

# load example data
ncomp <- 3
covars <- c(
  "apgar1", "apgar5", "parity", "gagebrth", "mage", "meducyrs"
)
outcome <- "haz"

# create sl3 task
task <- sl3_Task$new(cpp_imputed, covariates = covars, outcome = outcome)

# define learners
glm_fast <- Lrnr_glm_fast$new(intercept = FALSE)
pca_sl3 <- Lrnr_pca$new(n_comp = ncomp, center = TRUE, scale. = TRUE)
pcr_pipe_sl3 <- Pipeline$new(pca_sl3, glm_fast)

# create stacks + train and predict
pcr_pipe_sl3_fit <- pcr_pipe_sl3$train(task)
pcr_pred <- pcr_pipe_sl3_fit$predict()
{"url":"https://tlverse.org/sl3/reference/Lrnr_pca.html","timestamp":"2024-11-03T22:03:45Z","content_type":"text/html","content_length":"17705","record_id":"<urn:uuid:1514a185-7357-4a11-a7d2-6ee05753e32b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00559.warc.gz"}
Topics: Types of Fiber Bundles

Differentiable Fibre Bundles
$ Def: A fiber bundle (B, E, G, F, π), where B, E, G, F are differentiable manifolds, π is a differentiable mapping, the covering {U[j]} of B is an admissible atlas, and the transition functions g[jk] are differentiable.

Trivial Fiber Bundles
* Triviality Criteria:
- P(E) trivial iff P(E) admits a cross-section;
- E trivial iff the transition functions can be written as g[ij] = λ[i](x) λ[j]^−1(x);
- P(E) trivial implies E trivial;
- B contractible implies E trivial;
- F contractible implies E has a cross-section;
- G contractible implies E trivial.
* Results: All SU(2) bundles over 3-manifolds are trivial.

Vector Bundles > s.a. Jet Bundles; tangent bundles.
* Idea: A topological space E, a continuous projection π: E → B, and a vector space (over a field \(\mathbb K\)) structure on each fiber π^−1(x), with local triviality, i.e., a fiber bundle with F = \(\mathbb K\)^n and G = GL(n, \(\mathbb K\)).
@ References: in Milnor & Stasheff 74, ch 2–3.
> Online resources: see MathWorld page; Wikipedia page.

Tensor Bundles
> Online resources: see Encyclopedia of Mathematics page.

Other Fiber Bundles and Additional Structure > s.a. curvature; Hopf Fibration; Jet; principal fiber bundle; sheaf; Universal Bundle.
* Triviality criteria: A rank-n vector bundle is trivial iff it admits n nowhere-dependent (everywhere linearly independent) cross-sections.
@ General references: Trautman RPMP(76) [classification, and use in physics]; Crowley & Escher DG&A(03) [S^3-bundles over S^4]; Lerman JGP(04) [contact fiber bundles].
@ Generalizations: Manton CMP(87) [discrete bundles]; Brzeziński & Majid CMP(98) [coalgebra bundles]; Vacaru & Vicol IJMMS(04)m.DG [higher-order, and Finsler]; Bruce et al a1605-proc [graded

send feedback and suggestions to bombelli at olemiss.edu – modified 12 may 2016
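The coboundary criterion in the triviality list can be spelled out explicitly; the short sketch below is standard cocycle algebra, not taken verbatim from this page:

```latex
% Transition functions of a fiber bundle satisfy the cocycle condition:
g_{ij}(x)\,g_{jk}(x) = g_{ik}(x), \qquad x \in U_i \cap U_j \cap U_k .
% E is trivial iff this cocycle is a coboundary, i.e. there exist maps
% \lambda_i : U_i \to G with
g_{ij}(x) = \lambda_i(x)\,\lambda_j(x)^{-1} .
% Consistency check: any coboundary automatically satisfies the cocycle
% condition, since
\lambda_i(x)\,\lambda_j(x)^{-1}\cdot\lambda_j(x)\,\lambda_k(x)^{-1}
  = \lambda_i(x)\,\lambda_k(x)^{-1} .
```

In this form the criterion says the bundle is trivial exactly when each local trivialization can be adjusted fiberwise, by the group elements λ[i](x), so that all of them glue to a single global trivialization.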
NB/T 10613-2021 PDF in English
NB/T 10613-2021 (NB/T10613-2021, NBT 10613-2021, NBT10613-2021)

Standard ID: NB/T 10613-2021 (English version)
Name of Chinese Standard: Technical specification of power quality measurement and evaluation for electric vehicle battery charging/swap station
Status: Valid
Price: USD 335; PDF delivered in 0-9 seconds (auto-delivery)

Standards related to (historical): NB/T 10613-2021

NB/T 10613-2021: PDF in English (NBT 10613-2021)

NB/T 10613-2021
NB
ENERGY INDUSTRY STANDARD OF THE PEOPLE'S REPUBLIC OF CHINA
ICS 29.020
CCS K 04

Technical specification of power quality measurement and evaluation for electric vehicle battery charging/swap station

ISSUED ON: APRIL 26, 2021
IMPLEMENTED ON: JULY 26, 2021
Issued by: National Energy Administration

Table of Contents
Foreword ... 4
1 Scope ... 5
2 Normative references ... 5
3 Terms and definitions ... 6
4 Measurement items ... 8
5 Measurement methods and requirements ... 8
5.1 Selection of measurement points ... 8
5.2 Measurement equipment requirements ... 9
5.3 Measurement duration and measurement conditions ... 9
5.4 Data record ... 10
5.5 Measurement methods ... 10
6 Measurement result evaluation ... 10
6.1 Supply voltage deviation ... 10
6.2 Harmonics ... 11
6.3 Inter-harmonics ... 11
6.4 Three-phase unbalance ... 11
6.5 Voltage flicker ... 11
6.6 Rapid voltage changes ... 11
6.7 Power factor ... 11
6.8 Comprehensive index evaluation ... 12
Annex A (Informative) Schematic diagram of the connection of a typical electric vehicle charging station or swap station to the power grid ... 13
Annex B (Normative) Measurement method for rapid voltage change ... 15
Annex C (Informative) A brief introduction to the influence of electric vehicle charging on power quality of power supply points and the countermeasures when it exceeds the standard ...
17
Annex D (Informative) Example of power quality comprehensive index evaluation ... 20
Bibliography ... 27

Technical specification of power quality measurement and evaluation for electric vehicle battery charging/swap station

1 Scope
This Standard specifies the power quality measurement items, measurement methods, and measurement result evaluation requirements for electric vehicle battery charging/swap stations.
This Standard is applicable to the power quality measurement and evaluation of electric vehicle battery charging/swap stations powered by dedicated power grids of 10 kV and above. Electric vehicle charging stations or swap stations powered at other voltage levels, or not powered by the public grid, may use this Standard as a reference.

2 Normative references
The following referenced documents are indispensable for the application of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.
GB/T 12325-2008, Power quality - Deviation of supply voltage
GB/T 12326-2008, Power quality - Voltage fluctuation and flicker
GB/T 14549-1993, Quality of electric energy supply -
Harmonics in public supply network
GB/T 15543-2008, Power quality - Three-phase voltage unbalance
GB/T 17626.30, Electromagnetic compatibility - Testing and measurement techniques - Power quality measurement methods
GB/T 19862-2016, General requirements for monitoring equipment of power quality
GB/T 24337-2009, Power quality - Inter-harmonics in public supply network
GB/T 29316-2012, Power quality requirements for electric vehicle charging/battery swap infrastructure
GB/T 29317-2012, Terminology of electric vehicle charging/battery swap infrastructure
GB 50966-2014, Code for design of electric vehicle charging station

3 Terms and definitions
For the purposes of this document, the terms and definitions defined in GB/T 29317-2012 and the following apply.

3.1 electric vehicle (EV) battery charging station
a place that provides charging services for electric vehicles and consists of three or more items of electric vehicle charging equipment
[Source: GB/T 29781-2013, 3.4, modified]

3.2 EV battery swap station
a place that provides battery replacement services for electric vehicles and charges power batteries
[Source: GB/T 29317-2012, 5.2, modified]

3.3 rapid voltage change; RVC
the phenomenon of rapid transition of the voltage rms value between two voltage steady states
NOTE 1: The characteristic indexes of a rapid voltage change event include the start time, the end time (duration), the maximum voltage change ΔUmax, and the steady-state voltage change ΔUss.
NOTE 2: The rapid voltage changes described in this document are limited to voltage changes under steady-state conditions and do not involve voltage changes under transient conditions.
NOTE 3: The voltage steady state is related to the rapid voltage change threshold.
3.4 voltage steady-state
a state in which the average of 100 consecutive half-cycle voltage rms values, slid by half-cycle time intervals, does not change after sliding by more than the rapid voltage change threshold range relative to the average before sliding

......

... power grid. The measurement duration shall not be less than 24 h.
NOTE: The measurement duration can be adjusted according to the change cycle of the actual load of the charging station or swap station.

5.4 Data record
The measurement data and their recording intervals are as follows:
a) For supply voltage deviation, harmonics, unbalance, and power factor, the data recording time intervals include 1 min, 3 min, 5 min or 10 min; 1 min is advisable;
b) The long-term voltage flicker value shall be recorded continuously, storing a set of data every 2 h;
c) For captured rapid voltage change events, the characteristic values are recorded, including the start time, duration, maximum voltage change ΔUmax and steady-state voltage change ΔUss.

5.5 Measurement methods
5.5.1 The measurement methods for supply voltage deviation, harmonics, inter-harmonics, three-phase unbalance and voltage flicker shall comply with the requirements of GB/T 12325-2008, GB/T 14549-1993, GB/T 24337-2009, GB/T 15543-2008 and GB/T 12326-2008, respectively.
5.5.2 The rapid voltage change measurement method is carried out in accordance with Annex B.
5.5.3 The power factor measurement method is as follows:
a) For the voltage and current measurement methods in the power factor measurement process, see the supply voltage class A measurement method in GB/T 17626.30;
b) Calculate active power, reactive power and power factor simultaneously.

6 Measurement result evaluation
6.1 Supply voltage deviation
Give the maximum values of the positive and negative voltage deviations. Evaluate whether the measurement results of supply voltage deviation meet the limit requirements of GB/T 12325-2008.
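The steady-state definition in 3.4 lends itself to a direct numerical sketch. The snippet below is illustrative only — the 3% threshold, the sample data, and all names are assumptions, not values from the standard. It slides a 100-half-cycle average one half cycle at a time and flags a rapid voltage change whenever the average leaves the threshold band of the current steady state:

```python
import numpy as np

def rvc_events(half_cycle_rms, threshold=0.03):
    """Flag rapid voltage changes using the 3.4 sliding-average rule (sketch).

    half_cycle_rms: consecutive half-cycle voltage rms values.
    threshold: relative RVC threshold (3% assumed for illustration).
    """
    x = np.asarray(half_cycle_rms, dtype=float)
    # average of 100 consecutive half-cycle rms values, slid by one half cycle
    avg = np.convolve(x, np.ones(100) / 100.0, mode="valid")
    events, ref = [], avg[0]          # ref = steady-state average before sliding
    for i in range(1, len(avg)):
        if abs(avg[i] - ref) / ref > threshold:
            events.append(i)          # average left the steady-state band: an RVC
            ref = avg[i]              # a new steady state begins here
    return events

# 230 V steady state followed by a 5% step change: exactly one RVC is flagged
u = np.concatenate([np.full(300, 230.0), np.full(300, 218.5)])
print(len(rvc_events(u)))  # -> 1
```

A recorder built on this rule would log the event's start time, duration, ΔUmax and ΔUss per item c) of 5.4; the sketch only shows the band-exit detection itself.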
6.2 Harmonics
Give the total harmonic voltage distortion rate, the 2nd~50th harmonic voltage content rates, and the 95% probability maximum values of the 2nd~50th harmonic currents. Evaluate whether the harmonic voltage content rates and the measured harmonic currents injected into the power supply point meet the limit requirements of GB/T 14549-1993.

6.3 Inter-harmonics
Give the 95% probability maximum value of the inter-harmonic voltage content rate in the frequency range of 0 Hz~800 Hz. Evaluate whether it meets the limit requirements of GB/T 24337-2009.

6.4 Three-phase unbalance
Give the three-phase voltage unbalance, and the 95% probability maximum value and maximum value of the negative sequence current. Evaluate whether the three-phase voltage unbalance and the negative sequence current injected into the power supply point meet the limit requirements of GB/T 15543-2008.

6.5 Voltage flicker
Give the maximum value of long-term voltage flicker. Evaluate whether the voltage flicker measurement results meet the limit requirements of GB/T 12326-2008.

6.6 Rapid voltage changes
If a rapid voltage change event is captured, give the characteristic indexes of the event and analyze their correlation with changes in the charging load.

6.7 Power factor
Give the power factor at peak charging load. Evaluate whether it meets the class A equipment limit of 0.95 specified in GB/T 29316-2012 or the 0.95 specified in GB 50966-2014.

Annex C (Informative) A brief introduction to the influence of electric vehicle charging on power quality of power supply points and the countermeasures when it exceeds the standard

C.1 Overview
The load of an electric vehicle charging/swap station varies randomly and diversely.
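The "95% probability maximum value" used throughout Clause 6 is, in practice, the 95th-percentile value of the recorded measurement series — the value not exceeded during 95% of the measurement time. A minimal sketch with synthetic data (the data, the 4% limit and all names are illustrative assumptions, not values from the standard):

```python
import numpy as np

def prob95_max(samples):
    """95% probability maximum value: exceeded during only 5% of the record."""
    return float(np.percentile(samples, 95))

def thd(harmonic_rms, fundamental_rms):
    """Total harmonic distortion rate from the 2nd..50th harmonic rms values."""
    h = np.asarray(harmonic_rms, dtype=float)
    return float(np.sqrt(np.sum(h ** 2)) / fundamental_rms)

# a week of 1-min harmonic voltage content rates, in percent (synthetic)
rng = np.random.default_rng(1)
readings = rng.gamma(shape=2.0, scale=0.5, size=7 * 24 * 60)
u95 = prob95_max(readings)

# compare the 95% probability value against an assumed limit value
limit = 4.0
print(u95 <= limit)
```

Using the 95% probability value rather than the absolute maximum is what makes the evaluation robust to isolated transients in a week-long record.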
It is closely related to the number of charging vehicles in the region, the type of charging vehicles (residential vehicles or commercial vehicles, etc.), the charging cycle (working days or holidays), and other factors. Therefore, different types of charging facilities and basic power supply facilities shall be built according to different charging needs. Reasonably matching different charging strategies can reduce the impact of the charging process on the grid's power supply point and promote the coordinated development of electric vehicles and the power grid.

C.2 Impact of electric vehicle charging on the power supply point
The influence of the electric vehicle charging load on the power supply point is mainly reflected in an increased load rate on the upstream line and transformer at the power supply point, and in power quality exceeding the standard. The increased load rate of lines and transformers increases distribution network losses and indirectly degrades power quality indexes. The nonlinearity, impact and uncertainty of the electric vehicle charging load can easily cause power quality indexes such as supply voltage and harmonics to exceed the standard, leading to problems such as harmonic voltage oscillation and abrupt voltage changes that affect the power quality of other users in the surrounding area.
To improve the quality of electric vehicle charging power and ensure the coordinated development of electric vehicles and power grids, work must start with infrastructure, such as choosing the optimal power supply point and increasing the transformer and line supply capacity at the power supply point. It is also necessary to apply power quality control methods and take measures to reduce the power quality impact.
C.3 Countermeasures for power quality exceeding the standard
C.3.1 Supply voltage deviation
Suggested countermeasures when the supply voltage deviation exceeds the standard:
a) If the operating supply voltage deviation of the charging station or swap station does not meet the national standard limit requirements, and the background supply voltage deviation does not exceed the standard, it is advisable to propose measures that alleviate the over-standard voltage deviation in combination with the change in charging power, especially the change in reactive power (see B.3.6);
b) If the operating supply voltage deviation of the charging station or swap station does not meet the national standard limit requirements, and the background grid supply voltage deviation also exceeds the standard, voltage control measures shall be taken. If necessary, the power supply lines or transformers shall be expanded and upgraded to reduce the impact.

C.3.2 Harmonics and inter-harmonics
Suggested countermeasures when harmonics and inter-harmonics exceed the standard:
a) If the harmonic voltage, inter-harmonic voltage or injected harmonic current exceeds the standard while the background grid harmonic or inter-harmonic voltage does not, corresponding harmonic control measures shall be taken at the charging/swap station, or an orderly charging strategy shall be adopted based on the relationship between the harmonics and the charging power trend;
b) If the harmonic voltage, inter-harmonic voltage or injected harmonic current exceeds the standard and the background grid harmonic or inter-harmonic voltage also exceeds the standard, the causes shall be comprehensively analyzed and corresponding remedial measures taken.
C.3.3 Three-phase unbalance
If the three-phase voltage unbalance and the negative sequence current injected into the power supply point exceed the standard, the charging and auxiliary power loads in the charging/swap station shall be reasonably arranged according to the balance principle, or corresponding control measures shall be taken.

C.3.4 Voltage flicker
If the voltage flicker exceeds the standard, corresponding voltage control measures shall be taken.

......

a) UPQI (avg) represents the normalized average value of each project index at the measurement point.
b) UPQI (max) represents the normalized maximum value of each project index at the measurement point.
c) UPQI (node) represents the unified power quality index value of the measurement point. If the normalized project indexes at the measurement point (harmonics, voltage flicker, etc.) are all less than 1, the value is the maximum of the normalized index values. Otherwise, the value is 1 plus the accumulated excess (each over-standard index value minus 1).
d) UPQI (system) represents the unified power quality index value of a charging/swap station system, aggregated in the same way as for a single measurement point: if the UPQI (node) value of every measurement point is less than 1, the value is the maximum UPQI (node) among the measurement points. Otherwise, the value is 1 plus the accumulated excess (each over-standard UPQI (node) value minus 1).
e) UPQI (system/avg) represents the improved unified power quality index value (referred to as the improved value). If the indexes of every point are less than 1, UPQI (system/avg) is the maximum UPQI (node) value among the measurement points. Otherwise, UPQI (system/avg) is obtained by averaging the sum of the excesses (each over-standard UPQI (node) value minus 1) over the number of nodes.
f) The power quality comprehensive evaluation index results of system 1 are shown in Table D.5. The UPQI (avg) results are all qualified, so the conclusion they give for the measurement is wrong and unscientific. The UPQI (max) index value does not affect the measurement conclusion, but it is the same for measurement point 2 and measurement point 3, even though measurement point 2 has two indexes exceeding the standard and measurement point 3 has only one; this shows that the UPQI (max) index is also unreasonable. The UPQI (node) index value accounts for the combined harm of the different indexes without affecting the measurement conclusion and has good applicability, so the UPQI (node) index shall be used as the comprehensive evaluation index of single measurement point power quality.

Table D.5 -- Evaluation results of comprehensive indexes of system 1 power quality
......

Source: Above contents are excerpted from the PDF -- translated/reviewed by: www.chinesestandard.net / Wayne Zheng et al.
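The aggregation rules described in items c) and d) above can be stated compactly in code; the sketch below is an illustration of the stated rule (the function and variable names are assumptions, not the standard's notation):

```python
def upqi(normalized_values):
    """Aggregate normalized power quality indexes into one UPQI value.

    Rule from items c)/d): if every normalized index is below 1, take the
    maximum; otherwise take 1 plus the accumulated excess (value - 1) of
    every index that exceeds 1.
    """
    vals = list(normalized_values)
    if all(v < 1.0 for v in vals):
        return max(vals)
    return 1.0 + sum(v - 1.0 for v in vals if v > 1.0)

# single measurement point, all indexes within limits
print(round(upqi([0.4, 0.7, 0.9]), 6))   # -> 0.9
# point with indexes 20% and 5% over the limit
print(round(upqi([0.8, 1.2, 1.05]), 6))  # -> 1.25
# UPQI (system): aggregate the per-point UPQI (node) values the same way
nodes = [upqi([0.4, 0.7]), upqi([0.8, 1.2, 1.05]), upqi([0.9, 0.95])]
print(round(upqi(nodes), 6))             # -> 1.25
```

Because any value at or above 1 switches the rule from "take the maximum" to "accumulate excesses", a single over-limit index always pushes the aggregate above 1, which is what lets UPQI (node) distinguish point 2 (two exceedances) from point 3 (one) where UPQI (max) cannot.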