Excel DB Function

The Excel DB Function

Related Function: DDB Function

When calculating the depreciation of an asset, several different calculations can be used. One of the most popular methods is the Fixed Declining-Balance Method, in which, for each period of an asset's useful lifetime, the asset's value at the start of the period is reduced by a fixed percentage. The Excel DB function uses the Fixed Declining-Balance Method, with the following constant percentage reduction per period (rounded to 3 decimal places):

rate = 1 - (Salvage / Cost)^(1 / Life)

where:
• Salvage is the final value of the asset at the end of its lifetime
• Cost is the initial cost of the asset
• Life is the number of periods over which the depreciation occurs

Basic Description

The Excel DB function calculates the depreciation of an asset, using the Fixed Declining-Balance Method, for each period of the asset's lifetime. The format of the function is:

DB( cost, salvage, life, period, [month] )

where the arguments are as shown in the table below:

cost - The initial cost of the asset
salvage - The value of the asset at the end of the depreciation
life - The number of periods over which the asset is to be depreciated
period - The period number for which we want to calculate the depreciation
[month] - An optional argument used to specify a partial year for the first period of depreciation. If supplied, this should be an integer specifying how many months of the year are used in the calculation of the first period of depreciation. The number of months in the last period of depreciation is then 12 - [month].

Excel DB Function Example 1

In the example below, the DB function is used to find the yearly depreciation of an asset that cost $10,000 at the start of year 1 and has a salvage value of $1,000 after 5 years.
Note that, in this example, the yearly rate of depreciation, calculated from the equation 1 - (Salvage/Cost)^(1/Life), is 36.9%.

Excel DB Function Example 2

In the example below, the DB function is used with the same cost, salvage and life argument values as in Example 1 above. However, in this example, the depreciation calculation starts 6 months into year 1. As the cost, salvage and life arguments are the same as in the previous example, the yearly rate of depreciation is again 36.9%.

Further examples of the Excel DB function can be found on the Microsoft Office website.

DB Function Common Errors

If you get an error from the Excel DB function, it is likely to be one of the following:

#NUM! - Occurs if either:
  - the supplied cost or salvage argument is < 0
  - the supplied life or period argument is ≤ 0
  - the supplied [month] argument is ≤ 0 or > 12
  - period > life (and the [month] argument is omitted)
  - period > life + 1 (and the [month] argument is supplied and is < 12)

#VALUE! - Occurs if any of the supplied arguments are not numeric values
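The fixed declining-balance calculation described above can be sketched in Python. This is an illustrative reimplementation based on the article's description, not Excel's own code; note the 3-decimal rounding of the rate, which the article says Excel applies:

```python
def db(cost, salvage, life, period, month=12):
    # Fixed declining-balance rate, rounded to 3 decimal places
    # (as the article notes Excel does).
    rate = round(1 - (salvage / cost) ** (1 / life), 3)
    dep = cost * rate * month / 12          # first (possibly partial) period
    if period == 1:
        return dep
    total = dep
    for p in range(2, period + 1):
        if p == life + 1:                   # trailing partial period when month < 12
            dep = (cost - total) * rate * (12 - month) / 12
        else:
            dep = (cost - total) * rate
        total += dep
    return dep
```

With the Example 1 inputs ($10,000 cost, $1,000 salvage, 5-year life), the rate comes out as 0.369, so the first year's depreciation is 10000 × 0.369 = 3690.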
{"url":"http://www.excelfunctions.net/Excel-Db-Function.html","timestamp":"2014-04-18T08:04:12Z","content_type":null,"content_length":"18730","record_id":"<urn:uuid:fcadcb48-f438-4664-8897-83730b484b1f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Project Prioritization Mathematics Yes, there is mathematical theory for prioritizing projects. One result is the ranking theorem: If independent projects are ranked based on the ratio of benefit-to-cost, and selected from the top down until the budget is exhausted, the resulting project portfolio will create the greatest possible value (ignoring the error introduced if the portfolio doesn't consume the entire budget). This solution is useful because it clarifies key information needed to optimize project decisions: (1) the cost of each candidate project, and (2) dollar worth of the benefits to be derived if the project is conducted. There are, as well, useful theories and methods for quantifying project benefits (for example, AHP, real options, decision analysis, and multi-attribute utility analysis). In addition, there are practical and effective methods for measuring and accounting for risk. Finally, there are mathematical methods more accurate than the ranking theorem that allow you to, among other things, optimally select from alternative project versions (e.g., different project funding levels), account for people and other resource limitations, compare projects that return benefits over different time periods, and make choices that achieve specified performance targets. To learn the details, read my paper Mathematical Theory for Prioritizing Projects.
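One simple reading of the ranking theorem can be sketched as a greedy selection; this is my illustration of the idea, not code from the paper, and the project data is made up:

```python
def prioritize(projects, budget):
    """Greedy selection by benefit-to-cost ratio (the 'ranking theorem'):
    rank projects by benefit/cost and take them from the top down
    while they fit within the budget.
    projects: list of (name, cost, benefit) tuples."""
    ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
    chosen, spent = [], 0.0
    for name, cost, benefit in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen
```

As the paper notes, this is only optimal up to the error introduced when the portfolio does not consume the entire budget.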
{"url":"http://www.prioritysystem.com/math.html","timestamp":"2014-04-18T10:34:07Z","content_type":null,"content_length":"3555","record_id":"<urn:uuid:513351df-50f2-4bdd-9227-82e8a642ed22>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
24 Days of Hackage: data-memocombinators

Today we look at a tiny little library that packs an awfully strong punch for its size. That library is data-memocombinators, maintained by Luke Palmer and Dmitry Malikov. data-memocombinators is one of a handful of memoization libraries available for Haskell, and to me, stands out for its simplicity. Before we go into the library, let's recap what it means for a function to be memoized, and why we would need a library for this in Haskell. Memoization is a technique for trading run-time space for execution time, and is often used to improve the performance of heavily recursive definitions, in order to save recomputing the same information over and over again. One of the canonical examples of memoization is the Fibonacci series. Naively, one might write this as:

    fib :: Int -> Int
    fib 0 = 1
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

However, to compute fib 4 we have to compute fib 3 and fib 2. To compute fib 3 we also need to compute fib 2 and fib 1. Thus we actually need to compute fib 2 twice - once for fib 4 (n - 2) and once for fib 3 (n - 1). There is already plenty of material on this, so I'll leave further research for you to do, if you've not already encountered this. Of interest to the working Haskell programmer is the question: why do I care? Surely the calculation fib 2 is the same every time - after all, fib :: Int -> Int, and due to referential transparency, the result of this computation applied to the same argument must be constant. Why can't our compiler help us out and prevent duplicate work?

The answer is due to how terms are evaluated. When we assign a value to a name, we usually store an unevaluated thunk that, when forced, will finally produce the value under that computation. Once forced, the binding now points to the final value, and not the computation. Thus, subsequent accesses will incur almost no cost at all. This is why in let y = 4 * 4 in y + y, we would only calculate 4 * 4 once.
Once these bindings go out of scope, the garbage collector is free to come along and remove all of this work. Now we are in a position to understand the poor performance of our fib definition. In fib, we never bound recursive calls to names, and so there is no ability to share information between calls. Therefore, there is no ability to share the work, so we end up creating new thunks every time we recurse. Confused? Don't worry, I was too - it's a tricky topic. Thankfully, you're not the first person to be confused, and there is some great information on Stack Overflow discussing this problem.

While it is often possible to reformulate code to achieve sharing, sometimes it's just more convenient to stick some explicit memoization over the top, and that's what data-memocombinators is all about. data-memocombinators provides a few combinators that transform a function into an equivalent function that uses its first argument as the memoization key. There are no data structures that you as a user have to worry about; this is all taken care of behind the scenes. Thus the code is exactly as expressive as before - you are not restricted to working under newtypes or passing around explicit caches. Remarkably, data-memocombinators is entirely pure - which at first glance seems impossible. After all, isn't memoization about mutating some sort of shared store of data? With lazy evaluation available, we'll see that it's possible to get both memoization and purity. For example, we can add memoization to our fib example from earlier by use of the integral combinator. This is the same example from the data-memocombinators documentation, but I'll reproduce it here to discuss how it works:

    fibFast :: Int -> Int
    fibFast = Memo.integral fib'
      where
        fib' 0 = 1
        fib' 1 = 1
        fib' n = fibFast (n - 1) + fibFast (n - 2)

The type of our fib-like function has not changed, but we've moved the work out into a locally defined fib' function.
Notice the relation between fibFast and fib': when we call fibFast we first check (indirectly) if this value has already been computed, and if not, we use fib' to perform the work. fib' then calls fibFast with a smaller value, and the cycle repeats. Now we've got the sharing that was mentioned earlier, and we only have to pay for the cost of computing fib n once (for each n). But how on earth does this all work? data-memocombinators works by building an infinitely large trie. Each value in the trie can be thought of as a thunk to compute that value. Building up the thunks is almost free, and it's only when we look up the value for a specific key (function argument) that we pay for the work. After that, we still have a binding to that value, and that's why subsequent accesses don't require the work to be calculated again.

Now, while data-memocombinators has a simple interface that is powerful to use, it sadly only goes so far. For example, there is no general memoization routine for types where all we have is an Eq instance. This seems like it should be possible, but unfortunately we are given no such combinator. However, data-memocombinators does have some points for extension. To consider one example, if you are able to define a Bits instance on the argument type, then you can use the bits combinator to build a memoization routine. When you do need to start working with more arbitrary types, you have a few options. If you want to stay in data-memocombinators, you may be able to form an isomorphism between your type and something that is easily memoized - for example, an isomorphism to a unique Integer. The other option is to bite the bullet and use something more powerful, like working inside a Map - here monad-memo may be of more help. I'll finish by saying that not only is data-memocombinators a very powerful library, it's a fantastic starting point for exploring some programming techniques that truly shine in Haskell.
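For readers coming from outside Haskell, the same space-for-time trade can be sketched in Python. This is only an analogue of the fibFast example above - data-memocombinators itself memoizes via a lazy trie, not a mutable table:

```python
from functools import lru_cache

# Cache results keyed on the argument, so each fib value is
# computed at most once -- the same sharing that Memo.integral
# arranges for fibFast above.
@lru_cache(maxsize=None)
def fib_fast(n):
    if n < 2:
        return 1
    return fib_fast(n - 1) + fib_fast(n - 2)
```

As in the Haskell version, the type (and call site) of the function is unchanged; only the wrapping differs.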
The underlying “trick” in data-memocombinators is really due to the idea of sharing in lazy evaluation. data-memocombinators is a tiny amount of code, and I highly encourage you to dig into the guts of this library and teach yourself how it all works. I guarantee, even if you don’t have a need to use data-memocombinators, you’ll come away feeling a little more enlightened. You can contact me via email at ollie@ocharles.org.uk or tweet to me @acid2. I share almost all of my work at GitHub. This post is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. I accept Bitcoin donations: 14SsYeM3dmcUxj3cLz7JBQnhNdhg7dUiJn
{"url":"http://ocharles.org.uk/blog/posts/2013-12-08-24-days-of-hackage-data-memocombinators.html","timestamp":"2014-04-16T04:40:31Z","content_type":null,"content_length":"12551","record_id":"<urn:uuid:026dd1b2-4719-4f55-b98d-1a880f10c66e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
CS U290: Logic and Computation

Spring 2008

[Main Page] [Lectures] [Assignments]

CS U290 is a 4-credit course. The Office of the Registrar has useful information.

Course Description

Introduces formal logic and its connections to computer and information science. Offers an opportunity to learn to translate statements about the behavior of computer programs into logical claims and to gain the ability to prove such assertions both by hand and using automated tools. Considers approaches to proving termination, correctness, and safety for programs. Discusses notations used in logic, propositional and first-order logic, logical inference, mathematical induction, and structural induction. Introduces the use of logic for modeling the range of artifacts and phenomena that arise in computer and information science.

We will use the following textbook:

• Computer-Aided Reasoning: An Approach. Matt Kaufmann, Panagiotis Manolios, and J Strother Moore. Kluwer Academic Publishers, June 2000. (ISBN: 0-7923-7744-3)

Note: An updated paperback version is available on the Web. This is much cheaper than the hardcover version.

Tentative Syllabus

Here is an overview of the material that we expect to cover. We reserve the right to make modifications.

1. The ACL2 programming language
   □ Data types
   □ Primitive functions
   □ Defining functions
   □ Common recursions
   □ Tail recursion
   □ Multiple values
   □ Mutual recursion
   □ Macros
   □ Assertions and testing
2. The ACL2 logic
   □ Quantifier-free first-order logic
   □ Axioms of ACL2
   □ Equational reasoning
   □ Recursive definitions and the definitional principle
   □ Induction
   □ Quantification
   □ Termination
   □ Gödel's completeness theorem
3. Mechanization of ACL2
   □ Organization of ACL2
   □ Simplification
   □ Decision procedures
   □ Proof techniques
   □ The method
   □ Inspecting failed proofs
   □ Proof strategies and modularity
4. Applications
   □ Data structures
   □ Logic design
   □ Compilers
   □ Video games
   □ ...
{"url":"http://www.ccs.neu.edu/home/pete/courses/Logic-and-Computation/2008-Spring/syllabus.html","timestamp":"2014-04-16T19:02:32Z","content_type":null,"content_length":"3242","record_id":"<urn:uuid:d4057fff-b5e7-4f3e-8ff2-6b4d0ebea28e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Sunrise Industries wishes to accumulate funds to provide a retirement annuity for its vice president of research, Jill Moran. Ms. Moran, by contract, will retire at the end of exactly 12 years. Upon retirement she is entitled to receive an annual end-of-year payment of $42,000 for exactly 20 years. If she dies prior to the end of the 20-year period, the annual payments will pass to her heirs. During the 12-year "accumulation period", Sunrise wishes to fund the annuity by making equal, annual, end-of-year deposits into an account earning 9% interest. Once the 20-year "distribution period" begins, Sunrise plans to move the accumulated monies into an account earning a guaranteed 12% per year. At the end of the distribution period, the account balance will equal zero. Note that the first deposit will be made at the end of year 1 and that the first distribution payment will be received at the end of year 13.

To do:

a. Draw a time line depicting all the cash flows associated with Sunrise's view of the retirement annuity.

b. How large a sum must Sunrise accumulate by the end of year 12 to provide the 20-year, $42,000 annuity?

c. How large must Sunrise's equal, annual, end-of-year deposits into the account be over the 12-year accumulation period to fully fund Ms. Moran's retirement annuity?

d. How much would Sunrise have to deposit annually during the accumulation period if it could earn 10% rather than 9% during the accumulation period?

e. How much would Sunrise have to deposit annually during the accumulation period if Ms. Moran's retirement annuity were a perpetuity and all other terms were the same as initially described?
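Parts (b), (c) and (e) reduce to standard time-value-of-money formulas; the following sketch is my own illustration, not a posted answer (part (d) is the same computation with the rate swapped to 10%):

```python
def pv_annuity(pmt, rate, n):
    """Present value of an ordinary (end-of-period) annuity."""
    return pmt * (1 - (1 + rate) ** -n) / rate

def deposit_for_fv(target, rate, n):
    """Equal end-of-year deposit that grows to `target` after n years."""
    return target * rate / ((1 + rate) ** n - 1)

# (b) lump sum needed at end of year 12: PV of $42,000/yr for 20 yrs at 12%
needed = pv_annuity(42000, 0.12, 20)
# (c) annual deposit at 9% over the 12-year accumulation period
annual = deposit_for_fv(needed, 0.09, 12)
# (e) perpetuity case: PV = payment / rate
needed_perp = 42000 / 0.12
```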
{"url":"http://expresshelpline.com/question.php?qid=9929","timestamp":"2014-04-17T01:38:05Z","content_type":null,"content_length":"16986","record_id":"<urn:uuid:e73442c0-3adb-43fc-8cbc-1bc38c61e640>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
PLZ HELP!! Which of the following statements is NOT a good definition?

An angle bisector is a ray that divides an angle into two congruent angles.
Two lines are parallel if and only if they are coplanar and do not intersect.
A segment is a part of a line.
An angle is a right angle if it measures exactly 90 degrees.
{"url":"http://openstudy.com/updates/506c6e04e4b060a360fe7f7d","timestamp":"2014-04-20T01:02:12Z","content_type":null,"content_length":"69093","record_id":"<urn:uuid:39b16a13-1869-456a-9d02-ef6c6ec7eb1a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
working with closed covers and finite sets

October 16th 2012, 06:07 PM  #1  (joined Sep 2012)

working with closed covers and finite sets

I am not sure how to proceed with this proof. Say that a set K in a metric space is 'Tompac' if every closed cover of K has a finite subcover. Show that such a K must be a finite set. (Hint: any singleton set in a metric space is ...) I know that any singleton set in a metric space is closed. My thinking on this is that a closed set contains the set and its limit points, so if the set has a closed subcover, say B[p, epsilon], then the set must be a subset of the cover, but I am not sure how to show the set must be finite.

October 16th 2012, 06:53 PM  #2  (Super Member, Sep 2012, Washington DC USA)

Re: working with closed covers and finite sets

With that observation, the problem is now actually easier than you suspect. It's now just a matter of seeing the right "bad" closed cover of K that will lead you to the conclusion. With that observation, think of a really bad, meaning really big, closed cover of K. Try to make the cover as big as possible - meaning containing as many closed sets as possible. Do so in a way that has as little overlap as possible. OK - now that cover you just produced has a *finite* subcover. Some finite number of those closed sets in your cover are sufficient to still cover K. What does that tell you about K?

October 16th 2012, 07:17 PM  #3  (joined Sep 2012)

Re: working with closed covers and finite sets

So let each x in M in the metric space (M, d) have a closed cover G; then the union of all the covers will be big enough to cover K, and then any finite subcover that covers K must imply that K is finite.

October 16th 2012, 07:35 PM  #4  (Super Member, Sep 2012, Washington DC USA)

Re: working with closed covers and finite sets

I didn't understand that. Are you describing a particular cover? If so, can you describe your particular cover exactly?
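For reference, the cover the replies are hinting at can be written down explicitly; this is my completion of the argument, not part of the thread:

```latex
% Every singleton \{x\} is closed in a metric space, so
\mathcal{G} \;=\; \bigl\{\, \{x\} : x \in K \,\bigr\}
% is a closed cover of K. If K is ``Tompac'', some finite subcover
% \{x_1\}, \dots, \{x_n\} still covers K, hence
K \;\subseteq\; \{x_1\} \cup \dots \cup \{x_n\} \;=\; \{x_1, \dots, x_n\},
% so K is finite.
```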
{"url":"http://mathhelpforum.com/advanced-math-topics/205493-working-closed-covers-finite-sets.html","timestamp":"2014-04-20T10:55:41Z","content_type":null,"content_length":"38387","record_id":"<urn:uuid:eb6c793b-832b-48ac-ae7e-7aeaf84e4c31>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
How random is JavaScript's Math.random?

For 6 years I've had a random number generator page on my website. For a long time, it was the first or second result on Google for "random number generator" and has been used to decide dozens, if not hundreds, of contests and drawings on discussion forums and blogs (I know because I see the referrers in my web logs and usually go take a look). Today, someone emailed me to tell me it may not be as random as I thought. She tried generating very large random numbers (e.g., between 1 and 10000000000000000000) and found that they were almost always the same number of digits. Indeed, I wrapped the function in a loop so I could generate thousands of numbers and sure enough, for very large numbers, the variation was only about 2 orders of magnitude. Here is the looping version, so you can try it out for yourself. It includes both a straightforward implementation taken from the Mozilla Developer Center and some code from 1997 that I swiped off a web page that no longer exists (Paul Houle's "Central Randomizer 1.3"). View source to see how each method works. I've read here and elsewhere about Mersenne Twister. What I'm interested in is why there wouldn't be greater variation in the results from JavaScript's built-in Math.random function. Thanks!

Tags: javascript, random

Comments:
- sarnath'd by everyone – annakata
- I don't get it. "sarnath'd"? – Andrew Hedges
- "sarnath'd" as in, beaten to the punch, or in this case, the answer – maetl
- If you're looking for the answer to the question in the title, see stackoverflow.com/questions/2344312/… – Andrew B.

Answer 1 (accepted)

Given numbers between 1 and 100:
• 9 have 1 digit (1-9)
• 90 have 2 digits (10-99)
• 1 has 3 digits (100)

Given numbers between 1 and 1000:
• 9 have 1 digit
• 90 have 2 digits
• 900 have 3 digits
• 1 has 4 digits

and so on.
So if you select some at random, then the vast majority of selected numbers will have the same number of digits, because the vast majority of possible values have the same number of digits.

Comments:
- Your idea of randomness meaning perfectly and evenly distributed is intriguing... – Roger Pate
- @R.Pate - random number generation isn't much use unless it is evenly distributed on a long scale – annakata
- Read again. @David is only stating what kind of numbers there are between the limits, not the result of selecting N random numbers. I do admit the titling is misleading. – nikc.org
- For the record, I voted up both this and @jwoolard's answers. I chose this one as the accepted answer because the examples make it clear as crystal why the distribution of numbers is skewed to numbers with more digits. – Andrew Hedges
- @andrew-hedges quite right - this is the clearer answer, but thanks :) – jwoolard

Answer 2

Your results are actually expected. If the random numbers are uniformly distributed in a range 1 to 10^n, then you would expect about 9/10 of the numbers to have n digits, and a further 9/100 to have n-1 digits.

Comment:
- Exactly. The distribution of the number of digits is expectedly going to be skewed. The distribution of the log of the numbers should be uniform, however. – Noldorin

Answer 3

There are different types of randomness. Math.random gives you a uniform distribution of numbers. If you want different orders of magnitude, I would suggest using an exponential function to create what is called a power law distribution, which should give you roughly the same number of 1-digit numbers as 2-digit numbers and as 3-digit numbers. There are also other distributions for random numbers, like the normal distribution (also called the Gaussian distribution).

Comment:
- That's helpful, thanks!
– Andrew Hedges

Answer 4

The following paper explains how Math.random() in major web browsers is (un)secure: "Temporary user tracking in major browsers and Cross-domain information leakage and attacks" by Amid Klein (2008). It's no stronger than typical Java or Windows built-in PRNG functions. On the other hand, implementing SFMT with a period of 2^19937-1 requires 2496 bytes of internal state maintained for each PRNG sequence. Some people may consider this an unforgivable cost.

Comment:
- +1: The mentioned paper is great, far beyond what the original question was about. – Roland Illig

Answer 5

Looks perfectly random to me! (Hint: It's browser dependent.) Personally, I think my implementation would be better, although I stole it from xkcd, who should ALWAYS be acknowledged:

    random = 4; // Chosen by a fair dice throw. Guaranteed to be random.

Comments:
- +1 for mentioning it's browser dependent, -1 for borrowing xkcd without linking. – Roger Pate
- Required or not, since it's xkcd, it's getting attributed. :) – Arafangion
- OT: I'm surprised and happy that "xkcd" was the answer to a University Challenge question this week :D – Matt Sach
- -1 for not citing xkcd correctly… – Bergi
- Bergi: A direct link isn't enough? – Arafangion

Answer 6

If you use a number like 10000000000000000000, you're going beyond the accuracy of the datatype JavaScript is using. Note that all the numbers generated end in "00".

Comments:
- That's not his problem in this case, though. – Johannes
- @Johannes - it's one of his problems :) – annakata

Answer 7

I tried the JS pseudorandom number generator on the Chaos Game.
My Sierpiński triangle says it's pretty random:

Comments:
- Would you mind sharing the triangle code here and jsfiddle/jsbin so we can easily check it out in practice for different browsers? – Fabrício Matté
- OK, but give me a few days, because I need to translate the code to English. Now it is Polish-English and I have a lot of work. – zie1ony
- @zie1ony a couple days are up. – trusktr
- Still waiting on that jsfiddle! – André Terra
- Work, work, work. Link: kubaplas.vot.pl/green/fractal - First parameter is the number of vertices. The second one is a point of intersection (from 0 to 1) of the line segment. Just experiment. – zie1ony

Answer 8

Well, if you are generating numbers up to, say, 1e6, you will hopefully get all numbers with approximately equal probability. That also means that you only have a one-in-ten chance of getting a number with one digit less, a one-in-a-hundred chance of getting two digits less, etc. I doubt you will see much difference when using another RNG, because you have a uniform distribution across the numbers, not their logarithm.
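Both effects discussed in the answers - the digit-count skew of a uniform generator, and the "exponential of a uniform" trick for equalising digit counts - can be checked empirically. This sketch is my own illustration, not code from any answer:

```python
import random

random.seed(1)

def digit_counts(sample):
    """Tally how many drawn numbers have each digit length."""
    counts = {}
    for n in sample:
        d = len(str(n))
        counts[d] = counts.get(d, 0) + 1
    return counts

N = 100_000

# Uniform draws from 1..10^6: roughly 90% land on 6-digit numbers.
uniform = [random.randint(1, 10**6) for _ in range(N)]
u = digit_counts(uniform)

# Power-law draws: a uniform exponent makes each digit count
# roughly equally likely.
power = [int(10 ** (random.random() * 6)) for _ in range(N)]
p = digit_counts(power)
```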
{"url":"http://stackoverflow.com/questions/1062902/how-random-is-javascripts-math-random/4415496","timestamp":"2014-04-20T02:16:25Z","content_type":null,"content_length":"117668","record_id":"<urn:uuid:f3b568f3-4a2c-4d18-a7f5-085f901497c7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
ALIAS, a System Solving Library Based on Interval Analysis

by Jean-Pierre Merlet

ALIAS is a C++/Maple library based on interval arithmetic, designed for system solving and optimization problems. It has been developed in the framework of the COPRIN project since 1999. Next to general-purpose schemes it addresses systems with a particular structure, and it makes intensive use of symbolic computation.

Interval arithmetic is a well-known method that enables one to compute bounds for almost any analytically defined expression F(X), given ranges for the unknowns. Such computations are required to determine, for example, the range of action of a robot, or possible molecular structures satisfying certain constraints. The interval obtained by applying interval arithmetic to a mathematical expression is called the interval evaluation of the expression, and can be implemented so that numerical round-off errors are taken into account, i.e. the interval evaluation will always include the exact value of the mathematical expression for any instance of the unknowns within their ranges. A direct application of this property to system solving is that if the interval evaluation of the left-hand side of at least one equation of the system F(X) = 0 does not include 0, then there is no solution of the system within the given ranges for the unknowns. A straightforward solving algorithm for determining the real roots of a system within some search space can easily be implemented using a branch-and-bound method. A box is defined as a set of ranges, one for each unknown, and a list L of boxes B[0], B[1], B[2], ..., B[n] is maintained during the solving procedure, initialized with a box B[0] which describes the initial search space. An index i, initialized to 0, indicates the number of the currently processed box in L.
The solving algorithm proceeds along the following steps:

1. Compute the interval evaluation of all equations of the system for the unknowns with the ranges indicated in box Bi.
2. If the interval evaluation of one equation does not include 0, then increment i and go to step 1.
3. If all interval evaluations include 0:
   a) if the widths of the ranges for all unknowns are lower than a given threshold, then store Bi as a solution; increment i and go to step 1;
   b) otherwise choose one unknown and bisect its range. Two new boxes are created and put at the end of L. Increment i and go to step 1.

This algorithm stops when all boxes in L have been processed, and returns as solutions a set of boxes. It may already be seen that this basic algorithm can be extended without effort to deal with inequality constraints and global optimization. Furthermore, a distributed implementation is possible, as the processing of one box is independent of the processing of all other boxes in L. This basic algorithm may be drastically improved by using:

• filtering operators: these operators use the structure of the equations to reduce the width of the ranges of a given box
• exclusion operators: these operators determine that there is no solution to the system within a given box. Interval evaluation is one example of an exclusion operator
• existence and uniqueness operators: these operators may determine that there is a unique solution within some sub-box of a given box and, if this is the case, provide a numerical scheme that allows one to safely compute this solution. One example of an existence operator is the Kantorovitch theorem, which uses the Jacobian and Hessian of the system, and allows one to determine a box such that the Newton scheme will always converge toward the unique solution that lies within the box.
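The branch-and-bound scheme above can be sketched for a one-dimensional toy problem, f(x) = x^2 - 2 on a search interval. This is my own illustration of the scheme, not ALIAS code, and the interval evaluation is hand-written for this single function:

```python
def ieval(lo, hi):
    """Interval evaluation of f(x) = x^2 - 2: bounds guaranteed to
    contain f(x) for every x in [lo, hi]."""
    cands = [lo * lo, hi * hi]
    sq_lo = 0.0 if lo <= 0 <= hi else min(cands)
    return sq_lo - 2, max(cands) - 2

def solve(lo, hi, tol=1e-6):
    """Bisection-based branch-and-bound following the steps above:
    discard a box if 0 is outside its interval evaluation, store it
    if it is narrower than tol, otherwise bisect it."""
    boxes, sols = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = ieval(a, b)
        if flo > 0 or fhi < 0:      # exclusion test: 0 not in [flo, fhi]
            continue
        if b - a < tol:
            sols.append((a, b))
        else:
            m = (a + b) / 2
            boxes += [(a, m), (m, b)]
    return sols
```

Run on [-3, 3], the returned boxes isolate the two real roots ±√2.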
Possible operators have been proposed from different disciplines, numerical analysis and constraint programming, but no available software offers the possibility of hybrid solving based on a combination of operators. However, various experimental trials have shown that an appropriate combination of numerical-analysis and constraint-programming operators leads to the most efficient solver. Hence, our objectives are twofold:

• implement and combine already-proposed operators
• develop new operators and improve existing ones.

New operators and their mathematical analysis have already been provided by COPRIN (for example the 3B operator), but we are also working on the mathematical background of existing operators. For example, we have proven that for distance equations (see the molecular biology example below) the result of the general-purpose existence operator based on the Kantorovitch theorem may be drastically improved. ALIAS is a relatively large C++ library (200,000 lines of code) that implements various general-purpose solving schemes for systems solving and optimization problems. It includes a large library of filtering, exclusion and existence operators, some of them quite original in this field. ALIAS also includes a set of specific-purpose solving schemes devoted to classes of systems with a particular structure (such as distance equations). A theoretical analysis based on the structure of the equations has been used to optimize the efficiency of the general-purpose operators and to develop operators that are specific to the system, thereby drastically improving the efficiency of the solver.

Another important feature of ALIAS is its intensive use of symbolic computation. First of all, the ALIAS C++ library may be used directly from Maple: within a Maple session it is possible to call a specific solving Maple procedure that will automatically create the necessary C++ code, compile and execute it, and return the result to the Maple session.
Symbolic computation is also used to improve the efficiency of the C++ code and to create the C++ code of operators that are completely specific to the system at hand. ALIAS is the software platform of the COPRIN project and has been in development since 1999. The library can be freely downloaded (see the information at http://www-sop.inria.fr/coprin/logiciels/ALIAS/ALIAS.html). ALIAS has been used to solve various problems in very different domains. Here we give four examples. Robotics 1 We consider a robot with 3 translation degrees of freedom x, y, z. Due to the mechanical structure of the robot, only a limited workspace may be reached by the hand of the robot. Find the largest cube enclosed in this workspace such that all real roots r of a given polynomial, whose coefficients depend on x, y, z, satisfy 1/3 ≤ r ≤ 1. Robotics 2 We consider a robot with 3 successive revolute joints. The geometry of the robot is defined by a set of 30 parameters that indicate the direction of the joint axes, the lengths and respective orientations of the links connecting the joints, and the location of the base of the robot. Find the possible values of these parameters such that the hand of the robot may reach 5 pre-defined poses. This is a very demanding problem that had never been solved previously. After 5 days of computation on a cluster of 25 PCs we were able to find 98 possible solutions within a relatively large search space. Molecular Biology Given a molecule with approximately 100 atoms and constraint equations indicating that the distances between some pairs of atoms should be equal to a constant, find all possible 3D shapes of the molecule, i.e. find all possible locations of the atoms. This involves solving a system of about 400 distance equations. The results were obtained in 15 minutes on a laptop. Design of an algorithm for processor sharing policy in an integrated service network.
We have here a problem in two variables a, b, and it must be shown that there exists at least one solution with b > 1 to a system of one equation F(a,b) = 0 and one inequality G(a) ≤ 0.2, where F and G contain algebraic and exponential functions. ALIAS software library: http://www-sop.inria.fr/coprin/logiciels/ALIAS/ALIAS.html Please contact: Jean-Pierre Merlet, INRIA Tel: +33 4 92 38 77 61 E-mail: jean-pierre.merlet@inria.fr
[FOM] Proof "from the book" of the incompleteness theorem H. Enderton hbe at math.ucla.edu Mon Aug 30 13:02:07 EDT 2004 Martin Davis wrote: >> How about: >> The set of arithmetic theorems of any formal system is recursively >> enumerable, while the set of arithmetic truths is not. So any sound >> formal system must fail to prove some arithmetic truth. >And then Arnon Avron objected: >The trouble with this proof is that it misses ... the actual >construction of a *true* sentence which the system fails to prove, >and a *proof* that it is true. I think the proof Martin Davis describes works better than you are giving it credit for. We show that the set of arithmetic truths is not r.e. by showing that recursive sets are arithmetical, whence K and its complement (for example) are many-one reducible to the theory of true arithmetic. Then for any recursive set of true axioms ("true" to avoid Torkel's point), the fact that K is creative does indeed yield the construction of a specific true but unprovable sentence. Moreover, examination of that sentence shows that it says "I am unprovable"! So maybe there are not so many different proofs after all. (Supporting details are on pages 257-258 of the second edition of my logic book.) --Herb Enderton
Bear Geometry Tutor Find a Bear Geometry Tutor ...S. to my students. I am a Delaware and New Jersey certified teacher with over 5 years of teaching experience and a strong desire to see students succeed. My instructional strengths in Reading are in helping students improve and develop in reading comprehension, fluency, phonics and decoding. 13 Subjects: including geometry, reading, writing, elementary (k-6th) ...I also know classical Russian authors, including Tolstoy, Dostoevesky, Chekov and others. This course does not have a set curriculum. It often is called "World Cultures," a survey course that examines the great civilizations of the past (Mesopotamia, China, India, Egypt, Meso-America, Greece, R... 32 Subjects: including geometry, English, chemistry, biology ...My philosophy of education is that all students can learn given the right guidance. My teaching style vary depending upon the student that I am teaching. Each lesson is geared toward the learning styles of my student and aimed at increasing his/her ability to learn on their own in a variety of ... 30 Subjects: including geometry, chemistry, statistics, trigonometry ...I look forward to working with you and expanding your mathematical abilities!The Integrated Algebra Regents goes over the topics of the algebra 1, statistics, probability, and the very basic geometry ideas of area and perimeter. The Geometry Regents goes over the topics of geometry (construction... 22 Subjects: including geometry, physics, statistics, GRE ...I have extensive experience working with amateur runners training for 5k's up through marathons. I am qualified to teach fitness because of my bachelor's in exercise science and my master's degree in kinesiology and applied physiology. I have experience coaching and was the captain of my track and cross country teams and was a member of my high school basketball team. 39 Subjects: including geometry, reading, calculus, chemistry
Coefficients of ergodicity and the scrambling index Akelbek, Mahmud and Kirkland, Steve (2009) Coefficients of ergodicity and the scrambling index. Linear Algebra and its Applications, 430 (4). pp. 1111-1130. ISSN 0024-3795 For a primitive stochastic matrix S, upper bounds on the second largest modulus of an eigenvalue of S are very important, because they determine the asymptotic rate of convergence of the sequence of powers of the corresponding matrix. In this paper, we introduce the definition of the scrambling index for a primitive digraph. The scrambling index of a primitive digraph D is the smallest positive integer k such that for every pair of vertices u and v, there is a vertex w such that we can get to w from u and from v in D by directed walks of length k; it is denoted by k(D). We investigate the scrambling index for primitive digraphs, and give an upper bound on the scrambling index of a primitive digraph in terms of the order and the girth of the digraph. By doing so we provide an attainable upper bound on the second largest modulus of the eigenvalues of a primitive matrix that makes use of the scrambling index.
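As an illustration of the definition (not part of the paper), the scrambling index of a small primitive digraph can be computed directly by propagating length-k reachability sets and checking every pair of vertices for a common reachable vertex w:

```cpp
#include <array>
#include <cassert>

// Reachability sets as bitmasks over N = 3 vertices: reach[v] holds the set
// of vertices reachable from v by a directed walk of exactly k steps.
constexpr int N = 3;
using Reach = std::array<unsigned, N>;

// Smallest k such that every pair of vertices reaches a common vertex w by
// walks of length k — the scrambling index k(D) defined in the abstract.
int scramblingIndex(const Reach& adj, int maxK = 64) {
    Reach reach = adj;                                   // walks of length k = 1
    for (int k = 1; k <= maxK; ++k) {
        bool scrambled = true;
        for (int u = 0; u < N && scrambled; ++u)
            for (int v = u + 1; v < N; ++v)
                if ((reach[u] & reach[v]) == 0) { scrambled = false; break; }
        if (scrambled) return k;
        Reach next{};                                    // extend each walk by one arc
        for (int u = 0; u < N; ++u)
            for (int w = 0; w < N; ++w)
                if (reach[u] & (1u << w)) next[u] |= adj[w];
        reach = next;
    }
    return -1;                                           // not scrambled within maxK steps
}
```

For example, the primitive digraph on {0,1,2} with arcs 0→1, 1→2, 2→0 and 2→1 has scrambling index 3: only at walk length 3 do all three pairs of vertices share a common reachable vertex.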
Bound and linear equality/inequality constrained optimization This article discusses the minbleic subpackage - an optimizer which supports boundary and linear equality/inequality constraints. This subpackage replaces the obsolete minasa subpackage. The BLEIC algorithm (boundary, linear equality-inequality constraints) can solve the following optimization problems: About algorithm BLEIC as active set algorithm Active set algorithm is a name for a family of methods used to solve optimization problems with equality/inequality constraints. The method's name comes from the classification of constraints, which divides them into those active at the current point and inactive ones. The method reduces an equality/inequality constrained problem to a sequence of equality-only constrained subproblems. Active inequality constraints are treated as equality ones, inactive ones are temporarily ignored (although we continue to track them). Informally speaking, the current point travels through the feasible set, "sticking" to or "unsticking" from boundaries. The image below contains an example for some problem with three boundary constraints: 1. we start from the initial point, where all constraints are inactive 2. we solve the first (unconstrained) subproblem and arrive at (0,1), where we activate constraint (1) 3. the second subproblem leads us to (0,0), where we "stick" to the boundary 4. we activate constraint (3) and deactivate constraint (1) 5. finally, we arrive at (1,0), where the algorithm stops The most important feature of the active set family is that the active set method is easy to implement for problems with linear constraints. Equality-constrained subproblems can be easily solved by projection onto the subspace spanned by the active constraints. In the linear case the active set method outperforms its main competitors (penalty, barrier or modified Lagrangian methods). In particular, constraints (both boundary and linear) are satisfied with much higher accuracy.
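The "sticking to the boundary" behavior can be illustrated with a much simpler method than BLEIC itself: a projected-gradient toy (an assumption for illustration only, not ALGLIB code) in which each step is clipped back onto the feasible box, so the iterate lands exactly on the active bounds.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Toy projected-gradient minimization of f(x,y) = (x-2)^2 + (y+1)^2
// subject to 0 <= x <= 1 and 0 <= y <= 1. The unconstrained minimum (2,-1)
// is infeasible, so the iterate ends up "stuck" on the boundary at (1,0),
// with both box constraints active.
void solveBox(double& x, double& y) {
    const double step = 0.1;
    for (int it = 0; it < 200; ++it) {
        x = std::clamp(x - step * 2.0 * (x - 2.0), 0.0, 1.0);  // clip onto [0,1]
        y = std::clamp(y - step * 2.0 * (y + 1.0), 0.0, 1.0);
    }
}
```

Starting from the interior point (0.5, 0.5), the iterate converges to (1, 0) exactly on the boundary, mirroring the (0,1) → (0,0) → (1,0) walkthrough above.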
Additional information In this section we'll consider the key features of the BLEIC algorithm (implemented by the minbleic subpackage). Below we'll use the following notation: N will denote the number of variables, K will denote the number of general form linear constraints (both equality and inequality ones), Ki will denote the number of general form inequality constraints. The following properties should be noted: • we use the nonlinear conjugate gradient method as the underlying optimization algorithm • equality constraints are handled by projection of the target function onto the equality-constrained subspace • inequality constraints are handled by activation/deactivation (i.e. by treating them as equality ones, if necessary, or dropping them from consideration otherwise) • the current version of the BLEIC algorithm does not support modification of the preconditioner on the fly. You can change it during algorithm iterations, but it won't take effect until restart of the algorithm • the algorithm modifies the problem by adding slack variables to linear (non-boundary) inequality constraints. As a result, the only inequality constraints we have are boundary ones. Each linear inequality constraint is transformed into an equality one plus one additional boundary constraint (non-negativity) for the slack variable. • boundary constraints are handled separately from the general linear ones as an important special case. Boundary constraints have lower computational overhead and are always satisfied exactly - both at the final point and at all intermediate points. For example, in the case of a non-negativity constraint we won't even evaluate the function at points with negative x. • general form linear constraints (both equality and inequality ones) require more time than boundary ones. Both equality and inequality constraints can be satisfied with a small error (about N·ε in magnitude). For example, with x+y≤1 as a constraint we can stop at a point which is slightly beyond x+y=1 (about N·ε away from the feasible set).
• constraints add computational overhead. Additional computations are required at two moments: when we activate/deactivate constraints and when we evaluate the target function. When we activate/deactivate even one constraint we have to reorthogonalize the constraint matrix, which requires O((N+Ki)·K^2) operations. Every evaluation of the target function requires O(N) additional operations for boundary constraints and O((N+Ki)·K) operations for linear ones. • the computational overhead for handling a constraint is almost independent of whether it is active or not. The only situation when the computational overhead is insignificant is when we have a boundary constraint like -∞<x or x<+∞. Starting to use algorithm Choosing between analytic and numerical gradient Before starting to use the optimizer you have to choose between numerical differentiation and calculation of the analytical gradient. For example, if you want to minimize f(x,y)=x^2+exp(x+y)+y^2, then the optimizer will need both the function value at some intermediate point and the function derivatives df/dx and df/dy. How can we calculate these derivatives? ALGLIB users have several options: 1. the gradient is calculated by the user, usually through symbolic differentiation (the so-called analytical or exact gradient). This option is the best one for two reasons. First, the precision of such a gradient is close to the machine ε (that's why it is called the exact gradient). Second, the computational complexity of the N-component gradient is often only several times (not N times!) higher than calculation of the function itself (knowledge of the function's structure allows us to calculate the gradient in parallel with the function calculation). 2. the gradient is calculated by ALGLIB through numerical differentiation, using a 4-point difference formula. In this case the user calculates the function value only, leaving all differentiation-related questions to the ALGLIB package.
This option is more convenient than the previous one because the user doesn't have to write code which calculates the derivative. It allows fast and easy prototyping. However, we can note two significant drawbacks. First, the numerical gradient is inherently inexact (even with a 4-point differentiation formula), which can slow down algorithm convergence. Second, numerical differentiation needs 4·N function evaluations in order to get just one gradient value. Thus numerical differentiation will be efficient only for small-dimensional (tens of variables) problems. On medium or large-scale problems the algorithm will work, but very, very slowly. 3. the gradient is calculated by the user through automatic differentiation. As a result, the optimizer will get a cheap and exact analytical gradient, and the user will be freed from the necessity to manually differentiate the function. The ALGLIB package does not support automatic differentiation (AD), but we can recommend several AD packages which can be used to calculate the gradient and pass it to ALGLIB. Depending on the specific option chosen by you, you will use different functions to create the optimizer and start optimization. If you want to optimize with a user-supplied gradient (either manually calculated or obtained through automatic differentiation), then you should: • create the optimizer with the minbleiccreate function • pass a callback calculating the function value and gradient (simultaneously) to minbleicoptimize. If you erroneously pass a callback calculating the function value only, then the optimizer will generate an exception on the first attempt to use the gradient. If you want to use ALGLIB-supplied numerical differentiation, then you should: • create the optimizer with the minbleiccreatef function. This function accepts one additional parameter - the differentiation step. Numerical differentiation is done with a fixed step. However, the step size can be different for different variables (depending on their scale set by the minbleicsetscale call).
• pass a callback calculating the function value (but not the gradient) to minbleicoptimize. If you erroneously pass a callback calculating the gradient, then the optimizer will generate an exception. Scale of the variables Before you start to use the optimizer, we recommend you to set the scale of the variables with the minbleicsetscale function. Scaling is essential for correct work of the stopping criteria (and sometimes for convergence of the optimizer). You can do without scaling if your problem is well scaled. However, if some variables are up to 100 times different in magnitude, we recommend you to tell the solver about their scale. And we strongly recommend to set scaling in case of larger differences in magnitude. We recommend you to read the separate article on variable scaling, which is worth reading unless you are solving some simple toy problem. Preconditioning Preconditioning is a transformation which transforms an optimization problem into a form more suitable for solution. Usually this transformation takes the form of a linear change of the variables - multiplication by the preconditioner matrix. The simplest form of preconditioning is a scaling of the variables (diagonal preconditioner) with carefully chosen coefficients. We recommend you to read the article about preconditioning; below you can find the most important information from it. You will need a preconditioner if: • your variables have wildly different magnitudes (a thousand times and higher) • your function changes rapidly in some directions and slowly in other ones • analysis of the Hessian matrix suggests that your problem is ill-conditioned • you want to accelerate optimization Sometimes a preconditioner just accelerates convergence, but in some difficult cases it is impossible to solve the problem without good preconditioning. The ALGLIB package supports several preconditioners: • the default one, which does nothing (just the identity transform). It can be activated by calling minbleicsetprecdefault. • a diagonal Hessian-based preconditioner.
In order to use this preconditioner you have to calculate the diagonal of an approximate Hessian (not necessarily the exact Hessian) and call the minbleicsetprecdiag function. The diagonal matrix must be positive definite - the algorithm will throw an exception on a matrix with zero or negative elements on the diagonal. This preconditioner can be used for convex functions, or in situations when the function is possibly non-convex, but you can guarantee that the approximate Hessian will be positive definite. • a diagonal scale-based preconditioner. This preconditioner can be turned on by the minbleicsetprecscale function. It can be used when your variables have wildly different magnitudes, which makes it hard for the optimizer to converge. In order to use this preconditioner you should set the scale of the variables (see the previous section). Stopping conditions Four types of inner stopping conditions can be used: • gradient-based - the scaled gradient norm is small enough (the scaled gradient is a gradient which is componentwise multiplied by the vector of variable scales) • stepsize-based - the scaled step norm is small enough (the scaled step is a step which is componentwise divided by the vector of variable scales) • function-based - the function change is small enough • iteration-based - after a specified number of iterations You can set one or several conditions in different combinations with the minbleicsetcond function. We recommend you to use the first criterion - a small value of the gradient norm. This criterion guarantees that the algorithm will stop only near the minimum, regardless of how fast or slowly we converge to it. The second and third criteria are less reliable, because sometimes the algorithm makes small steps even when far away from the minimum. Note #1 You should not expect that the algorithm will be terminated by and only by the stopping criterion you've specified.
For example, the algorithm may take a step which leads it exactly to the function minimum - and it will be terminated by the first criterion (gradient norm is zero), even when you told it to "make 100 iterations no matter what". Note #2 Some stopping criteria use variable scales, which should be set by a separate function call (see the previous section). The BLEIC algorithm supports the following kinds of constraints: • boundary constraints, i.e. constraints of the form l[i]≤x[i]≤u[i] • linear inequality constraints, i.e. constraints of the form a[0]·x[0]+...+a[N-1]·x[N-1]≥b or a[0]·x[0]+...+a[N-1]·x[N-1]≤b • linear equality constraints, i.e. constraints of the form a[0]·x[0]+...+a[N-1]·x[N-1]=b Boundary constraints can be set with the minbleicsetbc function. These constraints are handled very efficiently - the computational overhead for having N constraints is just O(N) additional operations per function evaluation. Finally, these constraints are always exactly satisfied. We won't calculate the function at points outside of the interval given by [l[i],u[i]]. The optimization result will be inside [l[i],u[i]] or exactly at its boundary. General linear constraints can be either equality or inequality ones. These constraints can be set with the minbleicsetlc function. Linear constraints are handled less efficiently than boundary ones: they need O((N+Ki)·K) additional operations per function evaluation, where N is the number of variables, K is the number of linear equality/inequality constraints, and Ki is the number of inequality constraints. We also need O((N+Ki)·K^2) operations in order to reorthogonalize the constraint matrix every time we activate/deactivate even one constraint. Finally, unlike boundary constraints, linear ones are not satisfied exactly - a small error is possible, about N·ε in magnitude. For example, when we have x+y≤1 as a constraint we can stop at a point which is slightly beyond the boundary specified by x+y=1.
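For a single active linear constraint, the projection mentioned earlier amounts to removing from the gradient its component along the constraint normal. A minimal two-variable sketch (not ALGLIB code; the function name is made up for this illustration):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Remove from the gradient g its component along the normal a of one active
// constraint a[0]*x + a[1]*y = b, leaving the part tangent to the constraint.
// (BLEIC reorthogonalizes a whole set of active constraints; this is the
// one-constraint special case.)
std::array<double, 2> projectGradient(std::array<double, 2> g,
                                      std::array<double, 2> a) {
    double scale = (a[0]*g[0] + a[1]*g[1]) / (a[0]*a[0] + a[1]*a[1]);
    return { g[0] - scale*a[0], g[1] - scale*a[1] };
}
```

With x+y=1 active (normal a = (1,1)) and gradient g = (3,1), the projected gradient is (1,-1): moving along it keeps x+y constant, so the iterate stays on the constraint.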
Both types of constraints (boundary and linear ones) can be set independently of each other. You can set boundary constraints only, linear constraints only, or an arbitrary combination of boundary/linear constraints. In order to help you use the BLEIC algorithm we've prepared several examples: • minbleic_d_1 - this example demonstrates optimization with boundary constraints • minbleic_d_2 - this example demonstrates optimization with general linear constraints • minbleic_ftrim - this example shows how to minimize a function with singularities at the domain boundaries. This example is discussed in more detail in another article. • minbleic_numdiff - this example shows how to minimize a function using numerical differentiation. We also recommend you to read the 'Optimization tips and tricks' article, which discusses typical problems arising during optimization.
A gallery is about to open an exhibit in a room that is 22 ft long, 24 ft wide and 16 ft high. The edge of the door to the exhibit is on the 22 ft side, 4 ft from one wall. For special effects lighting as a guest enters the door, let u be the vector representing the distance from the edge of the door (at the floor) to the farthest bottom corner of the room, and let v be the vector from the same point to the farthest upper corner. Find |v| and the angle between u and v using cos theta = u.v/(|u||v|). Verify the result using right triangle trigonometry. `vec u` represents the vector from the edge of the door (at the floor) to the farthest bottom corner of the room, and `vec v` is the vector from the same point to the farthest upper corner. The magnitude of `vec v` is `sqrt(18^2+24^2+16^2)` = 34. The magnitude of `vec u` is `sqrt(18^2+24^2)` = 30. If the edge of the door is taken as (0,0,0), then `vec u` = [24,18,0] and `vec v` = [24,18,16]. `vec u @ vec v` = 24*24 + 18*18 + 16*0 = 900, so `cos theta = 900/(30*34)` => `theta = cos^-1(900/(30*34))` = 28.072 degrees. The angle between the vectors `vec u` and `vec v` using the right triangle formed is: `tan^-1(16/30)` `~~` 28.07 degrees.
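The arithmetic can be double-checked numerically; a short illustrative C++ snippet using the vectors from the solution (helper names are made up):

```cpp
#include <cassert>
#include <cmath>

// Euclidean norm of a 3D vector
double norm3(double x, double y, double z) { return std::sqrt(x*x + y*y + z*z); }

// Angle in degrees between u = (24, 18, 0) and v = (24, 18, 16),
// from cos(theta) = u.v / (|u| |v|)
double doorAngleDeg() {
    const double dot = 24.0*24.0 + 18.0*18.0 + 0.0*16.0;   // u . v = 900
    const double pi  = std::acos(-1.0);
    return std::acos(dot / (norm3(24, 18, 0) * norm3(24, 18, 16))) * 180.0 / pi;
}
```

Because v = u + (0,0,16) and the vertical leg is perpendicular to u, the same angle falls out of right-triangle trigonometry as atan(16/30), which agrees with the dot-product value of about 28.07 degrees.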
Algorithm Improvement through Performance Measurement: Part 6 The Counting Sort algorithm uses an array of counts, which is reasonable in size for 8-bit and 16-bit numbers (256 counts and 64K counts respectively). Each count is a 32-bit value, allowing the algorithm to handle an array of up to 4 billion elements. Thus, for 8-bit numbers the count array uses 1K bytes (256 entries at 32-bits each), and for 16-bit numbers the count array uses 256K bytes. For the 8-bit algorithm, the counts array fits inside the L1 cache of modern processors. For the 16-bit algorithm, the counts array fits inside the L2 or L3 cache. Arrays of 4 billion elements are beyond the limits of 32-bit operating systems and 32-bit processors, and thus unsigned 32-bit values for counts are safe to use. However, when sorting arrays of 32-bit numbers, the required count array grows to 4 billion counts, due to the 4 billion possible values of a 32-bit number. At 32-bits per count, 16 GigaBytes of memory would be required for the counts array. This size is not possible for 32-bit operating systems, but is on the verge of practical for 64-bit operating systems and processors. When a 64-bit operating system is used, the array sizes can be larger than 4 billion elements, which requires 64-bit counts, doubling the memory size requirement for the counts array. Thus today, 8-bit and 16-bit Counting Sort is a practical algorithm and performs very well, outperforming other sorting algorithms by a wide margin. When sorting 32-bit and larger integers, as well as single and higher precision floating-point numerical arrays, N-bit Radix Sort is a good choice, as was shown in Part 3 and Part 4. N-bit Radix Sort, which uses the Counting Sort internally, sorts one digit at a time in O(dn) time, where d is the number of digits. For example, sorting 32-bit numbers would take four passes at 8-bits at a time.
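For reference, the plain scalar two-pass idea for 8-bit values can be written as follows (a simplified sketch for illustration, not the article's Listing 4):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Plain two-pass counting sort for 8-bit unsigned values:
// pass 1 builds a histogram, pass 2 rewrites the array from the counts.
void countingSort8(std::vector<unsigned char>& a) {
    std::size_t count[256] = {};                        // one count per possible value
    for (unsigned char x : a) count[x]++;               // pass 1: count occurrences
    std::size_t n = 0;
    for (unsigned v = 0; v < 256; ++v)                  // pass 2: recreate the values
        for (std::size_t c = 0; c < count[v]; ++c)
            a[n++] = static_cast<unsigned char>(v);
}
```

Note that no elements are moved or compared: the input is counted, discarded, and recreated in order, which is the source of the algorithm's O(n) performance.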
Lastly, Counting Sort and N-bit Radix Sort can be combined to form a hybrid sorting algorithm with superior performance to sorting using a single sorting algorithm. This method was shown to be effective in Part 3, where the Counting Sort was used for sorting arrays of 8-bit and 16-bit elements, and in-place N-bit Radix Sort was used to sort arrays of 32-bit and 64-bit unsigned and signed integers. Intel architecture processors support SIMD/SSE instructions (single instruction multiple data) to perform certain kinds of operations in parallel, such as adding eight 16-bit numbers in a single clock cycle. These instructions operate on up to 128-bits of data at a time and achieve speedup from their ability to process several data items simultaneously. Intel has developed the Intel Performance Primitives (IPP) library of common routines that utilize these SSE instructions for acceleration, and has spent numerous man-years optimizing their performance. This library is simple to use and adapts to the processor type along with the subset of the instructions supported. Using the library is simpler and quicker than developing the SSE code yourself, especially when taking into account implementing support for generations of processors with varied support for the SSE instruction sub-set. Two functions from the IPP library are useful for the Counting Sort algorithm: zero and set. The zero function initializes every value within an array to zero. The set function sets every value within an array to a certain value. Each function supports a variety of data types such as integers, floating-point, and complex. Listing 4 shows 8-bit and 16-bit (unsigned and signed) implementations of the Counting Sort algorithm using the IPP library functions.

// Copyright(c), Victor J. Duvanenko, 2010
inline void CountSortInPlaceIPP( unsigned char* a, unsigned long a_size )
{
    if ( a_size < 2 )  return;
    const unsigned long numberOfCounts = 256;
    __declspec( align( 32 )) unsigned long count[ numberOfCounts ];  // one count for each possible value of an 8-bit element (0-255)
    ippsZero_32s( reinterpret_cast< Ipp32s * >( count ), numberOfCounts );
    // Scan the array and count the number of times each value appears
    for( unsigned long i = 0; i < a_size; i++ )
        count[ a[ i ] ]++;
    // Fill the array with the number of 0's that were counted, followed by the number of 1's, and then 2's and so on
    unsigned long n = 0;
    for( unsigned long i = 0; i < numberOfCounts; i++ )
    {
        ippsSet_8u( (unsigned char)i, reinterpret_cast< Ipp8u * >( &a[ n ] ), count[ i ] );
        n += count[ i ];
    }
}
inline void CountSortInPlaceIPP( unsigned short* a, unsigned long a_size )
{
    if ( a_size < 2 )  return;
    const unsigned long numberOfCounts = 65536;
    __declspec( align( 32 )) unsigned long count[ numberOfCounts ];  // one count for each possible value of a 16-bit element (0-65535)
    ippsZero_32s( reinterpret_cast< Ipp32s * >( count ), numberOfCounts );
    // Scan the array and count the number of times each value appears
    for( unsigned long i = 0; i < a_size; i++ )
        count[ a[ i ] ]++;
    // Fill the array with the number of 0's that were counted, followed by the number of 1's, and then 2's and so on
    unsigned long n = 0;
    for( unsigned long i = 0; i < numberOfCounts; i++ )
    {
        ippsSet_16s( (short)i, reinterpret_cast< Ipp16s * >( &a[ n ] ), count[ i ] );
        n += count[ i ];
    }
}

The 32-bit zero function ippsZero_32s() is used to initialize the counts arrays, as a replacement for the pre-initialized arrays. The set function ippsSet_8u() replaced the last inner for loop in the 8-bit implementation. Sadly, the Intel SSE instruction set has no support for parallel index (lookup table) operations, which would have been useful for acceleration of the counting portion of the algorithm. Tables 5 and 6 show performance measurements of the unsigned 8-bit and 16-bit Counting Sort algorithm augmented with the Intel IPP library functions. Measurement results show that using the IPP library does not accelerate Counting Sort. For small array sizes (100 elements or fewer for 8-bit, and 10K or fewer for 16-bit) the IPP-based implementations are slower than the C++ scalar (non-IPP) implementations. This is most likely due to the overhead of calling IPP library functions. Measurements demonstrate that when using the IPP library the use of __declspec() is critical, since it ensures that the local stack-based count array is cache-line aligned, improving the performance of SSE instructions. The hybrid algorithm approach uses multiple algorithms to create a better performing combination than a single algorithm could provide. For example, STL sort() uses QuickSort, Heap Sort and Insertion Sort to produce a generic high performance sorting algorithm. STL stable_sort() uses a buffered Merge Sort and Insertion Sort. Counting Sort processes the array in two passes, and does not break the array down into smaller pieces as other algorithms do. For this reason, it is difficult for Counting Sort to benefit from a hybrid approach, except for smaller array sizes. For arrays of 8-bit numbers, Insertion Sort could be used to accelerate smaller array sizes, as was done in Part 3, since Insertion Sort is about 4X faster for arrays of 10 elements.
For arrays of 16-bit numbers, Insertion Sort could also be used for the smallest array sizes, followed by Intel's IPP Radix Sort for array sizes up to 0.5 million elements, and 16-bit Counting Sort for the largest array sizes. Counting Sort is a very efficient, high-performance, linear-time O(n), in-place sorting algorithm. Implementations for sorting arrays of unsigned 8-bit and 16-bit numbers were developed. These implementations were extended to support signed numbers, since signed numbers require different treatment from unsigned, and the signed versions were crafted so as not to sacrifice performance. For arrays of 8-bit unsigned and signed numbers, Counting Sort outperformed STL sort() by over 20X for array sizes of 100K and larger, and outperformed Intel's IPP sort by 20-30% for array sizes of 10K and larger. Counting Sort also outperformed N-bit-Radix Stable Sort by 1.6X to 5.9X for array sizes of 1K and larger. For arrays of 16-bit unsigned and signed numbers, Counting Sort outperforms STL sort() by up to 30X, IPP Radix Sort by up to 4X, and N-bit-Radix Stable Sort by up to 6X. The Counting Sort algorithm was shown to be practical for 8-bit and 16-bit numbers, but not yet practical for 32-bit and larger numbers on 32-bit operating systems. However, on 64-bit processors and operating systems, sorting 32-bit numbers should become practical within the next few years. For now, N-bit Radix Sort (Part 3) is a good alternate high-performance sorting algorithm, with O(dn) complexity, where d is the number of digits within each array element. Counting Sort illustrates that for purely numeric arrays the concept of stability does not apply. In the implementations above the original numbers are not kept to produce the resulting sorted array -- they are counted, discarded, and then recreated. These implementations gain their performance from not moving any of the array elements. However, the Counting Sort algorithm can be implemented using numeric keys with associated data items.
In this case, the concept of stability applies and the algorithm can be made stable. Performance-measurement-driven optimization shaped the implementations, as illustrated by the performance differences between array initialization at declaration and an explicit for loop. Unfortunately, using Intel IPP functions (which utilize SSE parallel instructions) to optimize Counting Sort did not yield a faster implementation. However, these implementations may still be useful, since they use different computational units for portions of the algorithm. Lastly, a hybrid-algorithm approach should produce a superior sorting algorithm, and several suggestions along these lines were given above. The astonishing performance gains provided by the Counting Sort algorithm warrant consideration of data-type-dependent sorting, where different algorithms are used depending on the data type being sorted; e.g., Counting Sort for 8-bit and 16-bit numeric data types, Radix Sort for larger numeric data types, and STL sort for other types.
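The signed-number handling described above (offsetting each value so it can index the count array directly) can be restated in a few lines. The following Python function is an illustrative sketch of the technique for 8-bit values, not the article's C++ implementation; the function name is ours:

```python
def counting_sort_signed8(a):
    """Counting Sort for signed 8-bit integers (-128..127), in place.

    Illustrative sketch of the biased-index technique for signed values:
    shift each value by +128 so it indexes the count array directly,
    then rebuild the array from the counts.
    """
    counts = [0] * 256
    for v in a:                       # pass 1: count occurrences
        counts[v + 128] += 1
    out_pos = 0
    for idx, c in enumerate(counts):  # pass 2: rewrite the array in order
        a[out_pos:out_pos + c] = [idx - 128] * c
        out_pos += c
    return a
```

As in the C++ versions, no input element is ever moved; the sorted output is recreated entirely from the counts.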
Blast Into Math!

ISBN: 978-87-403-0330-8
1st edition, 215 pages
Price: free

A fun, rigorous introduction to pure mathematics which is suitable for both students and a general audience interested in learning what pure mathematics is all about.

About the book

Pure mathematics is presented in a friendly, accessible, and nonetheless rigorous style. Definitions, theorems, and proofs are accompanied by creative analogies and illustrations to convey the meaning and intuition behind the abstract math. The key to reading and understanding this book is doing the exercises. You don't need much background for the first few chapters, but the material builds upon itself, and if you don't do the exercises, eventually you'll have trouble understanding. The book begins by introducing fundamental concepts in logic and continues on to set theory and basic topics in number theory. The sixth chapter shows how we can change our mathematical perspective by writing numbers in bases other than the usual base 10. The last chapter introduces analysis. Readers will be both challenged and encouraged. A parallel is drawn between the process of working through the book and the process of mathematics research.
If you read this book and do all the exercises, you will not only learn how to prove theorems, you'll also experience what mathematics research is like: exciting, challenging, and fun! Like the Facebook page for Blast Into Math here: https://www.facebook.com/BlastIntoMath

Contents

1. To the reader
2. Pure mathematics: the proof of the pudding is in the eating
   1. A universal language
   2. Theorems, propositions, and lemmas
   3. Logic
   4. Ready? Set? Prove!
   5. Exercises
   6. Examples and hints
3. Sets of numbers: mathematical playgrounds
   1. Set theory
   2. Numbers
   3. The least upper bound property
   4. Proof by induction
   5. Exercises
   6. Examples and hints
4. The Euclidean algorithm: a computational recipe
   1. Division
   2. Greatest common divisors
   3. Proof of the Euclidean Algorithm
   4. Greatest common divisors in disguise
   5. Exercises
   6. Examples and hints
5. Prime numbers: indestructible building blocks
   1. Ingredients in the proof of the Fundamental Theorem of Arithmetic
   2. Unique prime factorization: the Fundamental Theorem of Arithmetic
   3. How many primes are there?
   4. Counting infinity
   5. Exercises
   6. Examples and hints
6. Mathematical perspectives: all your base are belong to us
   1. Number bases: infinitely many mathematical perspectives
   2. Fractions in bases
   3. Exercises
   4. Examples and hints
7. Analytic number theory: ants, ghosts and giants
   1. Sequences: mathematical ants
   2. Real numbers and friendly rational numbers
   3. Series: a tower of mathematical ants
   4. Decimal expansions
   5. The Prime Number Theorem
   6. Exercises
   7. Examples and hints
8. Afterword
9. Bibliography

About the Authors

Julie Rowlett is an American mathematician currently teaching and researching pure mathematics at the University of Goettingen and the Max Planck Institute for Mathematics in Germany. Her research focus is geometric analysis. She received her Bachelor of Science in Mathematics from the University of Washington in 2001 and her PhD in Mathematics from Stanford University in 2006.
Her post-doctoral research experience includes the Centre de Recherches Mathematiques in Montreal, the Mathematical Sciences Research Institute in Berkeley, and the Hausdorff Center for Mathematics in Bonn. Julie has taught courses at Stanford University for the Education Program for Gifted Youth, at the University of California Santa Barbara, and at the University of Goettingen (in German). In addition to math, she enjoys cooking, learning foreign languages, singing, and dancing.

Henry Segerman is a British/American mathematician, currently working as a research fellow at the University of Melbourne in Australia. He received his Master of Mathematics degree from the University of Oxford in 2001 and his PhD in Mathematics from Stanford University in 2007. He was a postdoctoral lecturer at the University of Texas at Austin from 2007 to 2010, and will start an assistant professorship at Oklahoma State University in 2013. In addition to his research in 3-dimensional geometry and topology he is a mathematical artist, having exhibited works in art exhibitions at the Joint Mathematics Meetings and the Bridges conferences on mathematics and the arts. He is also an associate editor for the Journal of Mathematics and the Arts. He works mainly in the medium of 3D printed sculpture, but occasionally dabbles in 2D work, including of course illustration! See www.segerman.org for many more mathematically artistic projects.
Reviews

Zohaib Nasir ★★★★★ -- I think by reading this anyone can increase their ability to solve math problems. :-)

Raymond D. Deans ★★★★★ -- This is for students who are starting off in learning the subject, and a good reinforcement to those who find it to be difficult.

Paul Ziegler ★★★★★ -- Bookboon has been publishing free books for several years now. I am completing one of their newest mathematics texts, "Blast into Math." It is an excellently written book about mathematical logic using number theory as a means of illustrating how mathematicians think and work. Excellent book, excellent website.

Hlazo Ngwenya ★★★★☆ -- It's a good book, especially for those that find mathematics uninteresting.
gnuplot demo script: fit.dem
autogenerated by webify.pl on Tue Mar 6 11:35:46 2007
gnuplot version gnuplot 4.3 patchlevel CVS-14Feb2007

# $Id: fit.dem,v 1.5 2004/09/25 03:39:20 sfeam Exp $
print "Some examples how data fitting using nonlinear least squares fit"
print "can be done."
print ""
set title 'data for first fit demo'
plot 'lcdemo.dat'
set xlabel "Temperature T [deg Cels.]"
set ylabel "Density [g/cm3]"
set key below
print "now fitting a straight line to the data :-)"
print "only as a demo without physical meaning"
load 'line.fnc'
y0 = 0.0
m = 0.0
show variables
set title 'all fit params set to 0'
plot 'lcdemo.dat', l(x)
fit l(x) 'lcdemo.dat' via y0, m
set title 'unweighted fit'
plot 'lcdemo.dat', l(x)
fit l(x) 'lcdemo.dat' using 1:2:3 via y0, m
set title 'fit weighted towards low temperatures'
plot 'lcdemo.dat', l(x)
fit l(x) 'lcdemo.dat' using 1:2:4 via y0, m
set title 'bias to high-temperates'
plot 'lcdemo.dat', l(x)
print "now use real single-measurement errors to reach such a result (-> return)"
print "(look at the file lcdemo.dat and compare the columns to see the difference)"
set title 'data with experimental errors'
plot 'lcdemo.dat' using 1:2:5 with errorbars
fit l(x) 'lcdemo.dat' using 1:2:5 via y0, m
set title 'fit weighted by experimental errors'
plot 'lcdemo.dat' using 1:2:5 with errorbars, l(x)
print "It's time now to try a more realistic model function"
load 'density.fnc'
show functions
print "density(x) is a function which shall fit the whole temperature"
print "range using a ?: expression. It contains 6 model parameters which"
print "will all be varied. Now take the start parameters out of the"
load 'start.par'
set title 'initial parameters for realistic model function'
plot 'lcdemo.dat', density(x)
fit density(x) 'lcdemo.dat' via 'start.par'
set title 'fitted to realistic model function'
plot 'lcdemo.dat', density(x)
print "looks already rather nice? We will do now the following: set"
print "the epsilon limit higher so that we need more iteration steps"
print "to convergence. During fitting please hit ctrl-C. You will be asked"
print "Stop, Continue, Execute: Try everything. You may define a script"
print "using the FIT_SCRIPT environment variable. An example would be"
print "'FIT_SCRIPT=plot nonsense.dat'. Normally you don't need to set"
print "FIT_SCRIPT since it defaults to 'replot'. Please note that FIT_SCRIPT"
print "cannot be set from inside gnuplot."
print ""
FIT_LIMIT = 1e-10
fit density(x) 'lcdemo.dat' via 'start.par'
set title 'fit with more iterations'
plot 'lcdemo.dat', density(x)
FIT_LIMIT = 1e-5
print "\nNow a brief demonstration of 3d fitting."
print "hemisphr.dat contains random points on a hemisphere of"
print "radius 1, but we let fit figure this out for us."
print "It takes many iterations, so we limit FIT_MAXITER to 50."
#HBB: made this a lot harder: also fit the center of the sphere
#h(x,y) = sqrt(r*r - (x-x0)**2 - (y-y0)**2) + z0
#HBB 970522: distort the function, so it won't fit exactly:
h(x,y) = sqrt(r*r - (abs(x-x0))**2.2 - (abs(y-y0))**1.8) + z0
x0 = 0.1
y0 = 0.2
z0 = 0.3
set title 'the scattered points, and the initial parameter'
splot 'hemisphr.dat' using 1:2:3, h(x,y)
# we *must* provide 4 columns for a 3d fit. We fake errors=1
fit h(x,y) 'hemisphr.dat' using 1:2:3:(1) via r, x0, y0, z0
set title 'the scattered points, fitted curve'
splot 'hemisphr.dat' using 1:2:3, h(x,y)
print "\n\nNotice, however, that this would converge much faster when"
print "fitted in a more appropriate co-ordinate system:"
print "fit r 'hemisphr.dat' using 0:($1*$1+$2*$2+$3*$3) via r"
print "where we are fitting f(x)=r to the radii calculated as the data"
print "is read from the file. No x value is required in this case."
FIT_MAXITER=0   # no limit : we cannot delete the variable once set
print "\n\nNow an example how to fit multi-branch functions\n"
print "The model consists of two branches, the first describing longitudinal"
print "sound velocity as function of propagation direction (upper data),"
print "the second describing transverse sound velocity (lower data).\n"
print "The model uses these data in order to fit elastic stiffnesses"
print "which occur differently in both branches.\n"
load 'hexa.fnc'
load 'sound.par'
set title 'sound data, and model with initial parameters'
plot 'soundvel.dat', vlong(x), vtrans(x)
# Must provide an error estimate for a 3d fit. Use constant 1
fit f(x,y) 'soundvel.dat' using 1:-2:2:(1) via 'sound.par'
# create soundfit.par, reading from sound.par and updating values
update 'sound.par' 'soundfit.par'
print ""
set title 'pseudo-3d multi-branch fit to velocity data'
plot 'soundvel.dat', vlong(x), vtrans(x)
print "Look at the file 'hexa.fnc' to see how the branches are realized"
print "using the data index as a pseudo-3d fit"
print ""
print "Next we only use every fifth data point for fitting by using the"
print "'every' keyword. Look at the fitting-speed increase and at"
print "fitting result."
print ""
load 'sound.par'
fit f(x,y) 'soundvel.dat' every 5 using 1:-2:2:(1) via 'sound.par'
set title 'fitted only every 5th data point'
plot 'soundvel.dat', vlong(x), vtrans(x)
print "When you compare the results (see 'fit.log') you remark that"
print "the uncertainties in the fitted constants have become larger,"
print "the quality of the plot is only slightly affected."
print ""
print "By marking some parameters as '# FIXED' in the parameter file"
print "you fit only the others (c44 and c13 fixed here)."
print ""
load 'sound2.par'
set title 'initial parameters'
plot 'soundvel.dat', vlong(x), vtrans(x)
fit f(x,y) 'soundvel.dat' using 1:-2:2:(1) via 'sound2.par'
set title 'fit with c44 and c13 fixed'
plot 'soundvel.dat', vlong(x), vtrans(x)
print "This has the same effect as specifying only the real free"
print "parameters by the 'via' syntax."
print ""
print "fit f(x) 'soundvel.dat' via c33, c11, phi0"
print ""
load 'sound.par'
set title 'initial parameters'
plot 'soundvel.dat', vlong(x), vtrans(x)
fit f(x,y) 'soundvel.dat' using 1:-2:2:(1) via c33, c11, phi0
set title 'fit via c33,c11,phi0'
plot 'soundvel.dat', vlong(x), vtrans(x)
print "Here comes an example of a very complex function..."
print ""
set xlabel "Delta [degrees]"
set ylabel "Reflectivity"
set title 'raw data'
#HBB 970522: here and below, use the error column present in moli3.dat:
plot 'moli3.dat' w e
print "now fitting the model function to the data"
load 'reflect.fnc'
#HBB 970522: Changed initial values to something sensible, i.e.
# something an experienced user of fit would actually use.
# FIT_LIMIT is also raised, to ensure a better fit.
eta = 1.2e-4
tc = 1.8e-3
show variables
show functions
set title 'initial parameters'
plot 'moli3.dat' w e, R(x)
fit R(x) 'moli3.dat' u 1:2:3 via eta, tc
set title 'fitted parameters'
#HBB 970522: added comment on result of last fit.
print "Looking at the plot of the resulting fit curve, you can see"
print "that this function doesn't really fit this set of data points."
print "This would normally be a reason to check for measurement problems"
print "not yet accounted for, and maybe even re-think the theoretic"
print "prediction in use."
print ""
print "You can have a look at all previous fit results by looking into"
print "the file 'fit.log' or whatever you defined the env-variable 'FIT_LOGFILE'."
print "Remember that this file will always be appended, so remove it"
print "from time to time!"
print ""
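For the straight-line fits in this demo (`fit l(x) 'lcdemo.dat' via y0, m`), gnuplot's iterative Marquardt-Levenberg solver converges to the ordinary least-squares solution, which for an unweighted linear model is available in closed form. The following Python sketch (not part of gnuplot; the function name is ours) shows the normal-equation result the fit converges to:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of the model y = y0 + m*x.

    Minimal sketch of what an unweighted straight-line `fit ... via y0, m`
    computes, using the closed-form normal equations rather than
    gnuplot's iterative Marquardt-Levenberg algorithm.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)            # Σ(x - x̄)²
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, ys))                  # Σ(x - x̄)(y - ȳ)
    m = sxy / sxx                                       # slope
    y0 = mean_y - m * mean_x                            # intercept
    return y0, m
```

The weighted variants in the demo (`using 1:2:3` etc.) differ only in that each residual is divided by the supplied error column before squaring.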
From Math Images

Getting Started

For a good overview of the project as a whole, see the Tour of the Math Images Project. This page may be especially useful for new users or anyone who would like to start contributing for the first time or in a new way to the Math Images Project.

Finding Images or Learning Mathematics

If, like many users, you are using the site to find interesting images or learn interesting mathematics, the Tour has everything you need to get started.

Commenting on, Creating or Editing Pages

See the Page Building Help page to learn about commenting on, creating or editing pages.

Discussing mathematics or math images

See the Discussing Math and Images page to learn how to participate in discussions about content on this site or about math or computer science in general.

Think about the site as a whole

See the Site Development Help page to learn how to leave general feedback on the Math Images website or to participate in discussions or projects aimed at improving this website as a whole.

Use the site as part of a class

Coming soon.
From Encyclopedia of Mathematics

Fisher F-distribution

A continuous probability distribution concentrated on $(0,\infty)$ with density

$$p(x)=\frac{(\nu_1/\nu_2)^{\nu_1/2}}{B(\nu_1/2,\,\nu_2/2)}\;x^{\nu_1/2-1}\left(1+\frac{\nu_1}{\nu_2}\,x\right)^{-(\nu_1+\nu_2)/2},\qquad x>0,$$

where $\nu_1,\nu_2>0$ are the numbers of degrees of freedom and $B$ is the beta-function. Up to a scale factor, the Fisher F-distribution coincides with a beta-distribution of the second kind (a type-VI distribution in Pearson's classification). It can be regarded as the distribution of a random variable represented in the form of the quotient

$$F=\frac{X_1/\nu_1}{X_2/\nu_2},$$

where the independent random variables $X_1$ and $X_2$ have gamma-distributions (cf. Gamma-distribution) with parameters $\nu_1/2$ and $\nu_2/2$, respectively. This relation is used for calculating the values of the Fisher F-distribution, since $F$ may equivalently be written as the ratio of two independent chi-squared variables (cf. Chi-squared distribution) with $\nu_1$ and $\nu_2$ degrees of freedom, each divided by its number of degrees of freedom. The square of a random variable having the Student distribution with $\nu$ degrees of freedom has the Fisher F-distribution with $1$ and $\nu$ degrees of freedom. The introduction of the Fisher F-distribution into statistical practice is due to R.A. Fisher (1924); it plays a fundamental role in dispersion analysis and in testing the equality of two variances.

See also Dispersion analysis.

References
[1] R.A. Fisher, "On a distribution yielding the error functions of several well-known statistics", Proc. Internat. Congress of Mathematicians (Toronto, 1924), 2, Univ. Toronto Press (1928)
[2] M.G. Kendall, A. Stuart, "The advanced theory of statistics. Distribution theory", 3. Design and analysis, Griffin (1969)
[3] H. Scheffé, "The analysis of variance", Wiley (1959)
[4] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics", Libr. math. tables, 46, Nauka (1983) (In Russian) (Processed by L.S. Bark and E.S. Kedrova)

The dispersion proportion is also known as the variance ratio (cf. Dispersion proportion).

How to Cite This Entry:
Fisher-F-distribution. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Fisher-F-distribution&oldid=28556
This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
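The relation to the Student distribution mentioned in the entry can be made explicit with a one-line derivation (a standard fact, stated here for completeness):

```latex
% A Student-t variable with \nu degrees of freedom can be written as
% t_\nu = Z \big/ \sqrt{\chi^2_\nu/\nu}, with Z \sim N(0,1)
% independent of \chi^2_\nu. Squaring gives
t_\nu^{2} \;=\; \frac{Z^{2}}{\chi^{2}_{\nu}/\nu}
        \;=\; \frac{\chi^{2}_{1}/1}{\chi^{2}_{\nu}/\nu},
% and since Z^2 \sim \chi^2_1, this is precisely the quotient defining
% the Fisher F-distribution with 1 and \nu degrees of freedom.
```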
minet: A R/Bioconductor Package for Inferring Large Transcriptional Networks Using Mutual Information

BMC Bioinformatics. 2008; 9: 461.

This paper presents the R/Bioconductor package minet (version 1.1.6) which provides a set of functions to infer mutual information networks from a dataset. Once fed with a microarray dataset, the package returns a network where nodes denote genes, edges model statistical dependencies between genes and the weight of an edge quantifies the statistical evidence of a specific (e.g. transcriptional) gene-to-gene interaction. Four different entropy estimators are made available in the package minet (empirical, Miller-Madow, Schurmann-Grassberger and shrink) as well as four different inference methods, namely relevance networks, ARACNE, CLR and MRNET. The package also integrates accuracy assessment tools, like F-scores, PR-curves and ROC-curves, in order to compare the inferred network with a reference one. The package minet provides a series of tools for inferring transcriptional networks from microarray data. It is freely available from the Comprehensive R Archive Network (CRAN) as well as from the Bioconductor website.

Modelling transcriptional interactions by large networks of interacting elements and determining how these interactions can be effectively learned from measured expression data are two important issues in systems biology [1]. It should be noted that by focusing only on transcript data, the inferred network should not be considered as a proper biochemical regulatory network, but rather as a gene-to-gene network where many physical connections between macromolecules might be hidden by short-cuts.
In spite of some evident limitations, the bioinformatics community has made important advances in this domain over the last few years [2,3]. In particular, mutual information networks have been successfully applied to transcriptional network inference [4-6]. Such methods, which typically rely on the estimation of mutual information between all pairs of variables, have recently held the attention of the bioinformatics community for the inference of very large networks (up to several thousand nodes) [4,7-9]. R is a widely used open source language and environment for statistical computing and graphics [10] which has become a de-facto standard in statistical modeling, data analysis, biostatistics and machine learning [11]. An important feature of the R environment is that it integrates generic data analysis and visualization functionalities with off-the-shelf packages implementing the latest advances in computational statistics. Bioconductor is an open source and open development software project for the analysis and comprehension of genomic data [12], mainly based on the R programming language. This paper introduces the new R and Bioconductor package minet, where the acronym stands for Mutual Information NETwork inference. This package is freely available on the R CRAN package resource [10] as well as on the Bioconductor website [12].

1 Mutual information networks

Mutual information networks are a subcategory of network inference methods. The rationale of this family of methods is to infer a link between a couple of nodes if it has a high score based on mutual information [9]. Mutual information network inference proceeds in two steps. The first step is the computation of the mutual information matrix (MIM), a square matrix whose i, j-th element is the mutual information between X_i and X_j, where X_i ∈ X, i = 1,...,n, is a discrete random variable denoting the expression level of the ith gene.
The second step is the computation of an edge score for each pair of nodes by an inference algorithm that takes the MIM as input. The adoption of mutual information in network inference tasks can be traced back to Chow and Liu's tree algorithm [13,14]. Mutual information provides a natural generalization of the correlation since it is a non-linear measure of dependency. Hence, both generalized correlation networks (relevance networks [7]) and conditional independence graphs (e.g. ARACNE [8]) can be built with mutual information. An advantage of these methods is their ability to deal with up to several thousands of variables, even in the presence of a limited number of samples. This is made possible by the fact that the MIM computation requires only n(n-1)/2 estimations of a bivariate mutual information term. Since each bivariate estimation can be computed quickly and has low variance even for a small number of samples, this family of methods is well suited to microarray data. Note that since mutual information is a symmetric measure, it is not possible to derive the direction of an edge using a mutual information network inference technique. Nevertheless, the orientation of the edges can be obtained by using algorithms, like IC, which are well known in the graphical modelling community [15].

1.1 Relevance Network

The relevance network approach [7] was introduced in gene clustering and was successfully applied to infer relationships between RNA expressions and chemotherapeutic susceptibility [6]. The approach consists in inferring a genetic network where a pair of genes {X_i, X_j} is linked by an edge if the mutual information I(X_i; X_j) is larger than a given threshold I_0. The complexity of the method is O(n^2) since all pairwise interactions are considered. Note that this method does not eliminate all the indirect interactions between genes.
For example, if gene X_1 regulates both gene X_2 and gene X_3, this would cause a high mutual information between the pairs {X_1, X_2}, {X_1, X_3} and {X_2, X_3}. As a consequence, the algorithm will set an edge between X_2 and X_3 although these two genes interact only through gene X_1.

1.2 CLR Algorithm

The CLR algorithm [4] is an extension of the relevance network approach. This algorithm computes the mutual information for each pair of genes and derives a score related to the empirical distribution of the MI values. In particular, instead of considering the information I(X_i; X_j) between genes X_i and X_j, it takes into account the score

z_ij = sqrt(z_i^2 + z_j^2),   where   z_i = max(0, (I(X_i; X_j) - μ_i) / σ_i)

and μ_i and σ_i are respectively the sample mean and standard deviation of the empirical distribution of the values I(X_i, X_k), k = 1,...,n. The CLR algorithm was successfully applied to decipher the E. coli TRN [4]. CLR has a complexity in O(n^2) once the MIM is computed.

1.3 ARACNE

The Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNE) [8] is based on the Data Processing Inequality [16]. This inequality states that, if gene X_1 interacts with gene X_3 through gene X_2, then

I(X_1; X_3) ≤ min(I(X_1; X_2), I(X_2; X_3)).

ARACNE starts by assigning to each pair of nodes a weight equal to the mutual information. Then, as in relevance networks, all edges for which I(X_i; X_j) < I_0 are removed, with I_0 a given threshold. Eventually, the weakest edge of each triplet is interpreted as an indirect interaction and is removed if the difference between the two lowest weights is above a threshold W_0. Note that increasing I_0 decreases the number of inferred edges, while the opposite effect is obtained by increasing W_0. If the network is a tree and only pairwise interactions are present, the method guarantees the reconstruction of the original network, once it is provided with the exact MIM.
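To make these steps concrete, here is an illustrative Python sketch (not minet's R code; both function names are ours) of a plug-in histogram estimate for one MIM entry, followed by ARACNE-style DPI pruning of a precomputed MIM:

```python
import math

def mutual_info(x, y, bins=4):
    """Plug-in mutual information (in nats) between two equal-length
    sequences, estimated from a 2-D histogram with equal-width bins.
    Illustrative only; minet's build.mim offers several estimators."""
    def bin_index(v, lo, hi):
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    n = len(x)
    lox, hix, loy, hiy = min(x), max(x), min(y), max(y)
    joint = {}                                # joint bin counts
    for xv, yv in zip(x, y):
        key = (bin_index(xv, lox, hix), bin_index(yv, loy, hiy))
        joint[key] = joint.get(key, 0) + 1
    px, py = {}, {}                           # marginal bin counts
    for (i, j), c in joint.items():
        px[i] = px.get(i, 0) + c
        py[j] = py.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():           # Σ p(x,y) log p(x,y)/(p(x)p(y))
        mi += (c / n) * math.log(c * n / (px[i] * py[j]))
    return mi

def aracne_prune(mim, w0=0.0):
    """Apply the Data Processing Inequality: in every triplet, drop the
    weakest edge when it lies below both other edges by more than w0."""
    n = len(mim)
    keep = [[mim[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(n):
                if k in (i, j):
                    continue
                if mim[i][j] < min(mim[i][k], mim[j][k]) - w0:
                    keep[i][j] = keep[j][i] = 0.0
    return keep
```

A relevance network corresponds to skipping the pruning step and simply zeroing MIM entries below the threshold I_0.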
ARACNE's complexity is O(n^3) since the algorithm considers all triplets of genes. In [8] the method was able to recover components of the TRN in mammalian cells and outperformed Bayesian networks and relevance networks on several inference tasks.

1.4 MRNET

MRNET [9] infers a network using the maximum relevance/minimum redundancy (MRMR) feature selection method [17,18]. The idea consists in performing a series of supervised MRMR gene selection procedures where each gene in turn plays the role of the target output. The MRMR method has been introduced in [17,18] together with a best-first search strategy for performing filter selection in supervised learning problems. Consider a supervised learning task where the output is denoted by Y and V is the set of input variables. The method ranks the set V of inputs according to a score that is the difference between the mutual information with the output variable Y (maximum relevance) and the average mutual information with all the previously ranked variables (minimum redundancy). The rationale is that direct interactions (i.e. the most informative variables with respect to the target Y) should be well ranked, whereas indirect interactions (i.e. the ones carrying information that is redundant with the direct ones) should be badly ranked by the method. The greedy search starts by selecting the variable X_i having the highest mutual information with the target Y. The second selected variable X_j will be the one with a high information I(X_j; Y) to the target and at the same time a low information I(X_j; X_i) to the previously selected variable. In the following steps, given a set S of selected variables, the criterion updates S by choosing the variable X_j that maximizes the score

s_j = u_j - r_j,

where u_j is a relevance term and r_j is a redundancy term. More precisely,

u_j = I(X_j; Y)

is the mutual information of X_j with the target variable Y, and

r_j = (1/|S|) Σ_{X_k ∈ S} I(X_j; X_k)

measures the average redundancy of X_j to the already selected variables X_k ∈ S.
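For concreteness, the greedy MRMR ranking for a single target can be sketched as follows (an illustrative Python re-statement, not minet's implementation; the function name is ours):

```python
def mrmr_scores(mim, target):
    """Greedy MRMR ranking for one target gene, as used per gene by MRNET.

    Given a precomputed mutual-information matrix `mim`, repeatedly pick
    the gene maximizing s_j = u_j - r_j, where u_j = I(X_j; Y) and r_j is
    the mean information between X_j and the already selected genes.
    Selection stops when the best remaining score turns negative.
    Returns {gene: score at the time it was selected}."""
    n = len(mim)
    remaining = [g for g in range(n) if g != target]
    selected, scores = [], {}
    while remaining:
        best_g, best_s = None, float("-inf")
        for g in remaining:
            u = mim[g][target]                    # relevance term
            r = (sum(mim[g][k] for k in selected) / len(selected)
                 if selected else 0.0)            # redundancy term
            if u - r > best_s:
                best_g, best_s = g, u - r
        if best_s < 0:
            break
        scores[best_g] = best_s
        selected.append(best_g)
        remaining.remove(best_g)
    return scores
```

MRNET runs this procedure once per gene and scores each pair {X_i, X_j} with the maximum of the two directional scores.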
At each step of the algorithm, the selected variable is expected to allow an efficient trade-off between relevance and redundancy. It has been shown in [19] that the MRMR criterion is an optimal "pairwise" approximation of the conditional mutual information I(X_i; X_j|S) between any two genes X_i and X_j given the set S of selected variables. The MRNET approach consists in repeating this selection procedure for each target gene by setting Y = X_i and V = X \ {X_i}, i = 1,...,n, where X is the set of the expression levels of all genes. For each pair {X_i, X_j}, MRMR returns two (not necessarily equal) scores s_i and s_j according to (4). The score of the pair {X_i, X_j} is then computed by taking the maximum of s_i and s_j. A specific network can then be inferred by deleting all the edges whose score lies below a given threshold I_0 (as in relevance networks, CLR and ARACNE). Thus, the algorithm infers an edge between X_i and X_j either when X_i is a well-ranked predictor of X_j (s_i > I_0) or when X_j is a well-ranked predictor of X_i (s_j > I_0). An effective implementation of the best-first search for quadratic problems is available in [20]. This implementation demands an O(f × n) complexity for selecting f features using a best-first search strategy. It follows that MRNET has an O(f × n^2) complexity since the feature selection step is repeated for each of the n genes. In other terms, the complexity ranges between O(n^2) and O(n^3) according to the value of f. In practice the selection of features stops once a variable obtains a negative score.

Implementation of the inference algorithms in minet

All the algorithms discussed above are available in the minet package. The RELNET algorithm is implemented by simply running the command build.mim, which returns the MIM, a matrix that can be considered as a weighted adjacency matrix of the network.
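The greedy MRMR ranking that MRNET repeats for each target gene can be illustrated with a short Python sketch. The function name is hypothetical, the mutual information values are assumed precomputed, and selection stops at a negative score, as the text notes happens in practice:

```python
def mrmr_rank(mi_to_target, mi_between, f):
    """Greedy maximum relevance / minimum redundancy ranking.
    mi_to_target[j] plays the role of I(X_j; Y);
    mi_between[j][k] plays the role of I(X_j; X_k).
    Returns up to f (index, score) pairs, stopping at a negative score."""
    candidates = set(range(len(mi_to_target)))
    selected, ranked = [], []
    while candidates and len(ranked) < f:
        best, best_score = None, None
        for j in candidates:
            redundancy = (sum(mi_between[j][k] for k in selected) / len(selected)
                          if selected else 0.0)
            score = mi_to_target[j] - redundancy  # s_j = u_j - r_j
            if best_score is None or score > best_score:
                best, best_score = j, score
        if best_score < 0:
            break
        candidates.remove(best)
        selected.append(best)
        ranked.append((best, best_score))
    return ranked
```

A variable that is highly informative about the target but redundant with an already selected variable is ranked after a weakly informative, non-redundant one, which is exactly the intended behaviour.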
CLR, ARACNE and MRNET are implemented by the commands clr(mim), aracne(mim) and mrnet(mim) respectively, each of which returns a weighted adjacency matrix of the network. It should be noted that the modularity of the minet package makes it possible to assess network inference methods on similarity matrices other than the MIM [21].

2 Mutual information estimation

An information-theoretic network inference technique aims at identifying connections between two genes (variables) by estimating the amount of information common to any pair of genes. Mutual information is a measure of the dependency between two discrete random variables. An important property of this measure is that it is not restricted to the identification of linear relations between the random variables [16]. If X is a continuous random variable taking values between a and b, the interval [a, b] can be discretized by partitioning it into |X| subintervals, called bins, where X denotes the bin index vector. We also use nb(x_k) to denote the number of data points in the kth bin and $m = \sum_{k \in X} nb(x_k)$ to denote the number of samples. If X is a random vector, each element X_i can be discretized separately into |X_i| bins with index vector X_i. Let X be a random vector and p a probability measure. The i,j-th element of the mutual information matrix (MIM) is defined by

$[MIM]_{ij} = I(X_i; X_j) = H(X_i) + H(X_j) - H(X_i, X_j)$ (5)

where the entropy of a random variable X is defined as

$H(X) = -\sum_{k} p(x_k) \log p(x_k)$

and I(X_i; X_j) is the mutual information between the random variables X_i and X_j. Hence, each mutual information computation demands the estimation of three entropy terms (Eq. 5). A fast entropy estimation is therefore essential for an effective network inference based on MI. Entropy estimation has gained much interest in feature selection and network inference over the last decade [22]. Most approaches focus on reducing the bias inherent to entropy estimation. In this section, some of the fastest and most widely used entropy estimators are presented.
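The identity of Eq. 5, which expresses the mutual information as a sum of three entropy terms, can be checked with a minimal plug-in estimator in Python (an illustrative sketch; the package itself computes this in C++ on discretized data):

```python
from collections import Counter
from math import log

def entropy(symbols):
    """Empirical (plug-in) entropy in nats."""
    m = len(symbols)
    return -sum(c / m * log(c / m) for c in Counter(symbols).values())

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), each term estimated empirically."""
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))
```

For two identical binary sequences the estimate is log 2 nats; for two independent uniform binary sequences it is 0.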
Other interesting approaches can be found in [22-26].

2.1 Empirical and Miller-Madow corrected estimators

The empirical estimator (also called "plug-in", "maximum likelihood" or "naïve", see [23]) is the entropy of the empirical distribution:

$\hat{H}^{emp}(X) = -\sum_{k \in X} \frac{nb(x_k)}{m} \log \frac{nb(x_k)}{m}$

Note that, because of the convexity of the logarithmic function, an underestimate of p(x_k) causes an error on the corresponding entropy term that is larger than the one given by an overestimation of the same quantity. As a result, entropy estimators are biased downwards, that is

$E[\hat{H}^{emp}] \le H$

It has been shown that the variance of the empirical estimator is upper-bounded by

$var(\hat{H}^{emp}) \le \frac{(\log m)^2}{m}$

which depends only on the number of samples, whereas the asymptotic bias of the estimate,

$bias(\hat{H}^{emp}) = -\frac{|X| - 1}{2m}$

depends also on the number of bins |X| [23]. When |X| is of the order of m, this estimator can still have a low variance but the bias can become very large [23]. The Miller-Madow correction is then given by the following formula:

$\hat{H}^{mm} = \hat{H}^{emp} + \frac{|X|_{>0} - 1}{2m}$

which is the empirical entropy corrected by the asymptotic bias, where $|X|_{>0}$ is the number of bins with non-zero probability. This correction, while adding no computational cost to the empirical estimator, reduces the bias without changing the variance. As a result, the Miller-Madow estimator is often preferred to the naïve empirical entropy estimator.

2.2 Shrink entropy estimator

The rationale of the shrink estimator [27] is to combine two different estimators, one with low variance and one with low bias, using a weighting factor λ:

$\hat{p}_\lambda(x_k) = \lambda \frac{1}{|X|} + (1 - \lambda) \frac{nb(x_k)}{m}$

Shrinkage is a general technique to improve an estimator for a small sample size [3]. As the value of λ tends to one, the estimated entropy is moved toward the maximal entropy (uniform probability), whereas when λ is zero the estimated entropy tends to the empirical one.
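The Miller-Madow correction amounts to one extra additive term on top of the plug-in entropy, as the following Python sketch shows (the function name is hypothetical; input is assumed to be a list of binned symbols):

```python
from collections import Counter
from math import log

def entropy_mm(symbols):
    """Miller-Madow estimator: empirical entropy plus the asymptotic
    bias correction (|X| - 1) / (2m), where |X| is the number of
    non-empty bins and m the number of samples."""
    counts = Counter(symbols)
    m = len(symbols)
    h_emp = -sum(c / m * log(c / m) for c in counts.values())
    return h_emp + (len(counts) - 1) / (2 * m)
```

With a single occupied bin the correction vanishes and the estimate stays at zero; with two bins and two samples the estimate is log 2 plus 1/4.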
Let λ* be the value minimizing the mean square error, see [27]:

$\lambda^* = \arg\min_{\lambda} E\left[\sum_{k \in X} \left(\hat{p}_\lambda(x_k) - p(x_k)\right)^2\right]$

It has been shown in [28] that the optimal λ is given by

$\lambda^* = \frac{1 - \sum_{k \in X} \hat{p}(x_k)^2}{(m - 1)\sum_{k \in X} \left(\frac{1}{|X|} - \hat{p}(x_k)\right)^2}$

where $\hat{p}(x_k) = nb(x_k)/m$ denotes the empirical frequency.

2.3 The Schurmann-Grassberger Estimator

The Dirichlet distribution can be used in order to estimate the entropy of a discrete random variable. The Dirichlet distribution is the multivariate generalization of the beta distribution. It is also the conjugate prior of the multinomial distribution in Bayesian statistics. More precisely, the density of a Dirichlet distribution takes the following form:

$p(\theta_1, \ldots, \theta_{|X|}) = \frac{\Gamma\left(\sum_{k} \beta_k\right)}{\prod_{k} \Gamma(\beta_k)} \prod_{k} \theta_k^{\beta_k - 1}$

where β_k is the prior parameter of the event x_k and Γ(·) is the gamma function (see [25,27,29] for more details). In case of no a priori knowledge, the β_k are assumed to be equal (β_k = N, k ∈ X) so that no event becomes more probable than another. Note that using a Dirichlet prior with parameters N is equivalent to adding N ≥ 0 "pseudo-counts" to each bin i ∈ X. The prior actually provides the estimator with the information that |X|N counts have been observed in previous experiments. From that viewpoint, |X|N becomes the a priori sample size. The entropy of a Dirichlet distribution can be computed directly with the following equation:

$\hat{H}^{dir} = \frac{1}{m + |X|N} \sum_{k \in X} \left(nb(x_k) + N\right)\left(\psi(m + |X|N + 1) - \psi(nb(x_k) + N + 1)\right)$

with $\psi(z) = \frac{d \ln \Gamma(z)}{dz}$ the digamma function. Various choices of prior parameters have been proposed in the literature [29-31]. Schurmann and Grassberger have proposed the prior $N = \frac{1}{|X|}$ [32], which has been retained in the package.

Implementation of estimators in minet

The mutual information matrix is estimated by using the function build.mim(dataset, estimator). This function returns a matrix of pairwise mutual informations computed in nats (base e) and takes two arguments:

1. the data frame dataset, which stores the gene expression dataset or a generic dataset where columns contain variables/features and rows contain outcomes/samples;

2. the string estimator, which denotes the routine used to perform the mutual information estimation.
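A minimal Python sketch of the shrink estimator follows, operating on a list of bin counts. The closed-form λ* used here is the James-Stein-type expression attributed above to [28]; treat the exact formula, as well as the function name, as assumptions of this sketch:

```python
from math import log

def entropy_shrink(counts):
    """Shrinkage entropy estimate: mix the empirical frequencies with
    the uniform distribution using a data-driven weight lambda*,
    clipped to [0, 1], then take the plug-in entropy of the mixture."""
    m = sum(counts)
    p = len(counts)                       # number of bins |X|
    freqs = [c / m for c in counts]
    target = 1.0 / p                      # uniform (maximum entropy) target
    num = 1.0 - sum(f * f for f in freqs)
    den = (m - 1) * sum((target - f) ** 2 for f in freqs)
    lam = 1.0 if den == 0.0 else min(1.0, max(0.0, num / den))
    probs = [lam * target + (1 - lam) * f for f in freqs]
    return -sum(q * log(q) for q in probs if q > 0)
```

For an already uniform sample the denominator vanishes, λ is taken as one, and the estimate equals the maximal entropy log |X|; for a degenerate sample λ* is zero and the empirical value is returned.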
The package makes available four estimation routines: "mi.empirical", "mi.shrink", "mi.sg" and "mi.mm" (default: "mi.empirical"), each referring to one of the estimation techniques explained above.

3 Discretization methods

All the estimators discussed in the previous section have been designed for discrete variables. If the random variable X is continuous and takes values between a and b, it is then required to partition the interval [a, b] into |X| sub-intervals in order to adopt a discrete entropy estimator. The two most widely used discretization algorithms are equal width and equal frequency quantization. These are explained in the next sections. Other discretization methods can be found in [33-35].

3.1 Equal Width

The principle of the equal width discretization is to divide the range [a_i, b_i] of each variable X_i, i ∈ {1, ..., n}, in the dataset into |X_i| sub-intervals of equal size:

$\left[a_i,\; a_i + \frac{b_i - a_i}{|X_i|}\right), \left[a_i + \frac{b_i - a_i}{|X_i|},\; a_i + 2\,\frac{b_i - a_i}{|X_i|}\right), \ldots, \left[a_i + (|X_i| - 1)\,\frac{b_i - a_i}{|X_i|},\; b_i + \varepsilon\right)$

Note that an ε is added to the last interval in order to include the maximal value in one of the |X_i| bins. This discretization scheme has an O(m) complexity cost (per variable).

3.2 Global Equal Width

The principle of the global equal width discretization is the same as that of equal width (Sec. 3.1), except that the considered range [a, b] is not the range of each random variable as in Sec. 3.1 but the range of the random vector composed of all the variables in the dataset. In other words, a and b are respectively the minimal and the maximal value of the dataset.

3.3 Equal Frequency

The equal frequency discretization scheme consists in partitioning the range [a_i, b_i] of each variable X_i in the dataset into |X_i| intervals, each having the same number m/|X_i| of data points. As a result, the size of each interval can be different. Note that if the |X_i| intervals have equal frequencies, the computation of the entropy is straightforward: it is $\log |X_i|$.
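The equal width scheme of Sec. 3.1 can be sketched in Python as follows; passing the dataset-wide minimum and maximum as lo/hi gives the global equal width variant of Sec. 3.2 (function and parameter names are hypothetical):

```python
def equal_width_bins(values, nbins, lo=None, hi=None):
    """Assign each value to one of nbins intervals of equal width
    spanning [lo, hi] (per-variable range by default); the maximum
    value is placed in the last bin, mimicking the epsilon trick."""
    a = min(values) if lo is None else lo
    b = max(values) if hi is None else hi
    width = (b - a) / nbins
    if width == 0:
        return [0 for _ in values]       # constant variable: single bin
    return [min(int((v - a) / width), nbins - 1) for v in values]
```

On the sample 0, 2.5, 5, 7.5, 10 with four bins the cut points fall at 2.5, 5 and 7.5, and the maximum lands in the last bin.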
However, there can be more than m/|X_i| identical values in a vector of measurements. In such a case, one of the bins will be more dense than the others and the resulting entropy will be different from $\log |X_i|$. It should be noted that this discretization is reported in some papers as one of the most efficient methods (e.g. for naive Bayes classification) [35].

Implementation of discretization strategies in minet

The discretization is performed in minet by the function discretize(dataset, disc = "equalfreq", nbins = sqrt(nrow(dataset))), where:

• dataset is the dataset to be discretized;

• disc is a string which can take three values: "equalfreq", "equalwidth" and "globalequalwidth" (default is "equalfreq");

• nbins is the number of bins to be used for discretization, which is by default set to $\sqrt{m}$, where m is the number of samples [35].

Note that functions used by the built-in R hist() function can also be used here, such as nclass.FD(dataset), nclass.scott(dataset) and nclass.Sturges(dataset).

4 Assessment of the network inference algorithm

A network inference problem can be seen as a binary decision problem where the inference algorithm plays the role of a classifier: for each pair of nodes, the algorithm either returns an edge or not. Each pair of nodes can thus be assigned a positive label (an edge) or a negative one (no edge). A positive label (an edge) predicted by the algorithm is considered as a true positive (TP) or as a false positive (FP) depending on whether the corresponding edge is present or not in the underlying true network, respectively. Analogously, a negative label is considered as a true negative (TN) or a false negative (FN) depending on whether the corresponding edge is absent or present in the underlying true network, respectively. Note that all mutual information network inference methods use a threshold value in order to delete the arcs having too low a score. Hence, for each threshold value, a confusion matrix can be computed.
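Equal frequency binning can be sketched by ranking the sample and cutting the ranks into groups of roughly m/|X_i| points (an illustrative Python sketch with a hypothetical function name, not minet's implementation):

```python
def equal_freq_bins(values, nbins):
    """Equal frequency binning: sort the m samples and assign bin
    indices so that each bin receives (roughly) m / nbins points."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    m = len(values)
    bins = [0] * m
    for rank, i in enumerate(order):
        bins[i] = min(rank * nbins // m, nbins - 1)
    return bins
```

With eight distinct values and four bins, every bin receives exactly two points, so the plug-in entropy of the binned variable is the maximal log |X_i| noted in the text.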
4.1 ROC curves

The false positive rate is defined as

$FPR = \frac{FP}{FP + TN}$

and the true positive rate, also known as recall or sensitivity, as

$TPR = \frac{TP}{TP + FN}$

A Receiver Operating Characteristic (ROC) curve is a graphical plot of the TPR (true positive rate) vs. the FPR (false positive rate) for a binary classifier system as the threshold is varied [36]. A perfect classifier would yield a point in the upper left corner of the ROC space (having coordinates [0,1]), representing 100% TPR (all true positives are found) and 0% FPR (no false positives are found). A completely random guess gives a point along the diagonal line (the so-called line of no-discrimination), which goes from the bottom left to the top right corner. Points above the diagonal line indicate good classification results, while points below the line indicate wrong results.

4.2 PR curves

It is generally recommended [37] to use receiver operating characteristic (ROC) curves when evaluating binary decision problems in order to avoid effects related to the chosen threshold. However, ROC curves can present an overly optimistic view of an algorithm's performance if there is a large skew in the class distribution, as typically encountered in transcriptional network inference because of sparseness. To tackle this problem, precision-recall (PR) curves have been cited as an alternative to ROC curves [38]. Let the precision quantity

$p = \frac{TP}{TP + FP}$

measure the fraction of real edges among the ones classified as positive, and the recall quantity

$r = \frac{TP}{TP + FN}$

also known as the true positive rate (TPR), denote the fraction of real edges that are correctly inferred. These quantities depend on the threshold chosen to return a binary decision. The PR curve is a diagram which plots the precision (p) versus the recall (r) for different values of the threshold on a two-dimensional coordinate system.

4.3 F-Scores

Note that a compact representation of the PR diagram is given by the maximum and/or the average of the F-score quantity [39]:

$F = \frac{2pr}{p + r}$

which is a harmonic average of precision and recall.
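The confusion counts and the resulting ROC point for one threshold can be computed as below (a toy Python sketch over a flat list of candidate edges; the function names are hypothetical):

```python
def confusion(scores, truth, threshold):
    """Count TP, FP, TN, FN for edges scored above a threshold.
    scores and truth are parallel lists over candidate edges."""
    tp = sum(1 for s, t in zip(scores, truth) if s > threshold and t)
    fp = sum(1 for s, t in zip(scores, truth) if s > threshold and not t)
    tn = sum(1 for s, t in zip(scores, truth) if s <= threshold and not t)
    fn = sum(1 for s, t in zip(scores, truth) if s <= threshold and t)
    return tp, fp, tn, fn

def roc_point(scores, truth, threshold):
    """One (FPR, TPR) point of the ROC curve for a given threshold."""
    tp, fp, tn, fn = confusion(scores, truth, threshold)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return fpr, tpr
```

Sweeping the threshold from the maximum score down to the minimum traces the full curve; a threshold that separates the classes perfectly yields the ideal corner point (0, 1).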
The general formula for non-negative real β is

$F_\beta = (1 + \beta^2)\,\frac{p\,r}{\beta^2 p + r}$

where β is a parameter denoting the weight of the recall. Two commonly used F-scores are the F_2-measure, which weights recall twice as much as precision, and the F_0.5-measure, which weights precision twice as much as recall. In transcriptional network inference, precision is often a more desirable feature than recall, since it is expensive to investigate whether a gene regulates another.

Assessment functionalities in minet

In order to benchmark the inference methods, the package provides a number of assessment tools. The validate(net, ref.net, steps = 50) function allows the user to compare an inferred network net to a reference network ref.net, described by a Boolean adjacency matrix. The assessment process consists in removing the inferred edges having a score below a given threshold and computing the related confusion matrix, for steps thresholds ranging from the minimum to the maximum value of the edge weights. A resulting dataframe containing the list of all the steps confusion matrices is returned and made available for further analysis. In particular, the function pr(table) returns the related precisions and recalls, rates(table) computes true positive and false positive rates, while the function fscores(table, beta) returns the F_β-scores. The functions show.pr(table) and show.roc(table) allow the user to plot PR curves and ROC curves respectively (Figure 3) from a list of confusion matrices.

Figure 3. Precision-Recall curves plotted with show.pr(table).

5 Example

Once the R platform is launched, the package, its description and its vignette can be loaded using the following commands:

library(help = minet)

A demo script (demo(demo)) shows the main functionalities of the package, which we describe in the following. In order to infer a network with the minet package, four steps are required:

• data discretization,
• MIM computation,
• network inference,
• normalization of the network (optional).
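The F_β formula above translates directly into code; the following Python sketch (hypothetical function, not minet's fscores routine) shows how β > 1 favours recall and β < 1 favours precision:

```python
def f_score(precision, recall, beta=1.0):
    """F_beta = (1 + beta^2) * p * r / (beta^2 * p + r).
    beta > 1 weights recall more heavily, beta < 1 weights precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With p = 1 and r = 0.5, the F_0.5 score exceeds the F_2 score, reflecting the preference for precision that the text argues is appropriate in transcriptional network inference.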
The main function of the package is minet, which sequentially executes the four steps mentioned above (see Figure 1).

Figure 1. The four steps of the minet function: discretization (disc), mutual information matrix computation (build.mim), inference (mrnet, aracne, clr) and normalization (norm).

The function minet(dataset, method, estimator, disc, nbins) takes the following arguments: dataset, a matrix or a dataframe containing the microarray data; method, the inference algorithm (such as ARACNE, CLR or MRNET); estimator, the entropy estimator used for the computation of mutual information (empirical, Miller-Madow, shrink, Schurmann-Grassberger); disc, the binning algorithm (i.e. equal frequency or equal size interval); and the parameter nbins, which sets the number of bins to use. The final step of the minet function is the normalization using the norm(net) function. This step normalizes all the weights of the inferred adjacency matrix between 0 and 1. Hence, the minet function returns the inferred network as a weighted adjacency matrix with values ranging from 0 to 1, where the higher the weight, the stronger the evidence that a gene-gene interaction exists. For demo purposes the package also makes available the dataset syn.data, representing the expression of 50 genes in 100 experiments. This dataset has been synthetically generated from the network syn.net using the microarray data generator SynTReN [40]. This dataset can be loaded with data(syn.data) and the corresponding original network with data(syn.net). Note that the command

res <- minet(syn.data, "mrnet", "mi.shrink", "equalwidth", 10)

is a compact way to execute the following sequence of instructions:

In order to plot a PR-curve (see Figure 3), the functions show.pr and validate can be used.

table <- validate(res, syn.net)

In order to display the inferred network, the Rgraphviz package [41] can be used with the following commands (see Fig. 2):

Figure 2. Graph generated with minet and plotted with Rgraphviz.
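The normalization step is a simple min-max rescaling of the edge weights. A language-agnostic sketch in Python, mirroring what the text says minet's norm(net) step does (this is not the package's code):

```python
def norm_net(net):
    """Min-max normalize all weights of a weighted adjacency matrix
    into [0, 1]; a constant matrix is mapped to all zeros."""
    flat = [w for row in net for w in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0 for _ in row] for row in net]
    return [[(w - lo) / (hi - lo) for w in row] for row in net]
```

After this step the largest weight is exactly 1 and the smallest exactly 0, so thresholds used in validation can be chosen on a common [0, 1] scale regardless of the estimator.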
graph <- as(res, "graphNEL")

Note that, for the sake of computational efficiency, all the inference functions as well as the entropy estimators are implemented in C++. As a reference, a network of five hundred variables may be inferred in less than one minute on an Intel Pentium 4 at 2 GHz with 512 MB of DDR SDRAM.

6 Conclusion

Transcriptional network inference is a key issue toward the understanding of the relationships between the genes of an organism. Notwithstanding, few public domain tools are available when a thorough comparison of existing approaches is at stake. A new, freely available R/Bioconductor package has been introduced in this paper. This package makes available to biologists and bioinformatics practitioners a set of tools to infer networks from microarray datasets with a large number (several thousands) of genes. Four information-theoretic methods of network inference (i.e. relevance networks, CLR, ARACNE and MRNET), four different entropy estimators (i.e. empirical, Miller-Madow, Schurmann-Grassberger and shrink) and three validation tools (i.e. F-scores, PR curves and ROC curves) are implemented in the package. We deem that this tool is an effective answer to the increasing need for comparative tools in the growing domain of transcriptional network inference from expression data.

Authors' contributions

PEM and FL carried out the implementation of the R package minet (up to version 1.1.6). PEM and GB have written the package documentation as well as the manuscript. All authors read and approved the final version of the manuscript.

Availability and requirements

The R package minet is freely available from the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org as well as from the Bioconductor website http://bioconductor.org. The package runs on Linux, Mac OS and MS Windows with an installed version of R.
Available functions of the package minet (version 1.1.6)

This work was partially funded by the Communauté Française de Belgique under ARC grant no. 04/09-307. The authors thank their colleague Catharina Olsen for her appreciable comments, suggestions and testing of package functionalities. The authors also thank Korbinian Strimmer as well as the reviewers for their useful comments on the package and the paper.

• van Someren EP, Wessels LFA, Backer E, Reinders MJT. Genetic network modeling. Pharmacogenomics. 2002;3:507–525. doi: 10.1517/14622416.3.4.507. [PubMed] [Cross Ref]
• Gardner TS, Faith J. Reverse-engineering transcription control networks. Physics of Life Reviews. 2005;2. [PubMed]
• Schäfer J, Strimmer K. An empirical Bayes approach to inferring large-scale gene association networks. Bioinformatics. 2005;21:754–764. doi: 10.1093/bioinformatics/bti062. [PubMed] [Cross Ref]
• Faith J, Hayete B, Thaden J, Mogno I, Wierzbowski J, Cottarel G, Kasif S, Collins J, Gardner T. Large-Scale Mapping and Validation of Escherichia coli Transcriptional Regulation from a Compendium of Expression Profiles. PLoS Biology. 2007;5. [PMC free article] [PubMed]
• Basso K, Margolin A, Stolovitzky G, Klein U, Dalla-Favera R, Califano A. Reverse engineering of regulatory networks in human B cells. Nature Genetics. 2005;37. [PubMed]
• Butte AJ, Tamayo P, Slonim D, Golub T, Kohane I. Discovering functional relationships between RNA expression and chemotherapeutic susceptibility using relevance networks. Proceedings of the National Academy of Sciences. 2000;97:12182–12186. doi: 10.1073/pnas.220392197. [PMC free article] [PubMed] [Cross Ref]
• Butte AJ, Kohane IS. Mutual Information Relevance Networks: Functional Genomic Clustering Using Pairwise Entropy Measurements. Pac Symp Biocomput. 2000:418–429. [PubMed]
• Margolin AA, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Favera RD, Califano A.
ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006;7. [PMC free article] [PubMed]
• Meyer PE, Kontos K, Lafitte F, Bontempi G. Information-Theoretic Inference of Large Transcriptional Regulatory Networks. EURASIP J Bioinform Syst Biol. 2007:79879. [PMC free article] [PubMed]
• Ihaka R, Gentleman R. R: A language for data analysis and graphics. Journal of Computational and Graphical Statistics. 1996;5. http://www.R-project.org
• Venables WN, Ripley BD. Modern Applied Statistics with S. Fourth edition. Springer; 2002.
• Gentleman RC, Carey VJ, Bates DJ, Bolstad BM, Dettling M, Dudoit S, Ellis B, Gautier L, Ge Y, Gentry J, Hornik K, Hothorn T, Huber W, Iacus S, Irizarry R, Leisch F, Li C, Maechler M, Rossini AJ, Sawitzki G, Smith C, Smyth GK, Tierney L, Yang YH, Zhang J. Bioconductor: Open software development for computational biology and bioinformatics. Genome Biology. 2004;5. [PMC free article] [PubMed]
• Cheng J, Greiner R, Kelly J, Bell D, Liu W. Learning Bayesian Networks from Data: An Information-Theory Based Approach. Artificial Intelligence. 2002;137.
• Chow C, Liu C. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory. 1968.
• Pearl J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc; 1988.
• Cover TM, Thomas JA. Elements of Information Theory. New York: John Wiley; 1990.
• Tourassi GD, Frederick ED, Markey MK, Floyd CE. Application of the mutual information criterion for feature selection in computer-aided diagnosis. Medical Physics. 2001;28:2394–2402. doi: 10.1118/1.1418724. [PubMed] [Cross Ref]
• Peng H, Long F, Ding C. Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2005;27:1226–1238. doi: 10.1109/TPAMI.2005.159.
[PubMed] [Cross Ref]
• Ding C, Peng H. Minimum Redundancy Feature Selection From Microarray Gene Expression Data. Journal of Bioinformatics and Computational Biology. 2005;3:185–205. doi: 10.1142/S0219720005001004. [PubMed] [Cross Ref]
• Merz P, Freisleben B. Greedy and Local Search Heuristics for Unconstrained Binary Quadratic Programming. Journal of Heuristics. 2002;8. doi: 10.1023/A:1017912624016. [Cross Ref]
• Olsen C, Meyer PE, Bontempi G. On the Impact of Entropy Estimator in Transcriptional Regulatory Network Inference. In: Ahdesmäki M, Strimmer K, Radde N, Rahnenführer J, Klemm K, Lähdesmäki H, Yli-Harja O, editors. 5th International Workshop on Computational Systems Biology (WSCB 08). Tampere International Center for Signal Processing; 2008. p. 41.
• Daub CO, Steuer R, Selbig J, Kloska S. Estimating mutual information using B-spline functions – an improved similarity measure for analysing gene expression data. BMC Bioinformatics. 2004;5. [PMC free article] [PubMed]
• Paninski L. Estimation of entropy and mutual information. Neural Computation. 2003;15:1191–1253. doi: 10.1162/089976603321780272. [Cross Ref]
• Beirlant J, Dudewicz EJ, Györfi L, van der Meulen E. Nonparametric Entropy Estimation: An Overview. Journal of Statistics. p. 97.
• Nemenman I, Bialek W, de Ruyter van Steveninck R. Entropy and information in neural spike trains: Progress on the sampling problem. Phys Rev E Stat Nonlin Soft Matter Phys. 2004;69:056111.
• Darbellay G, Vajda I. Estimation of the information by an adaptive partitioning of the observation space. IEEE Transactions on Information Theory. 1999.
• Hausser J. Improving entropy estimation and inferring genetic regulatory networks. Master's thesis. National Institute of Applied Sciences Lyon; 2006. http://strimmerlab.org/publications/
• Schäfer J, Strimmer K. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Stat Appl Genet Mol Biol.
2005;4. [PubMed]
• Wu L, Neskovic P, Reyes E, Festa E, Heindel W. Classifying n-back EEG data using entropy and mutual information features. European Symposium on Artificial Neural Networks. 2007.
• Beerenwinkel N, Schmidt B, Walter H, Kaiser R, Lengauer T, Hoffmann D, Korn K, Selbig J. Diversity and complexity of HIV-1 drug resistance: A bioinformatics approach to predicting phenotype from genotype. Proc Natl Acad Sci U S A. 2002;99:8271–8276. doi: 10.1073/pnas.112177799. [PMC free article] [PubMed] [Cross Ref]
• Krichevsky R, Trofimov V. The performance of universal coding. IEEE Transactions on Information Theory. 1981.
• Schurmann T, Grassberger P. Entropy estimation of symbol sequences. Chaos. 1996. [PubMed]
• Dougherty J, Kohavi R, Sahami M. Supervised and Unsupervised Discretization of Continuous Features. International Conference on Machine Learning. 1995. pp. 194–202.
• Liu H, Hussain F, Tan CL, Dash M. Discretization: An Enabling Technique. Data Mining and Knowledge Discovery. 2002;6.
• Yang Y, Webb GI. On why discretization works for naive-bayes classifiers. Proceedings of the 16th Australian Joint Conference on Artificial Intelligence. 2003.
• Davis J, Goadrich M. The Relationship Between Precision-Recall and ROC Curves. Proceedings of the 23rd International Conference on Machine Learning. 2006.
• Provost F, Fawcett T, Kohavi R. The case against accuracy estimation for comparing induction algorithms. Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann, San Francisco, CA; 1998. pp. 445–453.
• Bockhorst J, Craven M. Markov Networks for Detecting Overlapping Elements in Sequence Data. In: Saul LK, Weiss Y, Bottou L, editors. Advances in Neural Information Processing Systems 17. Cambridge, MA: MIT Press; 2005. pp. 193–200.
• Sokolova M, Japkowicz N, Szpakowicz S. Beyond Accuracy, F-score and ROC: a Family of Discriminant Measures for Performance Evaluation.
Proceedings of the AAAI'06 Workshop on Evaluation Methods for Machine Learning. 2006.
• Van den Bulcke T, Van Leemput K, Naudts B, van Remortel P, Ma H, Verschoren A, De Moor B, Marchal K. SynTReN: a generator of synthetic gene expression data for design and analysis of structure learning algorithms. BMC Bioinformatics. 2006;7:43. doi: 10.1186/1471-2105-7-43. [PMC free article] [PubMed] [Cross Ref]
• Carey VJ, Gentry J, Whalen E, Gentleman R. Network Structures and Algorithms in Bioconductor. Bioinformatics. 2005;21:135–136. doi: 10.1093/bioinformatics/bth458. [PubMed] [Cross Ref]

Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
THEY'RE EVERYWHERE!

Grades 7-12

Envision a world without geometric shapes--no houses, no buildings, no roads, no airplanes, no television, no computers. This lesson is designed to give students an appreciation of the polygons and polyhedrons around them that make their world one of order and strength. After reviewing the definitions and attributes of polygons, students will view the video segments to see how these polygons are found in every building. Polygons make up the repeating structure and support of these buildings. Students will view segments that include an architect who uses geometry to identify her limitations in designing buildings; a structural engineer who uses geometry as supports for the massive structures he designs; and an inventor who uses geometry to generate structures that change from one shape to another. Students will build polygons and polyhedrons, and then construct airplanes out of polygons. Students will then view a "movie" on the invasion of polygons, which will be the topic of their final project, "You Can't Get Away From Them." This investigation can extend over two class periods.

The Eddie Files #103: Geometry: Invasion of the Polygons

Students will be able to:
1. review the vocabulary of polygons.
2. build two- and three-dimensional geometric shapes.
3. build flexible and inflexible shapes.
4. construct a paper airplane to determine how shape can control distance and accuracy.
5. create expanding and contracting shapes.
6. investigate the use of polygons and polyhedrons in architecture and geometric inventions.

Texas Assessment of Academic Skills (TAAS), Exit Level Math Objectives:
#3: Demonstrate an understanding of geometric properties and relationships.
#11: Determine solution strategies and solve problems.
#12: Express or solve problems using mathematical representation.
NCTM Standards for Grades 9-12:
Standard 1: Mathematics as Problem Solving
Standard 3: Mathematics as Reasoning
Standard 7: Geometry from a Synthetic Perspective
Standard 14: Mathematical Structure

Materials
Per class:
□ string
□ one trash can per ten students
Per group of 4 students:
□ box of toothpicks
□ bag of gumdrops or can of modeling clay
□ one polygon puzzle, cut out and stored in a plastic bag
Per student:
□ paper
□ scissors
□ camera and film (optional)
□ poster board

Vocabulary
□ polygon
□ polyhedron
□ convex polygons
□ nonconvex polygons

Day One

When students enter the room, ask them to build a three sided figure from three gumdrops (modeling clay may be substituted for gumdrops) and three toothpicks. The gumdrops will be the vertices of the shapes and the toothpicks will be the sides. Ask students to name the shape they have built. (a triangle) On the overhead or chalkboard, make a list of all the kinds of triangles they know, categorizing by sides and by angles. (scalene, isosceles, equilateral, obtuse, acute, right) Ask students to work in groups of four, with each of the students building one of these shapes with their toothpicks and gumdrops. Tell students that they have just built the simplest polygon. Ask students for a definition of polygon. (a closed shape made of segments) Ask students what the next polygon with four sides is called. (a quadrilateral) On the overhead, generate a list of all the special quadrilaterals. (parallelogram, rectangle, rhombus, square, trapezoid, kite) Call out the description of one of the quadrilaterals and ask each group to build the quadrilateral described with toothpicks and gumdrops. Have the students name the quadrilateral.
For example, tell the students to build the quadrilateral that has the following:
four congruent sides and four congruent angles (a square)
four congruent angles (a rectangle)
only one pair of parallel sides (a trapezoid)
four congruent sides (a rhombus)
opposite sides parallel (a parallelogram)
two pairs of adjacent sides congruent, but opposite sides not congruent (a kite)

Next have the students generate a five sided polygon (a pentagon), a six sided polygon (a hexagon), and an eight sided polygon (an octagon). Have students note that these are all two dimensional figures (flat). Now have each group try to solve the polygon puzzle included as part of this lesson. Each side of the triangle marked with a picture, word, or definition must be matched with its corresponding picture, word, or definition. The puzzle will be complete when all the pieces form a large triangle. Tell the students they will watch a segment of video where two construction workers find polygons in the three dimensional buildings that they are helping to construct. The students' responsibility will be to find polygons in the buildings. Tell students they will be asked to design a building without polygons. Students will also be asked to find polyhedrons in the architect's designs and actual buildings. Ask the students to watch what the worker on the left is doing during this segment. BEGIN The Eddie Files video after Eddie says, "So I hit the streets, looking for polygons." PAUSE when the male worker on the left stands up and says, "...basically very common in buildings, but there..." when the camera shows the construction site. Ask students to find polygons in the structure. RESUME the video. PAUSE again when the male worker admits, "That is a parallelogram," and the camera again shows the skyscraper being built. Ask students what predominant shape is seen. (a parallelogram or rectangle) Ask students what kinds of buildings could be built without polygons.
Could a building made totally of circles and curves be possible? Have students try to draw the shape they visualize. Ask for volunteers to share their drawings. Ask the class for the limitations of these buildings. For example, there would be no corners. How would that affect the walls and floors of the buildings? Would skyscrapers be possible? RESUME video until the female worker throws a green pepper into the male worker's lunch pail. Ask students what she is doing. (stealing his lunch) PAUSE after they have finished talking, on the frame of the cylindrical appearing building. Ask students if the building is really formed from circles or cylinders. (The building is really made of rectangles hinged together to form the appearance of a cylinder.) RESUME the video briefly. PAUSE the video on the building with the blue triangular roof with two chimneys. Ask students to find the polygons that form the roof and the entire building. The shapes of the sides are polygons, which are two dimensional. However, the actual building is a three dimensional shape, which is called a polyhedron. A polyhedron is a three dimensional shape made of polygons. Have students build polyhedrons with their toothpick polygons. Have them pick four polygons they constructed earlier and connect them with more toothpicks. Choose one and ask the students what polygons were used to construct the shape. Now ask each group of four students to build a polyhedron with at least one hexagon, one pentagon, two squares, and two triangles. Note that the more vertices connected, the more sturdy the shape. Have each group write a description of their polyhedron. Have the groups exchange the descriptions. Each group will now try to build another group's polyhedron. Ask the groups to check for accuracy with the original group when they are finished. Ask students if every group built the same polyhedron. (no) Ask students how many possible polyhedrons there are in the world.
(too many to count) Inform the students that they are now going to see an architect who makes her living designing polyhedrons for specific purposes. Tell students they have specific responsibilities during the viewing. They are to remember how many possible choices the architect has to solve the dilemma of how the train station should look; to identify the limitations of her building design; and to determine why she decided on a rectangular prism of glass. RESUME the video. PAUSE after she says, "...letting them know where the front door was." Ask students what limitations she had for the building design. (only 40 feet wide and 50 feet long, a small space to get 100,000 people through) RESUME the video. PAUSE after she says, "What if the whole building was just one big triangle?" What polygons are used to make this triangular polyhedron tower? (primarily triangles and some rectangles) Ask students if they think this tower fits in with the rest of the neighborhood and why. RESUME the video. PAUSE after, "You can see it from very far away." Ask students why she picked the glass rectangular prism. (same shape as the surrounding buildings, appears to be the same height, glass tower can be seen from very far away) RESUME video. PAUSE on the view of New York City at night. Have students place a toothpick and gumdrop triangle made with three toothpicks, one gumdrop square made with four toothpicks, and one gumdrop pentagon made with five toothpicks on their desks. Ask students which structure is the sturdiest. (triangle) Ask why the triangle is so sturdy. (cannot bend it inwards) Explain to the students that triangles are the only shapes that exist only in the convex form. Tell students that a convex polygon is a polygon with no vertex positioned so that a line drawn through one of the polygon's segments would contain a point in the interior of the polygon. Any line drawn through the triangle segments would not enter the triangle.
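The convexity test just described can also be sketched computationally. The cross-product check and the sample coordinates below are illustrative additions of mine, not part of the lesson:

```python
def is_convex(vertices):
    """Return True if the polygon (vertices listed in order) is convex.

    A polygon is convex when walking its boundary never changes turning
    direction: the cross product of every pair of consecutive edge
    vectors has the same sign.
    """
    n = len(vertices)
    signs = set()
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        x3, y3 = vertices[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:  # ignore collinear triples
            signs.add(cross > 0)
    return len(signs) <= 1

# Every triangle is convex, as the lesson notes ...
print(is_convex([(0, 0), (4, 0), (2, 3)]))          # True
# ... but a quadrilateral with one vertex pushed inward is not.
print(is_convex([(0, 0), (4, 0), (2, 1), (0, 4)]))  # False
```

This mirrors the toothpick demonstration: the triangle's turns all go the same way, while pushing one vertex of the quadrilateral inward reverses a turn.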
Now have students push one vertex of the square inwards. It is possible. The shape is no longer a square. It is a nonconvex quadrilateral. Have students trace along the sides of the quadrilateral. Have them notice that their pencil will go through the interior of the shape. Repeat the process for the pentagon. Again it is possible to turn the pentagon into a nonconvex pentagon. Ask students to write a conclusion on what makes the sturdiest polygon. Remind students that flexible shapes are not necessarily sturdy and strong. These two attributes are critical when building structures for human habitation and use. The next segment of video introduces students to a structural engineer. Tell the students their responsibility will be to define what a structural engineer does. The students will also build some simple shapes which add strength and rigidity to flexible materials. RESUME video. PAUSE after he says, "...space frame that is all made out of triangles." Ask students what the triangles do for the ceiling. (make it rigid) Ask what is the main function of a structural engineer in building. (to make buildings sturdy) RESUME video. PAUSE after he says, "...give the material strength to hold the weight of my hand." Have students roll a piece of paper and test the strength as the engineer did. RESUME video. STOP after he says, "This is why it is so important to understand the shape of a building." Cut a piece of string approximately two meters long. Divide the students into groups of no more than ten. Place a trash can in the center of each group and have each member of the group stand exactly one string length from the trash can. What shape will the students form? (circle) Have each student, one at a time, try to throw a piece of standard notebook paper without any folds or creases into the trash can from where the student is standing. Ask students to describe what happened. 
(paper hard to control) Have students take the same piece of paper and wad it into a tight ball. Repeat the process with students throwing the paper into the trash can. Ask students to describe what happened. (The more rigid ball was easily thrown into the trash can.) Have students go back to their desks to fold a paper airplane from a piece of notebook paper. First have them fold the paper in half the long way. Take two corners (the unfolded ones) and fold them back to form two right triangles on either side of the airplane. A right trapezoid should be formed. Now fold one obtuse triangle on each side, bringing B down to segment DF and creasing through A and F to form the longest side of the obtuse triangle. Now fold one more triangle on each side by folding segment AF onto segment DF. Have students look at the airplane from the rear. Ask them if they notice the same fold the structural engineer used to show how a triangular fold kept the shape from falling down. Have students go back to their circle. Again one at a time (for safety's sake), have students throw the airplane into the trash can. Students should discuss how much more movement is made possible by building with polygons instead of a wadded up piece of paper. Ask students to note how sturdy their shape is as they throw it. Finally, ask them to pick up their plane and flatten it. Have students note the number of triangles they actually folded. This is an example of a shape that was molded into another form, but could be put back to the original.

Day 2

Pre-Viewing Activity
Briefly review the activities of the previous day. Then have each student make a toothpick and gumdrop hexagon and push in one vertex to make it nonconvex. Without breaking it, have students try to collapse the entire hexagon into as small a shape as possible. Have students try to fold it back out into a hexagon. Tell students that by having a flexible shape, they are able to "morph" one shape into another and back again.
Focus for Viewing
Tell students that they are about to view a video segment about a geometric inventor. This inventor takes what the class did on a small scale and does it on a large one. The students' responsibility is to find the polygons that morph into other polygons to form beautiful polyhedrons. Students should also determine what the word "morph" means. Students should listen to find the one mistake he makes when describing certain polygons.

Viewing Activities
Begin The Eddie Files video where it was stopped on the previous day. Allow the students to watch the segment uninterrupted. Pause the video after he says, "That's the first step of inventing." Ask the students if they would like to see the inventor segment again. If so, rewind to the segment where his first invention is shown. Resume the video and ask students to tell where he made the mistake. (He calls triangles and pentagons three sided shapes.) Pause the video at the end of the inventor sequence when he says, "That's the first step of inventing." To prepare for the viewing segment summarizing the lesson, fast forward to the black and white section of the video which shows the "4" on the screen and stop the video here to first conduct the following activity.

Post-Viewing Activity
Tell students that they are going to invent a continuous band of paper without a beginning or an end that will go around the room. The students will only be allowed one sheet of notebook paper and scissors for the actual activity. Tell students that they will "morph" this piece of paper into this band, and then will "morph" it back to a piece of notebook paper. Have students work in groups of four. Give them some time to experiment with cutting the paper. If no one comes up with a solution, ask the students to follow these directions. Have the students fold their notebook paper in half to form a rectangle 8 1/2 by 5 1/2 inches. Have the students cut through both layers of paper according to the following diagram.
Cut along the dotted lines. Now have students cut along the crease, being certain to leave the outside segments on both ends uncut. Now have students unfold their shape. Ask them how they would have to cut to make the paper band go around the room, or at least around a large desk. Remind students that by now they should have a better appreciation of how useful polygons and polyhedrons are in our world. Students will now watch a spoof of a movie called "The Invasion of the Polygons." Student responsibility is to decide how many quadrilaterals the man under investigation at the end of the spoof could have seen. Students will also decide what is meant by, "You can run but you cannot hide!" as it relates to polygons. Begin the video at the "4." Have students watch the entire segment. Stop after the words, "Coming to a theater near you." Ask students what quadrilaterals the actor could have seen. (rectangle, parallelogram, rhombus, trapezoid, kite, square) Ask students the purpose of this segment. (It illustrates that polygons are everywhere. Unless we are out in nature, we are surrounded by them.) Within the community, take students to a construction site to observe and document polygons and polyhedrons in building structures. Have students interview an architect or structural engineer. Have students make a collage of polygons and polyhedrons they find in their house, car, church, grocery store, doctor's office, shopping mall, or neighborhood. Students will take photographs or sketch pictures of the polygons they find. Students will document the polygons and polyhedrons they display in the pictures. Students will then take a picture of a "polygonless" situation. The title for this activity will be "You Can't Get Away From Them." Have students use hinges and other materials to design and build polygon inventions that "morph". Have students visit an art gallery to see how artists use geometric shapes in their creations. 
Literature: Have students read Flatland: A Romance of Many Dimensions by Edwin Abbott.
Art: Have students study the tessellation work by M.C. Escher to discover how he used polygons as the basis of his patterns.
Architecture: Have students study the ancient buildings of the Greeks, Mayans, Moors, and Romans to see the use of polygons in their structure. Have students study medieval architecture in Europe versus the architecture of the Renaissance to note structural differences.
Science: Have students investigate how scientists use polygons as the basis of a chemical substance, such as a hexagon for a benzene ring.
Careers/Industry: Have students investigate the use of polygons and polyhedrons in the planning of automobiles and airplanes. Have students investigate the different ways to cut precious stones into geometric cuts.

1995-1996 National Teacher Training Institute / Austin Master Teacher: Linda Shaub
Each February, five $20,000 scholarships ($5,000 per year, renewable) are awarded by the Helen Way Klingler College of Arts and Sciences to the top scorers in the Mathematics Scholarship Competition. The competition is open to all high school seniors who are interested in studying at Marquette University. To be eligible for one of the scholarships, you must have applied to Marquette by December 1. Approximately 150 students take the mathematics exam each year. The mathematics exam covers high school mathematics through precalculus. Calculators are not allowed. You are welcome to study the mathematics exams from previous years that are posted below. Please visit Marquette's Undergraduate Admissions website for additional information about Marquette's scholarship competitions, and other scholarship opportunities.
FOM: topos theory qua f.o.m.; topos theory qua pure math Stephen G Simpson simpson at math.psu.edu Thu Jan 15 13:48:53 EST 1998 Let me try to summarize the current state of the discussion regarding topos theory qua f.o.m. (= foundations of mathematics). I started the discussion by asking about real analysis in topos theory. McLarty claimed that there is no problem about this. After a lot of back and forth, it turned out that the basis of McLarty's claim is that the topos axioms plus two additional axioms give a theory that is easily intertranslatable with Zermelo set theory with bounded comprehension and choice. The two additional axioms are: (a) "there exists a natural number object (defined in terms of primitive recursion)"; (b) "every surjection has a right inverse" (i.e. the axiom of choice), which implies the law of the excluded middle. [ Question: Do (a) and (b) hold in categories of sheaves? ] OK, fair enough; the topos axioms plus (a) and (b) are enough for the development of real analysis. But then I raised two further questions, which seem crucial: (1) Is there any foundational picture that motivates the topos axioms? (2) Can this same foundational picture also be used to motivate the additional axioms (a) and (b)? McLarty seems to be saying that the answers to (1) and (2) are "yes" and "yes", but he declines to discuss it here on the FOM list. OK, fair enough. Let's leave it at that, unless somebody else wants to carry the ball. To me, the question of foundational motivation is crucial. For a while I was wondering whether it might be possible to motivate topos theory as "a general theory of functions". But McLarty says that the real motivation is much more complex. Indeed, when we look at the history, topos theory seems to have arisen from a strange mixture of motivations. One motivation is "sets without elements" a la Lawvere. Another is "linear transformations without linearity" a la Grothendieck/Lawvere. 
Another is "logic without formulas" a la … The whole thing is very confusing to a humble FOMer such as myself.

On the other hand, maybe I'm barking up the wrong tree. Maybe topos theory doesn't really have any f.o.m. motivation or content. Maybe topos theory is to be viewed as simply a tool or technique in pure mathematics. Maybe this is what McLarty had in mind when he said:

> To category theorists beginning with Saunders MacLane it was
> exciting, and it revamped all of homological algebra. Whether it is
> "foundations" or not it is important mathematics.

In other words, don't judge topos theory from the viewpoint of f.o.m. Judge it from the viewpoint of applications to pure mathematics: homological algebra, algebraic geometry, etc.

My perspective is that of a mathematician specializing in f.o.m. I know something of homological algebra, algebraic geometry, etc, but I don't know whether topos theory is "important mathematics" or not. I'm willing to leave that question up to the intended clients of topos theory, i.e. the algebraic geometers et al. I assume they will evaluate topos theory fairly based on their own standards.

-- Steve

PS. McLarty mentioned MacLane's book "Mathematics: Form and Function". My impression is that, although MacLane in that book tried to motivate topos theory as an f.o.m.-style general theory of functions, in subsequent books he has backed off from that view. This would seem to confirm my hypothesis that topos theory isn't to be evaluated as f.o.m.
PowerPoint Presentations Introduction to Convection: Mass Transfer PPT Presentation Summary : Introduction to Convection: Mass Transfer Chapter Six and Appendix E Sections 6.1 to 6.8 and E.4 Concentration Boundary Layer Concentration Boundary (cont ... Source : http://www3.nd.edu/~msen/Teaching/IntHT/Slides/06B-Chapter%206,%20Sec%206.1-6.8,%20App%20E.4%20Black.ppt Mass Transfer - Prasad A Wadegaonkar PPT Presentation Summary : Mass Transfer Factors affecting mass transfer rates between phases As discussed, the rate of mass transfer between phases is largely determined by the rate of ... Source : http://wprasad.webs.com/Mass%20Transfer.ppt ChE306: Heat and Mass Transfer PPT Presentation Summary : ChE306 Course Description. Theory of heat and mass transport. Unified treatment via equations of change. Analogies between heat and mass transfer. Source : http://chemeng.nmsu.edu/people/faculty/deng/ChE306/Lecture%201.pptx Presentation Summary : Mass Transfer Coefficient Mass transfer coefficients - simplified method to describe complex boundary condition involving flow and diffusion. Mass transfer from a ... Source : http://www.che.utah.edu/~ring/ChE%205655%20Chip%20Processing/MT%20in%20Boundary%20Layer.ppt ChE306: Heat and Mass Transfer PPT Presentation Summary : General Considerations. Must have a mixture of two or more species for mass transfer to occur. The species concentration gradient is the driving potential for transfer. Source : http://chemeng.nmsu.edu/people/faculty/deng/ChE306/Lecture%2018.pptx Diffusion Mass Transfer - University of Notre Dame PPT Presentation Summary : Diffusion Mass Transfer Chapter 14 Sections 14.1 through 14.7 General Considerations Definitions Property Relations Diffusion Fluxes Absolute Fluxes Conservation of ...
Source : http://www3.nd.edu/~msen/Teaching/IntHT/Slides/14A-Chapter%2014,%20Secs%2014.1%20-%2014.7%20Black.ppt Measurement of Bioreactor KLa - Auburn University PPT Presentation Summary : We would like to show you a description here but the site won’t allow us. Source : http://www.eng.auburn.edu/users/drmills/mans486/Measurement_Kla_Sparged_Bioreactor_files/Bioreactor_Kla.ppt ME 259 Heat (and Mass) Transfer - CSU, Chico PPT Presentation Summary : ME 259 Heat Transfer Lecture Slides I Dr. Gregory A. Kallio Dept. of Mechanical Engineering, Mechatronic Engineering & Manufacturing Technology California State ... Source : http://www.csuchico.edu/~gkallio/MECH%20338/Lecture%20Slides%20&%20Notes/ME259LectureSlides1.ppt Review of Mass Transfer - Rowan University - Personal Web Sites PPT Presentation Summary : Review of Mass Transfer Fick’s First Law (one dimensional diffusion) J flux (moles/area/time) At steady-state or any instant in time Fick’s Second Law Source : http://users.rowan.edu/~farrell/Courses/Controlled%20Release/Course%20Notes/Mass%20Transfer%20(2a).ppt ME6203 Mass Transport - National University of Singapore PPT Presentation Summary : ME6203 Mass Transport ME 6203 Mass Transport Objectives To examine fundamental principles of diffusive and convective mass transfer and their applications in analysis ... Source : http://serve.me.nus.edu.sg/arun/file/teaching/ME6203%20Mass%20Transport%20Jan%2009.ppt C H A P T E R 13 The Transfer of Heat PPT Presentation Summary : PM3125 Lectures 7 to 9 Lecture Content of Lectures 7 to 9: Mathematical problems on heat transfer Mass transfer: concept and theory Oxygen transfer from gas bubble to ... 
Source : http://www.rshanthini.com/tmp/PM3125/Lecture7to9heatandmasstransfer.ppt Convective Mass Transfer - r_shanthini PPT Presentation Summary : CP302 Separation Process Principles Mass Transfer - Set 6 A T L Course content of Mass transfer section 03 01 04 Diffusion Theory of interface mass transfer Source : http://www.rshanthini.com/tmp/CP302/CP302_MassTransfer_06_OK.ppt Presentation Summary : Title: Heat&Mass Transfer Author: Namas Chandra Last modified by: shet Created Date: 11/19/2002 12:13:32 AM Document presentation format: On-screen Show Source : http://www.eng.fsu.edu/~chandra/courses/eml4536/Heat%26Mass%20Transfer.ppt Molecular Diffusion - nus.edu.sg PPT Presentation Summary : Dept of Chemical and Biomolecular Engineering CN2125E Heat and Mass Transfer Dr. Tong Yen Wah, E5-03-15, 6516-8467 chetyw@nus.edu.sg (Mass Transfer, Radiation) Source : http://courses.nus.edu.sg/course/chewch/CN2125E/lectures/Week9.ppt Presentation Summary : Mass transfer between the liquid medium and solid catalyst is facilitated at high liquid flow rate through the bed. To achieve this, packed are often operated with ... Source : http://yalun.files.wordpress.com/2008/10/fermentation-technology-chapter-viiviii-ix-x.ppt ME 259 Heat (and Mass) Transfer - CSU, Chico PPT Presentation Summary : ME 259 Heat Transfer Lecture Slides III Dr. Gregory A. Kallio Dept. of Mechanical Engineering, Mechatronic Engineering & Manufacturing Technology Source : http://www.csuchico.edu/~gkallio/MECH%20338/Lecture%20Slides%20&%20Notes/ME259LectureSlides3.ppt Correlation of Mass Transfer Coefficient - Rowan University ... PPT Presentation Summary : Correlation of Mass Transfer Coefficient Dan Duffield Rick Pelletier 2-21-2006 Mass Transfer Coefficient, km Correlated using- where x = constant Figure 5 shows the ... Source : http://users.rowan.edu/~farrell/Courses/Controlled%20Release/Student%20Work%202006/Dan%20and%20Rick%203.ppt Presentation Summary : Solids- Free coordinates in Leaching. 
Sometimes, we don’t use the triangle for representing the leaching ternary system. Rectangular diagram is used in such cases. Source : http://www.unimasr.net/ums/upload/files/2013/Apr/UniMasr.com_1a0b5606349d588c52256c3a5bc3a13b_1.pptx ADVANCED MASS TRANSFER - PersianGig.com PPT Presentation Summary : Title: ADVANCED MASS TRANSFER Author: COMSOL Last modified by: ABC Created Date: 9/21/2008 5:59:19 PM Document presentation format: On-screen Show Other titles Source : http://vu-aut.persiangig.com/a-JERM/Dr.Zokaee/courseplan.ppt Presentation Summary : Heat and mass transfer on the surface of moving droplet at small Re and Pe numbers Heat and mass fluxes extracted/delivered from/to the droplet surface ... Source : http://www.bgu.ac.il/me/laboratories/tmf/Elperin-Dresden/Simultaneous%20Heat%20and%20Mass.ppt Presentation Summary : Title: TROUBLESHOOTING AMINE PLANTS USING MASS TRANSFER RATE-BASED SIMULATION TOOLS Author: Jenny Seagraves Last modified by: Jenny Seagraves Created Date Source : http://www.iapg.org.ar/sectores/eventos/eventos/listados/IAPGCALAFATE/VIERNES/17.00/SimulationofAminePlantsPresentation.ppt Presentation Summary : Title: Mass Transfer Operations Author: crystal Last modified by: Nagwa Mansi Created Date: 7/27/2010 9:07:59 PM Document presentation format: On-screen Show (4:3) Source : http://www.unimasr.net/ums/upload/files/2012/Oct/UniMasr.com_0773e513110a63ba6d0a7422f57e8af6_1.ppt Presentation Summary : ... Two-Film Theory KLa: Transfer Rate KLa (s-1) KL = liquid mass transfer coefficient (m/s) a = area-to-volume ratio of the packing (m2/m3) Determination: ... Source : http://www.ce.siue.edu/488/notes/T4%20Physical%20Treatment%20Air%20Stripping.ppt
Manual Reference Pages - EXP (3)

Name
exp, expf, exp2, exp2f, expm1, expm1f, log, logf, log10, log10f, log1p, log1pf, pow, powf - exponential, logarithm, power functions

Library
Math Library (libm)

Synopsis
#include <math.h>

double exp(double x);
float expf(float x);
double exp2(double x);
float exp2f(float x);
double expm1(double x);
float expm1f(float x);
double log(double x);
float logf(float x);
double log10(double x);
float log10f(float x);
double log1p(double x);
float log1pf(float x);
double pow(double x, double y);
float powf(float x, float y);

Description
The exp and the expf functions compute the base e exponential value of the given argument x.

The exp2 and the exp2f functions compute the base 2 exponential of the given argument x.

The expm1 and the expm1f functions compute the value exp(x)-1 accurately even for tiny argument x.

The log and the logf functions compute the value of the natural logarithm of argument x.

The log10 and the log10f functions compute the value of the logarithm of argument x to base 10.

The log1p and the log1pf functions compute the value of log(1+x) accurately even for tiny argument x.

The pow and the powf functions compute the value of x to the exponent y.

ERROR (due to Roundoff etc.)
The values of exp(0), expm1(0), exp2(integer), and pow(integer, integer) are exact provided that they are representable. Otherwise the error in these functions is generally below one ulp.

Return Values
These functions will return the appropriate computation unless an error occurs or an argument is out of range. The functions pow(x, y) and powf(x, y) raise an invalid exception and return a NaN if x < 0 and y is not an integer. An attempt to take the logarithm of ±0 will result in a divide-by-zero exception, and an infinity will be returned. An attempt to take the logarithm of a negative number will result in an invalid exception, and a NaN will be generated.
The functions exp(x)-1 and log(1+x) are called expm1 and logp1 in BASIC on the Hewlett-Packard HP-71B and APPLE Macintosh, EXP1 and LN1 in Pascal, exp1 and log1 in C on APPLE Macintoshes, where they have been provided to make sure financial calculations of ((1+x)**n-1)/x, namely expm1(n*log1p(x))/x, will be accurate when x is tiny. They also provide accurate inverse hyperbolic functions.

The function pow(x, 0) returns x**0 = 1 for all x including x = 0, oo, and NaN. Previous implementations of pow may have defined x**0 to be undefined in some or all of these cases. Here are reasons for returning x**0 = 1 always:

1. Any program that already tests whether x is zero (or infinite or NaN) before computing x**0 cannot care whether 0**0 = 1 or not. Any program that depends upon 0**0 to be invalid is dubious anyway since that expression's meaning and, if invalid, its consequences vary from one computer system to another.

2. Some Algebra texts (e.g. Sigler's) define x**0 = 1 for all x, including x = 0. This is compatible with the convention that accepts a[0] as the value of polynomial p(x) = a[0]*x**0 + a[1]*x**1 + a[2]*x**2 +...+ a[n]*x**n at x = 0 rather than reject a[0]*0**0 as invalid.

3. Analysts will accept 0**0 = 1 despite that x**y can approach anything or nothing as x and y approach 0 independently. The reason for setting 0**0 = 1 anyway is this: If x(z) and y(z) are functions analytic (expandable in power series) in z around z = 0, and if there x(0) = y(0) = 0, then x(z)**y(z) -> 1 as z -> 0.

4. If 0**0 = 1, then oo**0 = 1/0**0 = 1 too; and then NaN**0 = 1 too because x**0 = 1 for all finite and infinite x, i.e., independently of x.

See Also
fenv(3), math(3)
{"url":"http://gsp.com/cgi-bin/man.cgi?section=3&topic=expf","timestamp":"2014-04-18T08:02:17Z","content_type":null,"content_length":"14773","record_id":"<urn:uuid:e1670fb6-da1c-4228-81a3-2cafca143499>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
The computer software Coq “runs” the formal foundations-language dependent type theory and serves in particular as a formal proof management system. It provides a formal language to write mathematical definitions, executable programs and theorems, together with an environment for semi-interactive development of machine-checked proofs, i.e. for certified programming.

Coq is named after Thierry Coquand, and follows a tradition of naming languages after animals (compare OCaml).

Computer systems such as Coq and Agda have been used to give machine-assisted and machine-verified proofs of extraordinary length, such as of the four-colour theorem and the Kepler conjecture. More generally, they are being used to formalise and machine-verify large parts of mathematics as such; see the section Formalization of set-based mathematics below.

One striking insight by Vladimir Voevodsky was that Coq naturally lends itself also to a formalization of higher mathematics that is founded not on sets, but on higher category theory and homotopy theory. For this see below the section Homotopy type theory.

Formalization of set-based mathematics

Projects include

Formalized proofs

Major theorems whose proofs have been fully formalized in Coq include

Homotopy type theory

For Coq-projects in homotopy type theory see

Coq uses the Gallina specification language for specifying theories. It uses a version of the calculus of constructions to implement natural deduction. A dependent type theory software similar to Coq is Agda. Similar but non-dependent type theory software includes Haskell.

A web-based version of Coq is available online. To start it, choose “Coq” from the menu “proof assistant” and click on “guest login”. In the user interface that appears, enter Coq code in the left window and hit the arrow buttons to “run” it, with output appearing in the right window. The guest account allows everything except saving files and loading libraries.
But with copy-and-paste one can of course “include libraries” by hand. (Notice, though, that the current version can for instance not read the HoTT libraries verbatim, since it does not understand implicit types yet.)

A tool for viewing proofs in static Coq files without loading them into Coq is Proviola. A proviola-enhanced version of the Coq-library for homotopy type theory is also available.

Learning Coq

To get an idea how to use Coq from Emacs, there are Andrej Bauer’s video tutorials for the Coq proof assistant (web). Yet properly learning Coq can be quite daunting; luckily, the right material can help a lot:

1. Benjamin Pierce’s Software Foundations is probably the most elementary introduction to Coq and functional programming. The book is written in Coq, so you can directly open the source files in CoqIDE and step through them to see what is going on and solve the exercises.

2. In a similar style, Andrej Bauer and Peter LeFanu Lumsdaine wrote a nice Coq tutorial (pdf) on homotopy type theory. See also the Oberwolfach HoTT-Coq tutorial.

3. Adam Chlipala’s trimmed-down version of Certified Programming with Dependent Types explains more advanced Coq techniques.

Applications to formal mathematics

For applications to homotopy type theory see the references listed there.
{"url":"http://www.ncatlab.org/nlab/show/Coq","timestamp":"2014-04-19T17:47:38Z","content_type":null,"content_length":"40349","record_id":"<urn:uuid:3deb111c-c8c5-4d61-aa78-e67ddbaec979>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2010

Re: FindMaximum/NMaximize vs. Excel Solver

• To: mathgroup at smc.vnet.net
• Subject: [mg106886] Re: FindMaximum/NMaximize vs. Excel Solver
• From: John <jhurley13 at gmail.com>
• Date: Tue, 26 Jan 2010 06:34:12 -0500 (EST)
• References: <hj98d9$g8f$1@smc.vnet.net>

I received an answer from Daniel Lichtblau at Wolfram, and posed this additional question:

Stepping back, what would be the right syntax for NMaximize for this problem? w is the list of wagers to optimize, a list of 20 elements; each wager must be between 0 and 100; the sum of the wagers is 100. It seems gross to list out individual variables when Mathematica is so powerful at list processing.

His response was as follows, and was just what I was looking for, since I wanted the "Mathematica" way of doing it:

It can be set up as below. Notice I do not explicitly list variables.

probabilities = {0.2,0.2,0.2,0.2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.1,0.1};
len = Length[probabilities];
odds = {7,5,5,5,5,5,5,5,5,5,5,5,5,5,1,1,2,2,2,2};
vars = Array[w,len];
tot = 100;
c1 = Map[#>=0&, vars];
ep = (vars(1+odds)-tot);
sp = ep.probabilities;
var = (ep-sp)^2;
obj = sp / Sqrt[Dot[probabilities,var]];
Timing[NMaximize[{obj,Flatten[{c1,Total[vars]==tot}]}, vars]]

That works, but takes around four times longer than FindMaximum. The speed of NMaximize is not troubling to me, but FindMaximum might be slow for this. That is, I'm not sure whether the timing is expected or indicates a speed bump.

In a later message, we were looking at timings:

Thank you so much for your reply. It not only answers my question, but reminds me how much I still need to learn about Mathematica. If nothing else, I have to take a long look at "a = b" vs. "a := b".
I was still curious about why it took so long, and found that simplifying obj helped speed things up by 20% or so:

obj = sp/Sqrt[Dot[probabilities, var]] // Simplify;
Timing[NMaximize[{obj, Flatten[{Total[vars] == tot, c1}]}, vars,
  AccuracyGoal -> 6, PrecisionGoal -> 9]]

Just for fun, I tried the different methods available for NMaximize, and found:

SimulatedAnnealing       19.2039
RandomSearch           1329.74
DifferentialEvolution    18.9437
NelderMead               99.743
Default                  23.7308
FindMaximum               5.43854

If it is OK with you, I'd like to post your response back to the group since it helped me so much; I can attribute it to you or to an anonymous helper. Thanks again.

That's fine with me. Possibly someone will get competitive and maybe figure out a tweak that makes FindMaximum or NMaximize handle this faster. For FindMaximum, I doubt it will be method-related, because I believe that function has but one choice (interior point) when given constraints (unless everything is linear or, at worst, objective is

One possible improvement would be to maximize the square, since (I think) everything is nonnegative. Could use obj2, given as below.

obj2 = Simplify[Rationalize[obj]^2, Assumptions -> Map[0 <= # <= 100 &, vars]]

One other thing. If you get rid of variables that correspond to zero probability, the timings become tremendously faster. I do not think this type of smarts could be automated (in NMaximize or FindMaximum).

Thanks for the replies.
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jan/msg00828.html","timestamp":"2014-04-20T23:52:03Z","content_type":null,"content_length":"28670","record_id":"<urn:uuid:30b152f6-3f66-416e-a483-f187bf3db600>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
NAEP - 2008 Long-Term Trend: Course Taking (Mathematics)

As part of the 2008 long-term trend assessment, students at age 17 were asked about the kinds of mathematics courses they were taking. The highest-level mathematics course was determined from the students' responses to the questions. The full text of the question and the percentage of students who responded within each category are shown below.

Counting what you are taking now, have you ever taken any of the following mathematics courses?

A. General, business, or consumer mathematics
B. Pre-algebra or introduction to algebra
C. First-year algebra
D. Second-year algebra
E. Geometry
F. Trigonometry
G. Pre-calculus or calculus

• The percentage of 17-year-olds taking more advanced mathematics courses increased in 2008 compared to 1986.
• A higher percentage of 17-year-olds in 2008 compared to 1986 indicated that they had taken pre-calculus or calculus classes.

* Significantly different (p < .05) from 2008.
^1 Original assessment format. Results prior to 2004 are also from the original assessment format.
^2 Revised assessment format.

NOTE: The "pre-algebra or general mathematics" response category includes "pre-algebra or introduction to algebra" and "general, business, or consumer mathematics" and students who did not take any of the listed courses. The "other" response category includes students for whom the highest-level mathematics course could not be determined. Detail may not sum to totals because of rounding. View complete data with standard errors.

SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), various years, 1978–2008 Long-Term Trend Mathematics Assessments.
{"url":"http://nationsreportcard.gov/ltt_2008/ltt0012.asp?tab_id=tab2&subtab_id=Tab_1","timestamp":"2014-04-20T03:10:41Z","content_type":null,"content_length":"18506","record_id":"<urn:uuid:fb23c1e5-9161-491a-835c-3d7a4c70b5d1>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
fudge() Function

Generates random numbers to emulate dice rolls; returns the total of a special Fudge dice roll. When these dice are rolled, each die comes up -1, 0, or 1; the function then sums all of the dice rolled and returns that sum.

• times - The number of times to roll the dice.

Roll ten special Fudge dice. Returns a number that is between -10 and 10.

Roll five special Fudge dice, using variables.

[h: DiceTimes = 5]
[t: fudge(DiceTimes)]

Returns a number that is between -5 and 5.

See Also
For another method of rolling dice, see Dice Expressions.
{"url":"http://lmwcs.com/rptools/wiki/fudge","timestamp":"2014-04-19T23:59:51Z","content_type":null,"content_length":"21270","record_id":"<urn:uuid:2178762f-b84a-4a1e-8940-f14bb3696f66>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
XL Fortran for AIX 8.1 Language Reference

To use the following IEEE procedures, you must add a USE IEEE_ARITHMETIC, USE IEEE_EXCEPTIONS, or USE IEEE_FEATURES statement to your source file as required. For more information on the USE statement, see USE. XL Fortran supports all the named constants in the IEEE_FEATURES module. The IEEE_ARITHMETIC module behaves as if it contained a USE statement for IEEE_EXCEPTIONS. All values that are public in IEEE_EXCEPTIONS remain public in IEEE_ARITHMETIC.

When the IEEE_EXCEPTIONS or the IEEE_ARITHMETIC modules are accessible, IEEE_OVERFLOW and IEEE_DIVIDE_BY_ZERO are supported in the scoping unit for all kinds of real and complex data. To determine the other exceptions supported, use the IEEE_SUPPORT_FLAG function. Use IEEE_SUPPORT_HALTING to determine if halting is supported. Support of other exceptions is influenced by the accessibility of the named constants IEEE_INEXACT_FLAG, IEEE_INVALID_FLAG, and IEEE_UNDERFLOW_FLAG of the IEEE_FEATURES module as follows:

• If a scoping unit has access to IEEE_UNDERFLOW_FLAG of IEEE_FEATURES, the scoping unit supports underflow and returns true from IEEE_SUPPORT_FLAG(IEEE_UNDERFLOW, X) for REAL(4) and REAL(8).
• If IEEE_INEXACT_FLAG or IEEE_INVALID_FLAG is accessible, the scoping unit supports the exception and returns true from the corresponding inquiry for REAL(4) and REAL(8).
• If IEEE_HALTING is accessible, the scoping unit supports halting control and returns true from IEEE_SUPPORT_HALTING(FLAG) for the flag.

If an exception flag signals on entry to a scoping unit that does not access IEEE_EXCEPTIONS or IEEE_ARITHMETIC, the compiler ensures that the exception flag is signaling on exit. If a flag is quiet on entry to such a scoping unit, it can be signaling on exit. Further IEEE support is available through the IEEE_ARITHMETIC module.
Support is influenced by the accessibility of named constants in the IEEE_FEATURES module:

• If a scoping unit has access to IEEE_DATATYPE of IEEE_FEATURES, the scoping unit supports IEEE arithmetic and returns true from IEEE_SUPPORT_DATATYPE(X) for REAL(4) and REAL(8).
• If IEEE_DENORMAL, IEEE_DIVIDE, IEEE_INF, IEEE_NAN, IEEE_ROUNDING, or IEEE_SQRT is accessible, the scoping unit supports the feature and returns true from the corresponding inquiry function for REAL(4) and REAL(8).
• For IEEE_ROUNDING, the scoping unit returns true for all the rounding modes IEEE_NEAREST, IEEE_TO_ZERO, IEEE_UP, and IEEE_DOWN for REAL(4) and REAL(8).

If the IEEE_EXCEPTIONS or IEEE_ARITHMETIC modules are accessed, and IEEE_FEATURES is not, the supported subset of features is the same as if IEEE_FEATURES was accessed.

IEEE_CLASS(X)

An elemental IEEE class function. Returns the IEEE class of a floating-point value.

Where X is of type real.

Result Type
The result is of type IEEE_CLASS_TYPE.

To ensure compliance with the Fortran 2000 draft standard, the IEEE_SUPPORT_DATATYPE(X) function must return with a value of true. If you specify a data type of REAL(16), then IEEE_SUPPORT_DATATYPE will return false, though the appropriate class type will still be returned.

TYPE(IEEE_CLASS_TYPE) :: C
REAL :: X = -1.0
IF (IEEE_SUPPORT_DATATYPE(X)) THEN
  C = IEEE_CLASS(X) ! C has class IEEE_NEGATIVE_NORMAL
END IF

IEEE_COPY_SIGN(X, Y)

An elemental IEEE copy sign function. Returns the value of X with the sign of Y.

Where X and Y are of type real, though they may be of different kinds.

Result Type
The result is of the same kind and type as X.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) and IEEE_SUPPORT_DATATYPE(Y) must return with a value of true. For supported IEEE special values, such as NaN and infinity, IEEE_COPY_SIGN returns the value of X with the sign of Y. IEEE_COPY_SIGN ignores the -qxlf90=nosignedzero compiler option. XL Fortran REAL(16) numbers have no signed zero.
Example 1:
REAL :: X
DOUBLE PRECISION :: Y
X = 3.0
Y = -2.0
IF (IEEE_SUPPORT_DATATYPE(X) .AND. IEEE_SUPPORT_DATATYPE(Y)) THEN
  X = IEEE_COPY_SIGN(X,Y) ! X has value -3.0
END IF

Example 2:
REAL :: X, Y
X = 5.0
Y = 1.0
IF (IEEE_SUPPORT_DATATYPE(X)) THEN
  X = IEEE_VALUE(X, IEEE_NEGATIVE_INF) ! X has value -INF
  X = IEEE_COPY_SIGN(X,Y) ! X has value +INF
END IF

IEEE_GET_FLAG(FLAG, FLAG_VALUE)

An elemental IEEE subroutine. Retrieves the status of the exception flag specified. Sets FLAG_VALUE to true if the flag is signaling, or false otherwise.

Where FLAG is an INTENT(IN) argument of type IEEE_FLAG_TYPE specifying the IEEE flag to obtain. FLAG_VALUE is an INTENT(OUT) default logical argument that contains the value of FLAG.

LOGICAL :: FLAG_VALUE
CALL IEEE_GET_FLAG(IEEE_OVERFLOW, FLAG_VALUE)
IF (FLAG_VALUE) THEN
  PRINT *, "Overflow flag is signaling."
ELSE
  PRINT *, "Overflow flag is quiet."
END IF

IEEE_GET_HALTING_MODE(FLAG, HALTING)

An elemental IEEE subroutine. Retrieves the halting mode for an exception and sets HALTING to true if the exception specified by the flag will cause halting. If you use -qflttrap=imprecise, halting is not precise and may occur after the exception. By default, exceptions do not cause halting in XL Fortran.

Where FLAG is an INTENT(IN) argument of type IEEE_FLAG_TYPE specifying the IEEE flag. HALTING is an INTENT(OUT) default logical.

LOGICAL :: HALTING
CALL IEEE_GET_HALTING_MODE(IEEE_OVERFLOW, HALTING)
IF (HALTING) THEN
  PRINT *, "The program will halt on an overflow exception."
END IF

IEEE_GET_ROUNDING_MODE(ROUND_VALUE)

An IEEE subroutine. Sets ROUND_VALUE to the current IEEE rounding mode.

Where ROUND_VALUE is an INTENT(OUT) scalar of type IEEE_ROUND_TYPE.

CALL IEEE_GET_ROUNDING_MODE(ROUND_VALUE) ! Store the rounding mode
IF (ROUND_VALUE == IEEE_OTHER) THEN
  PRINT *, "You are not using an IEEE rounding mode."
END IF

IEEE_GET_STATUS(STATUS_VALUE)

An IEEE subroutine. Retrieves the current IEEE floating-point status.

Where STATUS_VALUE is an INTENT(OUT) scalar of type IEEE_STATUS_TYPE. You can only use STATUS_VALUE in an IEEE_SET_STATUS invocation.

CALL IEEE_GET_STATUS(STATUS_VALUE)   ! Get status of all exception flags
CALL IEEE_SET_FLAG(IEEE_ALL,.FALSE.) ! Set all exception flags to quiet
... !
calculation involving exception handling
CALL IEEE_SET_STATUS(STATUS_VALUE) ! Restore the flags

IEEE_IS_FINITE(X)

An elemental IEEE function. Tests whether a value is finite. Returns true if IEEE_CLASS(X) has one of the following values:

• IEEE_NEGATIVE_NORMAL
• IEEE_NEGATIVE_DENORMAL
• IEEE_NEGATIVE_ZERO
• IEEE_POSITIVE_ZERO
• IEEE_POSITIVE_DENORMAL
• IEEE_POSITIVE_NORMAL

It returns false otherwise.

Where X is of type real.

Result Type
The result is of type default logical.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true.

REAL :: X = 1.0
IF (IEEE_SUPPORT_DATATYPE(X)) THEN
  PRINT *, IEEE_IS_FINITE(X) ! Prints true
END IF

IEEE_IS_NAN(X)

An elemental IEEE function. Tests whether a value is IEEE Not-a-Number. Returns true if IEEE_CLASS(X) has the value IEEE_SIGNALING_NAN or IEEE_QUIET_NAN. It returns false otherwise.

Where X is of type real.

Result Type
The result is of type default logical.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) and IEEE_SUPPORT_NAN(X) must return with a value of true.

Example 1:
REAL :: X = -1.0
IF (IEEE_SUPPORT_DATATYPE(X)) THEN
  IF (IEEE_SUPPORT_SQRT(X)) THEN ! IEEE-compliant SQRT function
    IF (IEEE_SUPPORT_NAN(X)) THEN
      PRINT *, IEEE_IS_NAN(SQRT(X)) ! Prints true
    END IF
  END IF
END IF

Example 2:
REAL :: X = -1.0
IF (IEEE_SUPPORT_STANDARD(X)) THEN
  PRINT *, IEEE_IS_NAN(SQRT(X)) ! Prints true
END IF

IEEE_IS_NEGATIVE(X)

An elemental IEEE function. Tests whether a value is negative. Returns true if IEEE_CLASS(X) has one of the following values:

• IEEE_NEGATIVE_NORMAL
• IEEE_NEGATIVE_DENORMAL
• IEEE_NEGATIVE_ZERO
• IEEE_NEGATIVE_INF

It returns false otherwise.

Where X is of type real.

Result Type
The result is of type default logical.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true.

IF (IEEE_SUPPORT_DATATYPE(1.0)) THEN
  PRINT *, IEEE_IS_NEGATIVE(1.0) ! Prints false
END IF

IEEE_IS_NORMAL(X)

An elemental IEEE function. Tests whether a value is normal.
Returns true if IEEE_CLASS(X) has one of the following values:

• IEEE_NEGATIVE_NORMAL
• IEEE_NEGATIVE_ZERO
• IEEE_POSITIVE_ZERO
• IEEE_POSITIVE_NORMAL

It returns false otherwise.

Where X is of type real.

Result Type
The result is of type default logical.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true.

REAL :: X = -1.0
IF (IEEE_SUPPORT_DATATYPE(X)) THEN
  IF (IEEE_SUPPORT_SQRT(X)) THEN ! IEEE-compliant SQRT function
    PRINT *, IEEE_IS_NORMAL(SQRT(X)) ! Prints false
  END IF
END IF

IEEE_LOGB(X)

An elemental IEEE function. Returns the unbiased exponent in the IEEE floating-point format. If the value of X is neither zero, infinity, nor NaN, the result has the value of the unbiased exponent of X, equal to EXPONENT(X)-1.

Where X is of type real.

Result Type
The result is the same type and kind as X.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. If X is zero, the result is negative infinity. If X is infinite, the result is positive infinity. If X is NaN, the result is NaN.

IF (IEEE_SUPPORT_DATATYPE(1.1)) THEN
  PRINT *, IEEE_LOGB(1.1) ! Prints 0.0
END IF

IEEE_NEXT_AFTER(X, Y)

An elemental IEEE function. Returns the next machine-representable neighbor of X in the direction towards Y.

Where X and Y are of type real.

Result Type
The result is the same type and kind as X.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) and IEEE_SUPPORT_DATATYPE(Y) must return with a value of true. If X and Y are equal, the function returns X without signaling an exception. If X and Y are not equal, the function returns the next machine-representable neighbor of X in the direction towards Y. The neighbors of zero, of either sign, are both nonzero. IEEE_OVERFLOW and IEEE_INEXACT are signaled when X is finite but IEEE_NEXT_AFTER(X, Y) is infinite. IEEE_UNDERFLOW and IEEE_INEXACT are signaled when IEEE_NEXT_AFTER(X, Y) is denormalized or zero.
If X or Y is a quiet NaN, the result is one of the input NaN values.

Example 1:
REAL :: X = 1.0, Y = 2.0
IF (IEEE_SUPPORT_DATATYPE(X)) THEN
  PRINT *, (IEEE_NEXT_AFTER(X,Y) == X + EPSILON(X)) ! Prints true
END IF

Example 2:
REAL(4) :: X = 0.0, Y = 1.0
IF (IEEE_SUPPORT_DATATYPE(X)) THEN
  PRINT *, (IEEE_NEXT_AFTER(X,Y) == 2.0**(-149)) ! Prints true
END IF

IEEE_REM(X, Y)

An elemental IEEE remainder function. The result value, regardless of the rounding mode, is exactly X-Y*N, where N is the integer nearest to the exact value X/Y; whenever |N - X/Y| = 1/2, N is even.

Where X and Y are of type real.

Result Type
The result is of type real with the same kind as the argument with greater precision.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) and IEEE_SUPPORT_DATATYPE(Y) must return with a value of true. If the result value is zero, the sign is the same as X.

IF (IEEE_SUPPORT_DATATYPE(4.0)) THEN
  PRINT *, IEEE_REM(4.0,3.0) ! Prints 1.0
  PRINT *, IEEE_REM(3.0,2.0) ! Prints -1.0
  PRINT *, IEEE_REM(5.0,2.0) ! Prints 1.0
END IF

IEEE_RINT(X)

An elemental IEEE function. Rounds to an integer value according to the current rounding mode.

Where X is of type real.

Result Type
The result is the same type and kind as X.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. If the result has the value zero, the sign is that of X.

IF (IEEE_SUPPORT_DATATYPE(1.1)) THEN
  CALL IEEE_SET_ROUNDING_MODE(IEEE_NEAREST)
  PRINT *, IEEE_RINT(1.1) ! Prints 1.0
  CALL IEEE_SET_ROUNDING_MODE(IEEE_UP)
  PRINT *, IEEE_RINT(1.1) ! Prints 2.0
END IF

IEEE_SCALB(X, I)

An elemental IEEE function. Returns X * 2^I.

Where X is of type real and I is of type INTEGER.

Result Type
The result is the same type and kind as X.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. If X * 2^I is representable as a normal number, then the result is a normal number.
If X is finite and X * 2^I is too large, the IEEE_OVERFLOW exception occurs. The result value is infinity with the sign of X. If X * 2^I is too small and there is a loss of accuracy, the IEEE_UNDERFLOW exception occurs. The result is the nearest representable number with the sign of X. If X is infinite, the result is the same as X, with no exception signaled.

IF (IEEE_SUPPORT_DATATYPE(1.0)) THEN
  PRINT *, IEEE_SCALB(1.0,2) ! Prints 4.0
END IF

IEEE_SELECTED_REAL_KIND(P, R)

A transformational IEEE function. Returns a value of the kind type parameter of an IEEE real data type with decimal precision of at least P digits and a decimal exponent range of at least R.

Where P and R are both scalar optional arguments of type integer.

If the kind type parameter is not available and the precision is not available, the result is -1. If the kind type parameter is not available and the exponent range is not available, the result is -2. If the kind type parameter is not available and neither the precision nor the exponent range is available, the result is -3. If more than one kind type parameter value is applicable, the value returned is the one with the smallest decimal precision. If there are several such values, the smallest of these kind values is returned.

IEEE_SELECTED_REAL_KIND(6,70) has the value KIND(0.0)

IEEE_SET_FLAG(FLAG, FLAG_VALUE)

An IEEE subroutine. Assigns a value to an IEEE exception flag.

Where FLAG is an INTENT(IN) scalar or array argument of type IEEE_FLAG_TYPE corresponding to the value of the flag to be set. FLAG_VALUE is an INTENT(IN) scalar or array argument of type logical, corresponding to the desired status of the exception flag. The value of FLAG_VALUE should be conformable with the value of FLAG.

If FLAG_VALUE is true, the exception flag specified by FLAG is set to signaling. Otherwise, the flag is set to quiet. Each element of FLAG must have a unique value.

CALL IEEE_SET_FLAG(IEEE_OVERFLOW, .TRUE.) ! IEEE_OVERFLOW is now signaling

IEEE_SET_HALTING_MODE(FLAG, HALTING)

An IEEE subroutine. Controls continuation or halting after an exception.
Where FLAG is an INTENT(IN) scalar or array argument of type IEEE_FLAG_TYPE corresponding to the exception flag for which halting applies. HALTING is an INTENT(IN) scalar or array argument of type logical, corresponding to the desired halting status. By default, exceptions will not cause halting in XL Fortran. The value of HALTING should be conformable with the value of FLAG.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. If you use the -qflttrap=imprecise compiler option, halting is not precise and may occur after the exception has occurred. If HALTING is true, the exception specified by FLAG will cause halting. Otherwise, execution will continue after the exception. If your code sets the halting mode to true for an exception flag and you do not use the -qflttrap=enable option when compiling the entire program, the program will produce unexpected results if exceptions occur. See the User's Guide for further information. Each element of FLAG must have a unique value.

CALL IEEE_SET_HALTING_MODE(IEEE_DIVIDE_BY_ZERO, .TRUE.)
REAL :: X = 1.0 / 0.0 ! Program will halt with a divide-by-zero exception

IEEE_SET_ROUNDING_MODE(ROUND_VALUE)

An IEEE subroutine. Sets the current rounding mode.

Where ROUND_VALUE is an INTENT(IN) argument of type IEEE_ROUND_TYPE specifying the rounding mode.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) and IEEE_SUPPORT_ROUNDING(ROUND_VALUE, X) must return with a value of true. The compilation unit calling this program must be compiled with the -qfloat=rrm compiler option. All compilation units calling programs compiled with the -qfloat=rrm compiler option must also be compiled with this option.

IF (IEEE_SUPPORT_DATATYPE(1.1)) THEN
  CALL IEEE_SET_ROUNDING_MODE(IEEE_NEAREST)
  PRINT *, IEEE_RINT(1.1) ! Prints 1.0
  CALL IEEE_SET_ROUNDING_MODE(IEEE_UP)
  PRINT *, IEEE_RINT(1.1) ! Prints 2.0
END IF

IEEE_SET_STATUS(STATUS_VALUE)

An elemental IEEE subroutine. Restores the value of the floating-point status.
Where STATUS_VALUE is an INTENT(IN) argument of type IEEE_STATUS_TYPE specifying the floating-point status. STATUS_VALUE must have been set previously by IEEE_GET_STATUS.

CALL IEEE_GET_STATUS(STATUS_VALUE)   ! Get status of all exception flags
CALL IEEE_SET_FLAG(IEEE_ALL,.FALSE.) ! Set all exception flags to quiet
... ! calculation involving exception handling
CALL IEEE_SET_STATUS(STATUS_VALUE)   ! Restore the flags

IEEE_SUPPORT_DATATYPE(X)

An inquiry IEEE function. Determines whether the current implementation supports IEEE arithmetic. Support means using an IEEE data format and performing the binary operations of +, -, and * as in the IEEE standard whenever the operands and result all have normal values. NaN and Infinity are not fully supported for REAL(16). Arithmetic operations do not necessarily propagate these values.

Where X is an optional scalar argument of type real.

Result Type
The result is a scalar of type default logical.

If X is absent, the function returns a value of false. If X is present and REAL(16), the function returns a value of false. Otherwise, the function returns true.

IEEE_SUPPORT_DENORMAL(X)

An inquiry IEEE function. Determines whether the current implementation supports denormalized numbers.

Where X is an optional scalar or array valued argument of type real.

Result Type
The result is a scalar of type default logical.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. The result has a value of true if the implementation supports arithmetic operations and assignments with denormalized numbers for all arguments of type real where X is absent, or for real variables of the same kind type parameter as X. Otherwise, the result has a value of false.

IEEE_SUPPORT_DIVIDE(X)

An inquiry IEEE function. Determines whether the current implementation supports division to the accuracy of the IEEE standard.

Where X is an optional scalar or array valued argument of type real.

Result Type
The result is a scalar of type default logical.
To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. The result has a value of true if the implementation supports division with the accuracy specified by the IEEE standard for all arguments of type real where X is absent, or for real variables of the same kind type parameter as X. Otherwise, the result has a value of false.

IEEE_SUPPORT_FLAG(FLAG, X)

An inquiry IEEE function. Determines whether the current implementation supports an exception.

Where FLAG is a scalar argument of IEEE_FLAG_TYPE. X is an optional scalar or array valued argument of type real.

Result Type
The result is a scalar of type default logical.

The result has a value of true if the implementation supports detection of the exception specified for all arguments of type real where X is absent, or for real variables of the same kind type parameter as X. Otherwise, the result has a value of false. If X is absent, the result has a value of false. If X is present and of type REAL(16), the result has a value of false. Otherwise, the result has a value of true.

IEEE_SUPPORT_HALTING(FLAG)

An inquiry IEEE function. Determines whether the current implementation supports the ability to abort or continue execution after an exception occurs. Support by the current implementation includes the ability to change the halting mode using IEEE_SET_HALTING_MODE(FLAG, HALTING).

Where FLAG is an INTENT(IN) argument of IEEE_FLAG_TYPE.

Result Type
The result is a scalar of type default logical. The result returns with a value of true for all flags.

IEEE_SUPPORT_INF(X)

An inquiry IEEE function. Support indicates that IEEE infinity behavior for unary and binary operations, including those defined by intrinsic functions and by functions in intrinsic modules, complies with the IEEE standard.

Where X is an optional scalar or array valued argument of type real.

Result Type
The result is a scalar of type default logical.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true.
The result has a value of true if the implementation supports IEEE positive and negative infinities for all arguments of type real where X is absent, or for real variables of the same kind type parameter as X. Otherwise, the result has a value of false. If X is of type REAL(16), the result has a value of false. Otherwise, the result has a value of true.

IEEE_SUPPORT_IO(X)

An inquiry IEEE function. Determines whether the current implementation supports IEEE base conversion rounding during formatted input/output. Support refers to the ability to do IEEE base conversion during formatted input/output as described in the IEEE standard for the modes IEEE_UP, IEEE_DOWN, IEEE_ZERO, and IEEE_NEAREST for all arguments of type real where X is absent, or for real variables of the same kind type parameter as X.

Where X is an optional scalar or array valued argument of type real.

Result Type
The result is a scalar of type default logical.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. If X is present and of type REAL(16), the result has a value of false. Otherwise, the result returns a value of true.

IEEE_SUPPORT_NAN(X)

An inquiry IEEE function. Determines whether the current implementation supports the IEEE Not-a-Number facility. Support indicates that IEEE NaN behavior for unary and binary operations, including those defined by intrinsic functions and by functions in intrinsic modules, conforms to the IEEE standard.

Where X is an optional scalar or array valued argument of type real.

Result Type
The result is a scalar of type default logical.

To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. If X is absent, the result has a value of false. If X is present and of type REAL(16), the result has a value of false. Otherwise, the result returns a value of true.

IEEE_SUPPORT_ROUNDING(ROUND_VALUE, X)

An inquiry IEEE function.
Determines whether the current implementation supports a particular rounding mode for arguments of type real. Support indicates the ability to change the rounding mode. Where ROUND_VALUE is a scalar argument of IEEE_ROUND_TYPE, and X is an optional scalar or array valued argument of type real.

Result Type: The result is a scalar of type default logical. To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true.

If X is absent, the result has a value of true if the implementation supports the rounding mode defined by ROUND_VALUE for all arguments of type real. Otherwise, it has a value of false. If X is present, the result returns a value of true if the implementation supports the rounding mode defined by ROUND_VALUE for real variables of the same kind type parameter as X. Otherwise, the result has a value of false. If X is present and of type REAL(16), the result returns a value of false when ROUND_VALUE has a value of IEEE_NEAREST. Otherwise the result returns a value of true. If ROUND_VALUE has a value of IEEE_OTHER, the result has a value of false.

IEEE_SUPPORT_SQRT([X])

An inquiry IEEE function. Determines whether the current implementation supports SQRT as defined by the IEEE standard. Where X is an optional scalar or array valued argument of type real.

Result Type: The result is a scalar of type default logical. To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true.

If X is absent, the result returns a value of true if SQRT adheres to IEEE conventions for all variables of type REAL. Otherwise, the result has a value of false. If X is present, the result returns a value of true if SQRT adheres to IEEE conventions for all variables of type REAL with the same kind type parameter as X. Otherwise, the result has a value of false. If X is present and of type REAL(16), the result has a value of false. Otherwise the result returns a value of true.
IEEE_SUPPORT_STANDARD([X])

An inquiry IEEE function. Determines whether all facilities defined in the Fortran 2000 draft standard are supported. Where X is an optional scalar or array valued argument of type real.

Result Type: The result is a scalar of type default logical.

If X is absent, the result returns a value of false since XL Fortran supports REAL(16). If X is present, the result returns a value of true if the following functions also return true:
• IEEE_SUPPORT_DATATYPE(X)
• IEEE_SUPPORT_DENORMAL(X)
• IEEE_SUPPORT_DIVIDE(X)
• IEEE_SUPPORT_FLAG(FLAG, X) for every valid flag
• IEEE_SUPPORT_HALTING(FLAG) for every valid flag
• IEEE_SUPPORT_INF(X)
• IEEE_SUPPORT_NAN(X)
• IEEE_SUPPORT_ROUNDING(ROUND_VALUE, X) for every valid ROUND_VALUE
• IEEE_SUPPORT_SQRT(X)
Otherwise, the result returns a value of false.

IEEE_UNORDERED(X, Y)

An elemental IEEE unordered function. Where X and Y are of type real.

Result Type: The result is of type default logical. To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) and IEEE_SUPPORT_DATATYPE(Y) must return with a value of true.

An unordered function that returns with a value of true if X or Y is a NaN. Otherwise the function returns with a value of false. Example:

  REAL X, Y
  X = 0.0
  Y = -1.0
  Y = IEEE_VALUE(Y, IEEE_QUIET_NAN)

IEEE_VALUE(X, CLASS)

An elemental IEEE function. Generates an IEEE value as specified by CLASS. Implementation of this function is platform and compiler dependent due to variances in NaN processing on differing platforms. A NaN value saved in a binary file that is read on a different platform than the one that generated the value will have unspecified results. Where X is of type real and CLASS is of type IEEE_CLASS_TYPE.

Result Type: The result is of the same type and kind as X. To ensure compliance with the Fortran 2000 draft standard, IEEE_SUPPORT_DATATYPE(X) must return with a value of true. IEEE_SUPPORT_NAN(X) must be true if the value of CLASS is IEEE_SIGNALING_NAN or IEEE_QUIET_NAN.
IEEE_SUPPORT_INF(X) must be true if the value of CLASS is IEEE_NEGATIVE_INF or IEEE_POSITIVE_INF. IEEE_SUPPORT_DENORMAL(X) must be true if the value of CLASS is IEEE_NEGATIVE_DENORMAL or IEEE_POSITIVE_DENORMAL. Multiple calls of IEEE_VALUE(X, CLASS) return the same result for a particular value of X if the kind type parameter and CLASS remain the same. If a compilation unit calls this procedure with a CLASS value of IEEE_SIGNALING_NAN, the compilation unit must be compiled with the -qfloat=nans compiler option. Example:

  REAL :: X
  IF (IEEE_SUPPORT_DATATYPE(X)) THEN
    X = IEEE_VALUE(X, IEEE_NEGATIVE_INF)
    PRINT *, X   ! Prints -INF
  END IF
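For readers without an XL Fortran compiler at hand, the behaviour of IEEE_VALUE and IEEE_UNORDERED can be illustrated in Python, whose floats are IEEE doubles. This is an analogy only, not XL Fortran code; the names `neg_inf`, `quiet_nan` and `unordered` are ours.

```python
import math

# Stand-ins for the IEEE_VALUE class constants: generate IEEE special values.
neg_inf = float("-inf")      # plays the role of IEEE_NEGATIVE_INF
quiet_nan = float("nan")     # plays the role of IEEE_QUIET_NAN

def unordered(x, y):
    """True if x or y is a NaN, mirroring IEEE_UNORDERED(X, Y)."""
    return math.isnan(x) or math.isnan(y)

x = 0.0
y = quiet_nan
print(unordered(x, y))        # True: y is a NaN
print(x < y, x > y, x == y)   # all False: a NaN compares unordered
print(neg_inf)                # -inf, like PRINT *, X after IEEE_NEGATIVE_INF
```

The three ordered comparisons all evaluating to false is exactly the "unordered" relation the IEEE standard defines for NaN operands.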
Kaminski, Matthias (2008): Holographic quark gluon plasma with flavor. Dissertation, LMU München: Faculty of Physics.

In this thesis we explore the effects of chemical potentials or charge densities inside a thermal plasma which is governed by a strongly coupled gauge theory. Since perturbative methods in general fail in this regime, we make use of the AdS/CFT correspondence, which originates from string theory. AdS/CFT is a gauge/gravity duality (also called holography), which we utilize here to translate perturbative gravity calculations into results in a gauge theory at strong coupling. As a model theory for Quantum Chromodynamics (QCD), we investigate N=4 Super-Yang-Mills theory in four space-time dimensions. This theory is coupled to fundamental hypermultiplets of N=2 Super-Yang-Mills theory. In spite of being quite different from QCD, this model succeeds in describing qualitatively many of the phenomena which are present in the strong interaction. Thus, the effects discovered in this thesis may also be taken as predictions for heavy ion collisions at the RHIC collider in Brookhaven or the LHC in Geneva. In particular we successively study the introduction of baryon charge, isospin charge and finally both charges (or chemical potentials) simultaneously. We examine the thermodynamics of the strongly coupled plasma. Phase diagrams are given for the canonical and grandcanonical ensemble. Furthermore, we compute the most important thermodynamical quantities as functions of temperature and charge densities (or chemical potentials): the free energy, grandcanonical potential, internal energy and entropy. Narrow resonances which we observe in the flavor current spectral functions follow the (holographically found) vector meson mass formula at low temperature. Increasing the temperature, the meson masses first decrease, turn around at some temperature and then increase as the high-temperature regime is entered.
While the narrow resonances at low temperatures can be interpreted as stable mesonic quasi-particles, the resonances in the high-temperature regime are very broad. We discuss these two different temperature regimes and the physical relevance of the discovered turning point that connects them. Moreover, we find that flavor currents with isospin structure in a plasma at finite isospin density show a triplet splitting of the resonances in the spectral functions. Our analytical calculations confirm this triplet splitting also for the diffusion pole, which is holographically identified with the lowest lying quasinormal frequency. We discuss the non-vanishing quark condensate. Furthermore, the baryon diffusion coefficient depends non-trivially on both baryon and isospin density. Guided by discontinuities in the condensate and densities, we discover a phase transition resembling the one found in the case of 2-flavor QCD. Finally, we extend our hydrodynamic considerations to the diffusion of charmonium at weak and strong coupling. As expected, the ratio of the diffusion coefficient to the meson mass shift at strong coupling is significantly smaller than the weak coupling result. This result is reminiscent of the result for the viscosity to entropy density ratio, which is significantly smaller at strong coupling compared to its value at weak coupling.

Item Type: Thesis (Dissertation, LMU Munich)
Keywords: String Phenomenology, String Theory, Thermal Field Theory, Quark Gluon Plasma, Thermal Spectral Function, Diffusion, Gauge Gravity Correspondence, AdS/CFT, Duality, Strong Coupling, Chemical Potential, Isospin Density
Subjects: 600 Natural sciences and mathematics > 530 Physics; 600 Natural sciences and mathematics
Faculties: Faculty of Physics
Language: English
Date Accepted: 20 May 2008
1. Referee: Erdmenger, Johanna
Persistent Identifier: urn:nbn:de:bvb:19-87868
MD5 Checksum: d0a2ecd1b423884ed0ebe35f3f0bf40c
Signature of the printed copy: 0001/UMC 17159
ID Code: 8786
Deposited On: 18 Aug 2008 07:18
Last Modified: 16 Oct 2012 08:19
$p$-primary then divisible?

I asked this via Math.SE, but haven't got any responses; sorry for asking it here. We know that in the context of abelian groups, $p$-groups are called $p$-primary groups. I have a question about $p$-primary groups as follows. Derek J. S. Robinson noted:

...the group $\mathbb Q/\mathbb Z$ is the direct sum of its primary components, each of which is also divisible. Now the $p$-primary ...

when he was giving the basic concepts and ideas of quasicyclic groups in chapter 4 of his book A Course in the Theory of Groups. In another reference, An Introduction to the Theory of Groups by J. J. Rotman, we find the following lemma in chapter 10:

Lemma 10.27. If $G$ and $H$ are divisible $p$-primary groups, then $G\cong H$ if and only if $G[p]\cong H[p]$.

I see that Robinson says each primary component is divisible, while Rotman speaks of groups which are both divisible and $p$-primary. Does being $p$-primary lead to being divisible? Or are such groups necessarily torsion? Am I misunderstanding an important point here? Thanks for sharing your thoughts.

abelian-groups gr.group-theory

1 Answer (accepted)

Robinson only asserts that for the specific group $\mathbb{Q}/\mathbb{Z}$ the $p$-primary components/subgroups are divisible; and this would remain true replacing $\mathbb{Q}/\mathbb{Z}$ by any other divisible group. Rotman's lemma is very different. Here, you suppose $G$ is divisible and in addition that it is $p$-primary (or, to use a different terminology, a $p$-group). And the same for $H$. Then some result on $G$ and $H$ is proved.

So, the $p$-primary subgroups of a divisible group are divisible. But certainly not every $p$-primary group is divisible; just consider finite cyclic groups of prime power order, for example.
To answer your additional question: it is not necessary for a divisible group to be torsion (think of the rationals), but $p$-primary groups are always torsion, essentially by definition.

Just an add-on in view of a comment of yours on Math.SE: if you want an example of an infinite $p$-primary group that is not divisible, take an infinite/countable direct sum of prime cyclic ones, for example. – quid Dec 8 '12 at 11:04

@quid Thanks so much. You saved me. Thanks. – Babak Sorouh Dec 8 '12 at 13:04

You are welcome! – quid Dec 8 '12 at 13:39
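The answer's smallest counterexample can be checked by brute force. The sketch below (names are ours, not from the thread) verifies that $\mathbb{Z}/4\mathbb{Z}$ with $p=2$ is $2$-primary, yet not divisible, since multiplication by $2$ maps the group onto $\{0, 2\} \neq \mathbb{Z}/4\mathbb{Z}$.

```python
# Brute-force check on Z/4Z (p = 2); all names here are illustrative.
n = 4                      # Z/nZ with n = p^2, p = 2
G = set(range(n))

def order(g):
    """Additive order of g in Z/nZ."""
    k, s = 1, g % n
    while s != 0:
        k += 1
        s = (s + g) % n
    return k if g % n else 1

# p-primary: every element's order is a power of p = 2.
p_primary = all(order(g) in (1, 2, 4) for g in G)

# Divisible: for every m >= 1, the map g -> m*g must be onto G.
divisible = all({(m * g) % n for g in G} == G for m in range(1, 10))

print(p_primary, divisible)   # True False
```

Every divisibility failure already shows up at m = p: 2·(Z/4Z) = {0, 2}, which misses the element 1.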
Chris Beaumont's IDL Library

pdiv

This function estimates the partial derivative of a multi-dimensional function, sampled on a regular grid.

Routine details

result = pdiv(data, dimension [, order=order])

Return value: A grid the same size as data, giving the partial derivative along dimension at each data point.

data (in, required): An n-dimensional datacube, representing a function evenly sampled on a grid.

dimension (in, required): The dimension (1-n_dimension(data)) over which to calculate the partial derivative (df / d_dim). Defaults to 1.

order (in, optional): 1-3, indicating how to approximate the derivative. All methods implicitly use Lagrange interpolation to express each data point as a point on a polynomial, and then differentiate that polynomial. Order = 1, 2, 3 corresponds to a (3, 5, 7) point interpolation scheme. Defaults to 1.

File attributes

Modification date: Tue Aug 3 15:35:05 2010
Lines: 130
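The order=1 (3-point) case amounts to central differences on the interior, since differentiating the Lagrange parabola through three equally spaced points gives (f[i+1] - f[i-1]) / 2 at the middle point. A rough one-dimensional Python sketch under assumed unit grid spacing and second-order one-sided stencils at the edges (the IDL routine's actual edge handling may differ):

```python
def pdiv_sketch(data):
    """3-point (order=1-style) derivative estimate on a unit-spaced grid:
    central differences inside, second-order one-sided stencils at edges."""
    n = len(data)
    d = [0.0] * n
    for i in range(1, n - 1):
        d[i] = (data[i + 1] - data[i - 1]) / 2.0        # central difference
    d[0] = (-3 * data[0] + 4 * data[1] - data[2]) / 2.0  # forward stencil
    d[-1] = (3 * data[-1] - 4 * data[-2] + data[-3]) / 2.0  # backward stencil
    return d

# The 3-point stencil is exact for polynomials of degree <= 2,
# so f(x) = x**2 recovers df/dx = 2x exactly on a unit grid.
f = [x ** 2 for x in range(5)]
print(pdiv_sketch(f))   # [0.0, 2.0, 4.0, 6.0, 8.0]
```

Higher orders (5- and 7-point stencils) follow the same pattern with wider Lagrange interpolants; an n-dimensional version would apply the stencil along the requested axis.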
Can we extend an over-determined set of polynomials so that they intersect completely?

For a fixed degree $d>1$, let $\{x^\alpha: |\alpha|=d\}$ be the set of monomials of degree $d$ in the variables $x_1,\ldots, x_n$. View these monomials as $N=\binom{n+d-1}{d-1}>n$ complex polynomials in ${\mathbb{C}}[x_1,\ldots, x_n]$. For each collection $I$ of multi-indices of length $d$, consider the subscheme $\bigcap_{\alpha\in I}\{x^\alpha=0\}$ in affine space $\mathbb{C}^N$. As we let $I$ range over collections of at most $n$ multi-indices, these subschemes are nonempty and distinct from each other, i.e. these intersections have distinct zero sets considering multiplicity. I'm wondering if we can extend this property to the entire collection. Is it possible to find $N$ polynomials $g_1(x_1,\ldots, x_N),\ldots, g_N(x_1,\ldots, x_N)$ such that

(1) $g_\alpha(x_1,\ldots,x_n,0,\ldots, 0)=x^\alpha$ (i.e. $g_\alpha$ is formed from $x^\alpha$ by adding extra variables);

(2) as we range over arbitrary $I$, the subschemes $\bigcap_{\alpha\in I}\{g_\alpha=0\}$ are non-empty and distinct from each other?

ac.commutative-algebra ra.rings-and-algebras

Comment: Yes. Since you are allowed to use arbitrarily large degrees on the $g$'s, you can choose the coefficients in the $g$'s so that their intersections are distinct. – J.C. Ottem Dec 18 '10 at 9:04
Bases in Linear Algebra - Midterm Tomorrow!

September 30th 2008, 04:37 PM #1 — Junior Member (Aug 2008, Chicago, IL)

Hey, quick question that's been bugging me. Suppose I have a vector space V and a basis B that describes it. Is B a subspace of V? I figure it is, because for B to be a basis it must span V and be linearly independent. Both spanning sets and linearly independent sets are described as subsets in my book. However, bases aren't. Any ideas?

Let's say we're talking about a vector space. The basis is a set of linearly independent vectors that, through linear combinations, span(V). I think the basis itself, though, is just this set of vectors, not the coefficients as well. For this reason I believe the basis is a subset of vector space V, not a subspace.

September 30th 2008, 04:41 PM #2 — MHF Contributor (Oct 2005)
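The poster's second guess is the right one, and a tiny numeric check makes it concrete: the standard basis of R^2 spans the whole space, yet the basis set itself is not closed under addition, so it fails the subspace test. This sketch (our own illustration, not from the thread) uses plain coordinate tuples:

```python
# The standard basis of R^2, as plain coordinate tuples.
e1, e2 = (1, 0), (0, 1)
B = {e1, e2}

# B spans R^2: every vector (a, b) equals a*e1 + b*e2.
a, b = 3, -2
v = (a * e1[0] + b * e2[0], a * e1[1] + b * e2[1])
print(v)        # (3, -2)

# But B is not closed under addition: e1 + e2 is not an element of B,
# so the basis is a subset of the vector space, not a subspace.
s = (e1[0] + e2[0], e1[1] + e2[1])
print(s in B)   # False
```

A subspace must contain all sums and scalar multiples of its elements; the two-element set B clearly cannot, even though its span is all of V.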
Mathematica is a powerful technical programming language developed by Wolfram Research. It encompasses computer algebra, numerical computation, visualization and statistics capabilities. It can be used for all kinds of mathematical analysis, from simple plotting to signal processing. The aim of this wikibook is to introduce the Mathematica language and how to use this software. It will describe the functions available and give examples of them.

Contents: Language Overview · 2D Graphics · 3D Graphics

Last modified on 21 October 2012, at 12:01
OpenMx - Advanced Structural Equation Modeling

Tue, 08/17/2010 - 10:16
OK, changes made.

Mon, 08/23/2010 - 13:27
Here's another, similar issue:

Running thresholdModelMod
Error in computeMatrixHelper(, model, : non-conformable arrays
Error in summary(thresholdModelModrun <- mxRun(thresholdModelMod)) : error in evaluating the argument 'object' in selecting a method for function 'summary'

"Non-conformable arrays" should really report the dimensions of the arrays and the operator in question, in order to help the user trace the source of the problem. Better still would be the array names as well, though obviously these could be the result of previous operations. So ideally we'd end up with something like this:

The "Result of (a+b*c)" has dimensions m rows and n columns and "array D" has p rows and q columns. Since the operator in use is %*%, the number of columns in "Result of (a+b*c)" must equal the number of rows in "array D", which it does not.

I know from experience that crafting such error messages for all algebras etc. is not easy. However, if we can do this then the users will thank us over and over for saving them time debugging their scripts.

Mon, 08/23/2010 - 13:44
I did my best to hide the errors from the computeMatrixHelper function. Can you attach the script, and I'll take a look at it.

Mon, 08/23/2010 - 14:36
Sure, here it is.
Attachment: thresholdModel1Factor3VariateModeratedwithAlgebraMistake.R (2.97 KB)

Tue, 08/24/2010 - 09:17
I changed the error message to: "Trying to evaluate 'thresholdModelMod.ageRegressedMeans' in model 'thresholdModelMod' generated the error message: non-conformable arrays". It's not quite as detailed as you would like, but it's better than the earlier version. The difficulty with showing the full expression that caused the error is caused by the fact that the expressions need to be translated into a form that can be calculated. But then when an error occurs, showing the error to the user should display the untranslated expression. At some point mxEval() might get completely rewritten. For the moment, we lose the untranslated version during evaluation.
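The kind of message the thread asks for just needs a dimension check before the multiply. This Python sketch (not OpenMx code; the function name and matrix values are ours) shows the idea for a matrix product:

```python
def checked_matmul(a, b, a_name="A", b_name="B"):
    """Multiply two matrices (lists of rows), raising the descriptive
    'non-conformable' error the thread asks for: operand names, both
    shapes, and the operator whose conformability rule was violated."""
    ra, ca = len(a), len(a[0])
    rb, cb = len(b), len(b[0])
    if ca != rb:
        raise ValueError(
            f"non-conformable arrays: '{a_name}' has {ra} rows and {ca} "
            f"columns and '{b_name}' has {rb} rows and {cb} columns; for "
            f"%*% the number of columns in '{a_name}' must equal the "
            f"number of rows in '{b_name}', which it does not.")
    return [[sum(a[i][k] * b[k][j] for k in range(ca)) for j in range(cb)]
            for i in range(ra)]

A = [[1, 2, 3], [4, 5, 6]]            # 2 x 3
D = [[1, 0], [0, 1], [1, 1], [2, 2]]  # 4 x 2: columns(A) != rows(D)
try:
    checked_matmul(A, D, "Result of (a+b*c)", "array D")
except ValueError as e:
    print(e)   # full diagnostic with names, shapes and operator
```

The hard part the developer mentions still stands: keeping the untranslated user-level expression (the `a_name` here) alive through evaluation so the message can cite it.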
Solomon Lefschetz pioneered the field of topology--the study of the properties of many-sided figures and their ability to deform, twist, and stretch without changing their shape. According to Lefschetz, "If it's just turning the crank, it's algebra, but if it's got an idea in it, it's topology." The very word topology comes from the title of an earlier Lefschetz monograph published in 1920. In Topics in Topology Lefschetz developed a more in-depth introduction to the field, providing authoritative explanations of what would today be considered the basic tools of algebraic topology.

Lefschetz moved to the United States from France in 1905 at the age of twenty-one to find employment opportunities not available to him as a Jew in France. He worked at Westinghouse Electric Company in Pittsburgh and there suffered a horrible laboratory accident, losing both hands and forearms. He continued to work for Westinghouse, teaching mathematics, and went on to earn a Ph.D. and to pursue an academic career in mathematics. When he joined the mathematics faculty at Princeton University, he became one of its first Jewish faculty members in any discipline. He was immensely popular, and his memory continues to elicit admiring anecdotes. Editor of Princeton University Press's Annals of Mathematics from 1928 to 1958, Lefschetz built it into a world-class scholarly journal. He published another book, Lectures on Differential Equations, with Princeton in 1946.

Other Princeton books authored or coauthored by Solomon Lefschetz:

Subject Area: Mathematics
Method and apparatus for encoding and decoding an audio signal using adaptively switched temporal resolution in the spectral domain - Patent # 8095359 - PatentGenius (4 images)

Inventors: Boehm, Johannes (Goettingen, DE); Kordon, Sven (Hannover, DE)
Assignee: Thomson Licensing (Princeton, NJ)
Date Issued: January 10, 2012
Application: 12/156,748
Filed: June 4, 2008
Primary Examiner: Smits, Talivaldis Ivars
Attorney or Agent: International IP Law Group, P.C.
U.S. Class: 704/203; 704/205; 704/269
International Class: G10L 19/02
Other References: Niamut O. A. et al., "Flexible frequency decompositions for cosine-modulated filter banks", 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Proceedings (ICASSP), Hong Kong, Apr. 6-10, 2003, New York, NY: IEEE, vol. 1 of 6, pp. 449-V452, XP010639305. cited by other. European Search Report dated Oct. 8, 2007. cited by other.

Abstract: Perceptual audio codecs make use of filter banks and MDCT in order to achieve a compact representation of the audio signal, by removing redundancy and irrelevancy from the original audio signal. During quasi-stationary parts of the audio signal a high frequency resolution of the filter bank is advantageous in order to achieve a high coding gain, but this high frequency resolution is coupled to a coarse temporal resolution that becomes a problem during transient signal parts by producing audible pre-echo effects. The invention achieves improved coding/decoding quality by applying on top of the output of a first filter bank a second non-uniform filter bank, i.e. a cascaded MDCT.
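The switching decision in the claims rests on flatness measures: the spectral flatness measure (SFM) divides the arithmetic mean of per-band spectral power values by their geometric mean, and the temporal flatness measure (TFM) applies the same ratio across transform segments in time. A plain-Python sketch of that ratio (an illustration of the measure, not the patented implementation; thresholds and test values are ours):

```python
import math

def flatness(power):
    """Arithmetic mean over geometric mean of positive power values,
    the ratio behind the SFM/TFM decisions: close to 1 for flat
    (noise-like) data, much larger than 1 for peaky (tonal) data."""
    n = len(power)
    arithmetic = sum(power) / n
    geometric = math.exp(sum(math.log(p) for p in power) / n)
    return arithmetic / geometric

flat_band = [1.0, 1.0, 1.0, 1.0]        # white-noise-like band
tonal_band = [100.0, 0.01, 0.01, 0.01]  # one dominant spectral line
print(flatness(flat_band))              # 1.0
print(flatness(tonal_band) > 10.0)      # True: band looks tonal
```

In the claimed scheme, bands the SFM marks as noisy are then watched with the TFM, and a threshold on the TFM triggers switching to finer temporal resolution for those bands.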
The inventive codec uses switching to an additional extension filter bank (or multi-resolution filter bank) in order to re-group the time-frequency representation during transient or fast changing audio signal sections. By applying a corresponding switching control, pre-echo effects are avoided and a high coding gain and a low coding delay are achieved.

Claim: What is claimed is:

1. A method for encoding an input signal comprising: transforming the input signal into a frequency domain via a first forward transform, wherein: the first forward transform applied to first-length sections of the input signal and, using adaptive switching of a temporal resolution, is followed by quantization and entropy encoding of values of the resulting frequency domain bins; the first forward transform and a second forward transform are a MDCT transform, an integer MDCT transform, a DCT-4 transform, or a DCT transform; adaptively controlling the temporal resolution by performing a second forward transform following the first forward transform, wherein: the second forward transform is applied to second-length sections of the transformed first-length sections; and the second-length sections are smaller than the first-length sections and either output values of the first forward transform or output values of the second forward transform are processed in the quantization and entropy encoding; prior to the transforms at encoding side, the amplitude values of the first-length sections and the second-length sections are weighted using window functions, and overlap-add processing for the first-length sections and second-length sections is applied, and wherein for transitional windows the amplitude values are weighted using asymmetric window functions, and wherein for the second-length sections start and stop window functions are used; and control of the switching, quantization and/or entropy encoding is derived from a psychoacoustic analysis of the input signal; and attaching to an encoded output
signal corresponding temporal resolution control information as side information.

2. The method according to claim 1, wherein if more than one different second length is used for signaling topology of different second lengths applied, indices indicating a region of changing temporal resolution, or an index number referring to a matching entry of a corresponding code book accessible at decoding side, are contained in the side information.

3. The method according to claim 2, wherein the topology is determined by: performing a spectral flatness measure (SFM) using the first forward transform, by determining for selected frequency bands a spectral power value of transform bins and dividing an arithmetic mean value of the spectral power values by their geometric mean value; sub-segmenting an un-weighted input signal section, performing weighting and short transforms on m sub-sections where a frequency resolution of the short transforms corresponds to the selected frequency bands; for each frequency line consisting of m transform segments, determining the spectral power value and calculating a temporal flatness measure (TFM) by determining an arithmetic mean divided by a geometric mean of the m transform segments; determining tonal or noisy frequency bands by using the SFM; and using the TFM for recognizing temporal variations in the tonal or noisy frequency bands and using threshold values for switching to finer temporal resolution for the determined noisy frequency bands.

4. The method according to claim 1, wherein if more than one different second length is used successively, lengths increase starting from frequency bins representing low frequency lines.

5. Use of the method according to claim 1 in a watermark embedder.

6.
A method for decoding an encoded original signal, that was encoded into a frequency domain using a first forward transform that was applied to first-length sections of the original signal, wherein the first forward transform and a second forward transform are a MDCT transform, an integer MDCT transform, a DCT-4 transform, or a DCT transform, and wherein a temporal resolution was adaptively switched by performing the second forward transform following the first forward transform on second-length sections of the transformed first-length sections, wherein the second-length sections are smaller than the first-length sections and either output values of the first forward transform or output values of the second forward transform were processed in a quantization and entropy encoding, and wherein control of the switching, quantization and/or entropy encoding was derived from a psycho-acoustic analysis of the original signal and corresponding temporal resolution control information was attached to the encoding output signal as side information, the decoding method comprising: providing from the encoded signal the side information; inversely quantizing and entropy decoding the encoded signal; and corresponding to the side information, either: performing a first inverse transform into a time domain, the first inverse transform operating on first-length signal sections of the inversely quantized and entropy decoded signal and the first inverse transform providing the decoded signal; or processing second-length sections of the inversely quantized and entropy decoded signal in a second inverse transform before performing the first inverse transform, wherein, following the first inverse transform and the second inverse transform, the amplitude values of the first-length sections and the second-length sections are weighted using window functions, and overlap-add processing for the first-length sections and second-length sections is applied, and wherein for transitional windows
the amplitude values are weighted using asymmetric window functions, and wherein for the second-length sections start and stop window functions are used, wherein the first inverse transform and the second inverse transform are an inverse MDCT, an inverse integer MDCT, or an inverse DCT-4 transform.

7. The method according to claim 6, wherein if more than one different second length is used for signaling a topology of different second lengths applied, indices indicating a region of changing temporal resolution, or an index number referring to a matching entry of a corresponding code book accessible at decoding side, are contained in the side information.

8. The method according to claim 7, wherein the topology is determined by: performing a spectral flatness measure (SFM) using the first forward transform, by determining for selected frequency bands a spectral power value of transform bins and dividing an arithmetic mean value of the spectral power values by their geometric mean value; sub-segmenting an un-weighted input signal section, performing weighting and short transforms on m sub-sections where a frequency resolution of the short transforms corresponds to the selected frequency bands; for each frequency line consisting of m transform segments, determining the spectral power value and calculating a temporal flatness measure (TFM) by determining the arithmetic mean value divided by a geometric mean of the m transform segments; determining tonal or noisy frequency bands by using the SFM; and using the TFM for recognizing temporal variations in the tonal or noisy frequency bands and using threshold values for switching to finer temporal resolution for the determined noisy frequency bands.

9. The method according to claim 6, wherein if more than one different second length is used successively, lengths increase starting from frequency bins representing low frequency lines.

10.
An apparatus for encoding an input signal comprising: first forward transform means being adapted for transforming first-length sections of the input signal into a frequency domain; second forward transform means being adapted for transforming second-length sections of the transformed first-length sections, wherein the second-length sections are smaller than the first-length sections, wherein the first forward transform and the second forward transform are a MDCT transform, an integer MDCT transform, a DCT-4 transform, or a DCT transform; means being adapted for quantizing and entropy encoding output values of the first forward transform means or output values of the second forward transform means; means being adapted for controlling the quantization and/or entropy encoding and for controlling adaptively whether the output values of the first forward transform means or the output values of the second forward transform means are processed in the quantizing and entropy encoding means, wherein the controlling is derived from a psycho-acoustic analysis of the input signal; and means being adapted for attaching to an encoded apparatus output signal corresponding temporal resolution control information as side information, wherein, prior to the transforms at encoding side, amplitude values of the first-length sections and the second-length sections are weighted using window functions, and overlap-add processing for the first-length sections and the second-length sections is applied, and wherein for transitional windows the amplitude values are weighted using asymmetric window functions, and wherein for the second-length sections start and stop window functions are used.

11.
The apparatus according to claim 10, wherein if more than one different second length is used for signaling a topology of different second lengths applied, several indices indicating a region of changing temporal resolution, or an index number referring to a matching entry of a corresponding code book accessible at decoding side, are contained in the side information. 12. The apparatus according to claim 11, wherein the topology is determined by: performing a spectral flatness measure (SFM) using the first forward transform, by determining for selected frequency bands a spectral power value of transform bins and dividing an arithmetic mean value of the spectral power values by their geometric mean value; sub-segmenting an un-weighted input signal section, performing weighting and short transforms on m sub-sections where a frequency resolution of the short transforms corresponds to the selected frequency bands; for each frequency line consisting of m transform segments, determining the spectral power value and calculating a temporal flatness measure (TFM) by determining the arithmetic mean value divided by a geometric mean value of the m transform segments; determining tonal or noisy frequency bands by using the SFM; and using the TFM for recognizing temporal variations in the tonal or noisy frequency bands and using threshold values for switching to finer temporal resolution for the determined noisy frequency bands. 13. The apparatus according to claim 10, wherein in case more than one different second length is used successively, lengths increase starting from frequency bins representing low frequency lines. 14.
An apparatus for decoding an encoded original signal, that was encoded into a frequency domain using a first forward transform being applied to first-length sections of the original signal, wherein a temporal resolution was adaptively switched by performing a second forward transform following the first forward transform and being applied to second-length sections of the transformed first-length sections, wherein the first forward transform and the second forward transform are a MDCT transform, an integer MDCT transform, a DCT-4 transform, or a DCT transform, and wherein the second-length sections are smaller than the first-length sections and either output values of the first forward transform or output values of the second forward transform were processed in a quantization and entropy encoding, and wherein control of the switching, quantization and/or entropy encoding was derived from a psycho-acoustic analysis of the original signal and corresponding temporal resolution control information was attached to an encoded output signal as side information, the apparatus comprising: means being adapted for providing from the encoded signal the side information and for inversely quantizing and entropy decoding the encoded signal; and means being adapted for, corresponding to the side information, either: performing a first inverse transform into a time domain, the first inverse transform operating on first-length signal sections of the inversely quantized and entropy decoded signal and the first inverse transform providing a decoded signal; or processing second-length sections of the inversely quantized and entropy decoded signal in a second inverse transform before performing the first inverse transform, wherein, following the first inverse transform and the second inverse transform, amplitude values of the first-length sections and the second-length sections are weighted using window functions, and overlap-add processing for the first-length sections and
second-length sections is applied, and wherein for transitional windows the amplitude values are weighted using asymmetric window functions, and wherein for the second-length sections start and stop window functions are used. 15. The apparatus according to claim 14, wherein if more than one different second length is used for signaling the topology of different second lengths applied, several indices indicating the region of changing temporal resolution, or an index number referring to a matching entry of a corresponding code book accessible at decoding side, are contained in the side information. 16. The apparatus according to claim 15, wherein the topology is determined by: performing a spectral flatness measure (SFM) using the first forward transform, by determining for selected frequency bands a spectral power value of transform bins and dividing an arithmetic mean value of the spectral power values by their geometric mean value; sub-segmenting an un-weighted input signal section, performing weighting and short transforms on m sub-sections where a frequency resolution of these transforms corresponds to the selected frequency bands; for each frequency line consisting of m transform segments, determining the spectral power value and calculating a temporal flatness measure (TFM) by determining the arithmetic mean divided by a geometric mean of the m transform segments; determining tonal or noisy frequency bands by using the SFM; and using the TFM for recognizing the temporal variations in the tonal or noisy frequency bands and using threshold values for switching to finer temporal resolution for the determined noisy frequency bands. 17. The apparatus according to claim 14, wherein in case more than one different second length is used successively, lengths increase starting from frequency bins representing low frequency lines. Description: FIELD OF THE INVENTION This application claims the benefit, under 35 U.S.C.
§ 119, of European Patent Application 07110289.1, filed Jun. 14, 2007. The invention relates to a method and to an apparatus for encoding and decoding an audio signal using transform coding and adaptive switching of the temporal resolution in the spectral domain. BACKGROUND OF THE INVENTION Perceptual audio codecs make use of filter banks and MDCT (modified discrete cosine transform, a forward transform) in order to achieve a compact representation of the audio signal, i.e. a redundancy reduction, and to be able to reduce irrelevancy from the original audio signal. During quasi-stationary parts of the audio signal a high frequency or spectral resolution of the filter bank is advantageous in order to achieve a high coding gain, but this high frequency resolution is coupled to a coarse temporal resolution that becomes a problem during transient signal parts. A well-known consequence is audible pre-echo effects. B. Edler, "Codierung von Audiosignalen mit überlappender Transformation und adaptiven Fensterfunktionen", Frequenz, Vol. 43, No. 9, p. 252-256, September 1989, discloses adaptive window switching in the time domain and/or transform length switching, which is a switching between two resolutions by alternatively using two window functions with different length. U.S. Pat. No. 6,029,126 describes a long transform, whereby the temporal resolution is increased by combining spectral bands using a matrix multiplication. Switching between different fixed resolutions is carried out in order to avoid window switching in the time domain. This can be used to create non-uniform filter-banks having two different resolutions. WO-A-03/019532 discloses sub-band merging in cosine modulated filter-banks, which is a very complex way of filter design suited for poly-phase filter bank construction.
SUMMARY OF THE INVENTION The above-mentioned window and/or transform length switching disclosed by Edler is sub-optimum because of long delay due to long look-ahead and low frequency resolution of short blocks, which prevents providing a sufficient resolution for optimum irrelevancy reduction. A problem to be solved by the invention is to provide an improved coding/decoding gain by applying a high frequency resolution as well as high temporal resolution for transient audio signal parts. The invention achieves improved coding/decoding quality by applying on top of the output of a first filter bank a second non-uniform filter bank, i.e. a cascaded MDCT. The inventive codec uses switching to an additional extension filter bank (or multi-resolution filter bank) in order to re-group the time-frequency representation during transient or fast changing audio signal sections. By applying a corresponding switching control, pre-echo effects are avoided and a high coding gain is achieved. Advantageously, the inventive codec has a low coding delay (no long look-ahead is required). In principle, the inventive encoding method is suited for encoding an input signal, e.g.
an audio signal, using a first forward transform into the frequency domain being applied to first-length sections of said input signal, and using adaptive switching of the temporal resolution, followed by quantization and entropy encoding of the values of the resulting frequency domain bins, wherein control of said switching, quantization and/or entropy encoding is derived from a psycho-acoustic analysis of said input signal, including the steps of: adaptively controlling said temporal resolution by performing a second forward transform following said first forward transform and being applied to second-length sections of said transformed first-length sections, wherein said second length is smaller than said first length and either the output values of said first forward transform or the output values of said second forward transform are processed in said quantization and entropy encoding; attaching to the encoding output signal corresponding temporal resolution control information as side information. In principle the inventive encoding apparatus is suited for encoding an input signal, e.g.
an audio signal, said apparatus including: first forward transform means being adapted for transforming first-length sections of said input signal into the frequency domain; second forward transform means being adapted for transforming second-length sections of said transformed first-length sections, wherein said second length is smaller than said first length; means being adapted for quantizing and entropy encoding the output values of said first forward transform means or the output values of said second forward transform means; means being adapted for controlling said quantization and/or entropy encoding and for controlling adaptively whether said output values of said first forward transform means or the output values of said second forward transform means are processed in said quantizing and entropy encoding means, wherein said controlling is derived from a psycho-acoustic analysis of said input signal; means being adapted for attaching to the encoding apparatus output signal corresponding temporal resolution control information as side information. In principle, the inventive decoding method is suited for decoding an encoded signal, e.g.
an audio signal, that was encoded using a first forward transform into the frequency domain being applied to first-length sections of said input signal, wherein the temporal resolution was adaptively switched by performing a second forward transform following said first forward transform and being applied to second-length sections of said transformed first-length sections, wherein said second length is smaller than said first length and either the output values of said first forward transform or the output values of said second forward transform were processed in a quantization and entropy encoding, and wherein control of said switching, quantization and/or entropy encoding was derived from a psycho-acoustic analysis of said input signal and corresponding temporal resolution control information was attached to the encoding output signal as side information, said decoding method including the steps of: providing from said encoded signal said side information; inversely quantizing and entropy decoding said encoded signal; corresponding to said side information, either performing a first inverse transform into the time domain, said first inverse transform operating on first-length signal sections of said inversely quantized and entropy decoded signal and said first inverse transform providing the decoded signal, or processing second-length sections of said inversely quantized and entropy decoded signal in a second inverse transform before performing said first inverse transform. In principle, the inventive decoding apparatus is suited for decoding an encoded signal, e.g.
an audio signal, that was encoded using a first forward transform into the frequency domain being applied to first-length sections of said input signal, wherein the temporal resolution was adaptively switched by performing a second forward transform following said first forward transform and being applied to second-length sections of said transformed first-length sections, wherein said second length is smaller than said first length and either the output values of said first forward transform or the output values of said second forward transform were processed in a quantization and entropy encoding, and wherein control of said switching, quantization and/or entropy encoding was derived from a psycho-acoustic analysis of said input signal and corresponding temporal resolution control information was attached to the encoding output signal as side information, said apparatus including: means being adapted for providing from said encoded signal said side information and for inversely quantizing and entropy decoding said encoded signal; means being adapted for, corresponding to said side information, either performing a first inverse transform into the time domain, said first inverse transform operating on first-length signal sections of said inversely quantized and entropy decoded signal and said first inverse transform providing the decoded signal, or processing second-length sections of said inversely quantized and entropy decoded signal in a second inverse transform before performing said first inverse transform. BRIEF DESCRIPTION OF THE DRAWINGS Exemplary embodiments of the invention are described with reference to the accompanying drawings, which show in: FIG. 1 inventive encoder; FIG. 2 inventive decoder; FIG. 3 a block of audio samples that is windowed and transformed with a long MDCT, and series of non-uniform MDCTs applied to the frequency data; FIG. 4 changing the time-frequency resolution by changing the block length of the MDCT; FIG.
5 transition windows; FIG. 6 window sequence example for second-stage MDCTs; FIG. 7 start and stop windows for first and last MDCT; FIG. 8 time domain signal of a transient, T/F plot of first MDCT stage and T/F plot of second-stage MDCTs with an 8-fold temporal resolution topology; FIG. 9 time domain signal of a transient, second-stage filter bank T/F plot of a single, 2-fold, 4-fold and 8-fold temporal resolution topology; FIG. 10 more detail for the window processing according to FIG. 6. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS In FIG. 1, the magnitude values of each successive overlapping block or segment or section of samples of a coder input audio signal CIS are weighted by a window function and transformed in a long (i.e. a high frequency resolution) MDCT filter bank or transform stage or step MDCT-1, providing corresponding transform coefficients or frequency bins. During transient audio signal sections a second MDCT filter bank or transform stage or step MDCT-2, either with shorter fixed transform length or preferably a multi-resolution MDCT filter bank having different shorter transform lengths, is applied to the frequency bins of the first forward transform (i.e. on the same block) in order to change the frequency and temporal filter resolutions, i.e. a series of non-uniform MDCTs is applied to the frequency data, whereby a non-uniform time/frequency representation is generated. The amplitude values of each successive overlapping section of frequency bins of the first forward transform are weighted by a window function prior to the second-stage transform. The window functions used for the weighting are explained in connection with FIGS. 4 to 7 and equations (3) and (4). In case of MDCT or integer MDCT transforms, the sections are 50% overlapping. In case a different transform is used the degree of overlapping can be different.
In case only two different transform lengths are used for stage or step MDCT-2, that step or stage when considered alone is similar to the above-mentioned Edler codec. The switching on or off of the second MDCT filter bank MDCT-2 can be performed using first and second switches SW1 and SW2 and is controlled by a filter bank control unit or step FBCTL that is integrated into, or is operating in parallel to, a psycho-acoustic analyzer stage or step PSYM, which both receive signal CIS. Stage or step PSYM uses temporal and spectral information from the input signal CIS. The topology or status of the 2nd stage filter MDCT-2 is coded as side information into the coder output bit stream COS. The frequency data output from switch SW2 is quantized and entropy encoded in a quantiser and entropy encoding stage or step QUCOD that is controlled by psycho-acoustic analyzer PSYM, in particular the quantization step sizes. The output from stages QUCOD (encoded frequency bins) and FBCTL (topology or status information or temporal resolution control information or switching information SW1 or side information) is combined in a stream packer step or stage STRPCK and forms the output bit stream COS. The quantizing can be replaced by inserting a distortion signal. In FIG. 2, at decoder side, the decoder input bit stream DIS is de-packed and correspondingly decoded and inversely `quantized` (or re-quantized) in a depacking, decoding and re-quantizing stage or step DPCRQU, which provides correspondingly decoded frequency bins and switching information SW1. A correspondingly inverse non-uniform MDCT step or stage iMDCT-2 is applied to these decoded frequency bins using e.g. switches SW3 and SW4, if so signaled by the bit stream via switching information SW1. The amplitude values of each successive section of inversely transformed values are weighted by a window function following the transform in step or stage iMDCT-2, which weighting is followed by an overlap-add processing.
The signal is reconstructed by applying either to the decoded frequency bins or to the output of step or stage iMDCT-2 a correspondingly inverse high-resolution MDCT step or stage iMDCT-1. The amplitude values of each successive section of inversely transformed values are weighted by a window function following the transform in step or stage iMDCT-1, which weighting is followed by an overlap-add processing. Thereafter, the PCM audio decoder output signal DOS is obtained. The transform lengths applied at decoding side mirror the corresponding transform lengths applied at encoding side, i.e. the same block of received values is inverse transformed twice. The window functions used for the weighting are explained in connection with FIGS. 4 to 7 and equations (3) and (4). In case of inverse MDCT or inverse integer MDCT transforms, the sections are 50% overlapping. In case a different inverse transform is used the degree of overlapping can be different. FIG. 3 depicts the above-mentioned processing, i.e. applying first and second stage filter banks. On the left side a block of time domain samples is windowed and transformed in a long MDCT to the frequency domain. During transient audio signal sections a series of non-uniform MDCTs is applied to the frequency data to generate a non-uniform time/frequency representation shown at the right side of FIG. 3. The time/frequency representations are displayed in grey or hatched. The time/frequency representation (on the left side) of the first stage transform or filter bank MDCT-1 offers a high frequency or spectral resolution that is optimum for encoding stationary signal sections. Filter banks MDCT-1 and iMDCT-1 represent a constant-size MDCT and iMDCT pair with 50% overlapping blocks. Overlap-and-add (OLA) is used in filter bank iMDCT-1 to cancel the time domain alias. Therefore the filter bank pair MDCT-1 and iMDCT-1 is capable of theoretical perfect reconstruction.
Fast changing signal sections, especially transient signals, are better represented in time/frequency with resolutions matching the human perception or representing a maximum signal compaction tuned to time/frequency. This is achieved by applying the second transform filter bank MDCT-2 onto a block of selected frequency bins of the first forward transform filter bank MDCT-1. The second forward transform is characterized by using 50% overlapping windows of different sizes, using transition window functions (i.e. `Edler window functions`, each of which having asymmetric slopes) when switching from one size to another, as shown in the medium section of FIG. 3. Window sizes start from length 4 up to length 2.sup.n, wherein n is an integer number greater than 2. A window size of `4` combines two frequency bins and doubles the time resolution, a window size of 2.sup.n combines 2.sup.(n-1) frequency bins and increases the temporal resolution by factor 2.sup.(n-1). Special start and stop window functions (transition windows) are used at the beginning and at the end of the series of MDCTs. At decoding side, filter bank iMDCT-2 applies the inverse transform including OLA. Thereby the filter bank pair MDCT-2/iMDCT-2 is capable of theoretical perfect reconstruction. The output data of filter bank MDCT-2 is combined with single-resolution bins of filter bank MDCT-1 which were not included when applying filter bank MDCT-2. The output of each transform or MDCT of filter bank MDCT-2 can be interpreted as time-reversed temporal samples of the combined frequency bins of the first forward transform. Advantageously, a construction of a non-uniform time/frequency representation as depicted at the right side of FIG. 3 now becomes feasible. The filter bank control unit or step FBCTL performs a signal analysis of the actual processing block using time data and excitation patterns from the psycho-acoustic model in psycho-acoustic analyzer stage or step PSYM.
In a simplified embodiment it switches during transient signal sections to fixed-filter topologies of filter bank MDCT-2, which filter bank may make use of a time/frequency resolution of human perception. Advantageously, only few bits of side information are required for signaling to the decoding side, as a code-book entry, the desired topology of filter bank iMDCT-2. In a more complex embodiment, the filter bank control unit or step FBCTL evaluates the spectral and temporal flatness of input signal CIS and determines a flexible filter topology of filter bank MDCT-2. In this embodiment it is sufficient to transmit to the decoder the coded starting locations of the start window, transition window and stop window positions in order to enable the construction of filter bank iMDCT-2. The psycho-acoustic model makes use of the high spectral resolution equivalent to the resolution of filter bank MDCT-1 and, at the same time, of a coarse spectral but high temporal resolution signal analysis. This second resolution can match the coarsest frequency resolution of filter bank MDCT-2. As an alternative, the psycho-acoustic model can also be driven directly by the output of filter bank MDCT-1, and during transient signal sections by the time/frequency representation as depicted at the right side of FIG. 3 following applying filter bank MDCT-2. In the following, a more detailed system description is provided. The MDCT The Modified Discrete Cosine Transformation (MDCT) and the inverse MDCT (iMDCT) can be considered as representing a critically sampled filter bank. The MDCT was first named "Oddly-stacked time domain alias cancellation transform" by J. P. Princen and A. B. Bradley in "Analysis/synthesis filter bank design based on time domain aliasing cancellation", IEEE Transactions on Acoust. Speech Sig. Proc. ASSP-34 (5), pp. 1153-1161, 1986. H. S. Malvar, "Signal processing with lapped transform", Artech House Inc., Norwood, 1992, and M. Temerinac, B.
Edler, "A unified approach to lapped orthogonal transforms", IEEE Transactions on Image Processing, Vol. 1, No. 1, pp. 111-116, January 1992, have called it "Modulated Lapped Transform (MLT)" and have shown its relations to lapped orthogonal transforms in general and have also proved it to be a special case of a QMF filter bank. The equations of the transform and the inverse transform are given in equations (1) and (2), with K=N/2: X(k) = sum_{n=0..N-1} h(n)·x(n)·cos[(pi/K)·(n+1/2+K/2)·(k+1/2)], k = 0, . . . , K-1 (1) y(n) = (2/K)·h(n)·sum_{k=0..K-1} X(k)·cos[(pi/K)·(n+1/2+K/2)·(k+1/2)], n = 0, . . . , N-1 (2) In these transforms, 50% overlapping blocks are processed. At encoding side, in each case, a block of N samples is windowed and the magnitude values are weighted by window function h(n), and the block is thereafter transformed to K=N/2 frequency bins, wherein N is an integer number. At decoding side, the inverse transform converts in each case M frequency bins to N time samples and thereafter the magnitude values are weighted by window function h(n), wherein N and M are integer numbers. A following overlap-add procedure cancels out the time alias. The window function h(n) must fulfill some constraints to enable perfect reconstruction, see equations (3) and (4): h.sup.2(n+N/2)+h.sup.2(n)=1 (3) h(n)=h(N-n-1) (4) Analysis and synthesis window functions can also be different but the inverse transform lengths used in the decoding correspond to the transform lengths used in the encoding. However, this option is not considered here. A suitable window function is the sine window function given in (5): h(n)=sin[(pi/N)·(n+1/2)] (5) In the above-mentioned article, Edler has shown switching the MDCT time-frequency resolution using transition windows. An example of switching (caused by transient conditions) using transition windows 1, 10 from a long transform to eight short transforms is depicted in the bottom part of FIG.
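To illustrate the windowed MDCT/iMDCT pair and the overlap-add cancellation of the time alias described above, the following is a minimal NumPy sketch (not the patented implementation; the block length N=16, the matrix formulation and the 2/K inverse scaling are illustrative choices consistent with equations (1), (2) and (5)):

```python
import numpy as np

def mdct_pair(N):
    """Sine window h(n) and oddly-stacked cosine basis for block length N, K = N/2 bins."""
    K = N // 2
    n = np.arange(N)
    k = np.arange(K)
    h = np.sin(np.pi / N * (n + 0.5))                       # sine window, eq. (5)
    C = np.cos(np.pi / K * np.outer(k + 0.5, n + 0.5 + K / 2))
    return h, C

def mdct(x, h, C):
    return C @ (h * x)                                      # N samples -> K bins

def imdct(X, h, C):
    K = C.shape[0]
    return (2.0 / K) * (C.T @ X) * h                        # K bins -> N windowed samples

# perfect reconstruction via 50% overlap-add over a few blocks
N = 16
h, C = mdct_pair(N)
x = np.random.default_rng(0).standard_normal(3 * N)
y = np.zeros_like(x)
hop = N // 2
for start in range(0, len(x) - N + 1, hop):
    y[start:start + N] += imdct(mdct(x[start:start + N], h, C), h, C)
# samples covered by two overlapping blocks are reconstructed exactly
err = np.max(np.abs(y[hop:-hop] - x[hop:-hop]))
print(err < 1e-10)  # True
```

The time-domain alias introduced by each block cancels against the alias of the neighbouring block after windowing and overlap-add, which is the property exploited by both filter bank stages.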
4, which shows the gain G of the window functions in vertical direction and the time, i.e. the input signal samples, in horizontal direction. In the upper part of this figure three successive basic window functions A, B and C as applied in steady state conditions are shown. The transition window functions have the length N.sub.L of the long transform. At the smaller-window side end there are r zero-amplitude window function samples. Towards the window function centre located at N.sub.L/2, a mirrored half-window function for the small transform (having a length of N.sub.short samples) follows, further followed by r window function samples having a value of `one` (or a `unity` constant). The principle is depicted for a transition to short window at the left side of FIG. 5 and for a transition from short window at the right side of FIG. 5. Value r is given by r=(N.sub.L-N.sub.short)/4 (6) Multi-Resolution Filter Bank The first-stage filter bank MDCT-1, iMDCT-1 is a high resolution MDCT filter bank having a sub-band filter bandwidth of e.g. 15-25 Hz. For audio sampling rates of e.g. 32-48 kHz a typical length of N.sub.L is 2048 samples. The window function h(n) satisfies equations (3) and (4). Following application of filter MDCT-1 there are 1024 frequency bins in the preferred embodiment. For stationary input signal sections, these bins are quantized according to psycho-acoustic considerations. Fast changing, transient input signal sections are processed by the additional MDCT applied to the bins of the first MDCT. This additional step or stage merges two, four, eight, sixteen or more sub-bands and thereby increases the temporal resolution, as depicted in the right part of FIG. 3. FIG. 6 shows an example sequence of applied windowing for the second-stage MDCTs within the frequency domain. Therefore the horizontal axis is related to f/bins. The transition window functions are designed according to FIG. 5 and equation (6), like in the time domain.
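The transition-window construction described above (r zeros at the smaller-window side, a mirrored short half-window, r ones toward the centre, with r per equation (6)) can be sketched as follows. This is an illustrative sketch building only the transition-to-short window from sine-window halves; the function name and the N_long/N_short values are assumptions, not from the source:

```python
import numpy as np

def sine_window(N):
    n = np.arange(N)
    return np.sin(np.pi / N * (n + 0.5))   # eq. (5)

def transition_to_short(N_long, N_short):
    """Transition window of total length N_long, eq. (6): r = (N_long - N_short)/4.
    Left half: rising half of the long sine window.
    Right half, centre outward: r ones, falling short half-window, r zeros."""
    r = (N_long - N_short) // 4
    left = sine_window(N_long)[: N_long // 2]
    short_fall = sine_window(N_short)[N_short // 2:]
    right = np.concatenate([np.ones(r), short_fall, np.zeros(r)])
    return np.concatenate([left, right])

w = transition_to_short(2048, 256)          # r = 448
```

The length bookkeeping closes exactly: r + N_short/2 + r = (N_long - N_short)/2 + N_short/2 = N_long/2, so the two halves together span N_long samples.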
Special start window functions STW and stop window functions SPW handle the start and end sections of the transformed signal, i.e. the first and the last MDCT. The design principle of these start and stop window functions is shown in FIG. 7. One half of these window functions mirrors a half-window function of a normal or regular window function NW, e.g. a sine window function according to equation (5). Of the other half of these window functions, the adjacent half has a continuous gain of `one` (or a `unity` constant) and the other half has a gain of zero. Due to the properties of MDCT, performing MDCT-2 can also be regarded as a partial inverse transformation. When applying the forward MDCTs of the second stage MDCTs, each one of such new MDCT (MDCT-2) can be regarded as a new frequency line (bin) that has combined the original windowed bins, and the time reversed output of that new MDCT can be regarded as the new temporal blocks. The presentation in FIGS. 8 and 9 is based on this assumption or condition. Indices ki in FIG. 6 indicate the regions of changing temporal resolution. Frequency bins starting from position zero up to position k1-1 are copied from (i.e. represent) the first forward transform (MDCT-1), which corresponds to a single temporal resolution. Bins from index k1-1 to index k2 are transformed to g1 frequency lines. g1 is equal to the number of transforms performed (that number corresponds to the number of overlapping windows and can be considered as the number of frequency bins in the second or upper transform level MDCT-2). The start index is bin k1-1 because index k1 is selected as the second sample in the first forward transform in FIG. 6 (the first sample has a zero amplitude, see also FIG. 10a). g1=(number_of_windowed_bins)/(N/2)-1=(k2-k1+1)/2-1, with a regular window size N of e.g. 4 bins, which size creates a section with doubled temporal resolution. Bins from index k2-3 to index k3+4 are combined to g2 frequency lines (transforms), i.e.
g2=(k3-k2+2)/4-1. The regular window size is e.g. 8 bins, which size results in a section with quadrupled temporal resolution. The next section in FIG. 6 is transformed by windows (transform length) spanning e.g. 16 bins, which size results in sections having eightfold temporal resolution. Windowing starts at bin k3-5. If this is the last resolution selected (as is true for FIG. 6), then it ends at bin k4+4, otherwise at bin k4. Where the order (i.e. the length) of the second-stage transform is variable over successive transform blocks, starting from frequency bins corresponding to low frequency lines, the first second-stage MDCTs will start with a small order and the following second-stage MDCTs will have a higher order. Transition windows fulfilling the characteristics for perfect reconstruction are used. The processing according to FIG. 6 is further explained in FIG. 10, which shows a sample-accurate assignment of frequency indices that mark areas of a second (i.e. cascaded) transform (MDCT-2), which second transform achieves a better temporal resolution. The circles represent bin positions, i.e. frequency lines of the first or initial transform (MDCT-1). FIG. 10a shows the area of 4-point second-stage MDCTs that are used to provide doubled temporal resolution. The five MDCT sections depicted create five new spectral lines. FIG. 10b shows the area of 8-point second-stage MDCTs that are used to provide fourfold temporal resolution. Three MDCT sections are depicted. FIG. 10c shows the area of 16-point second-stage MDCTs that are used to provide eightfold temporal resolution. Four MDCT sections are depicted. At decoder side, stationary signals are restored using filter bank iMDCT-1, the iMDCT of the long transform blocks including the overlap-add procedure (OLA) to cancel the time alias.
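The index bookkeeping above, with g1 and g2 as the number of new second-stage frequency lines per resolution region, can be sketched as a hypothetical helper (the index values in the usage line are made up for illustration, not taken from FIG. 6):

```python
def second_stage_lines(k1, k2, k3):
    """Number of new frequency lines produced by the second-stage MDCTs,
    per the formulas in the text:
      g1 = (k2 - k1 + 1)/2 - 1   (4-bin windows, doubled temporal resolution)
      g2 = (k3 - k2 + 2)/4 - 1   (8-bin windows, quadrupled temporal resolution)"""
    g1 = (k2 - k1 + 1) // 2 - 1
    g2 = (k3 - k2 + 2) // 4 - 1
    return g1, g2

# illustrative indices: five 4-bin sections, two 8-bin sections
print(second_stage_lines(1, 12, 22))  # (5, 2)
```

Each 4-bin window advances by N/2 = 2 bins, which is why the region width divided by the hop, minus the start/stop half-windows, yields the section count.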
When so signaled in the bitstream, the decoding or the decoder, respectively, switches to the multi-resolution filter bank iMDCT-2 by applying a sequence of iMDCTs according to the signaled topology (including OLA) before applying filter bank iMDCT-1. Signaling the Filter Bank Topology to the Decoder The simplest embodiment makes use of a single fixed topology for filter bank MDCT-2/iMDCT-2 and signals this with a single bit in the transferred bitstream. In case more fixed sets of topologies are used, a corresponding number of bits is used for signaling the currently used one of the topologies. More advanced embodiments pick the best out of a set of fixed code-book topologies and signal a corresponding code-book entry inside the bitstream. In embodiments where the filter topology of the second-stage transforms is not fixed, a corresponding side information is transmitted in the encoding output bitstream. Preferably, indices k1, k2, k3, k4, . . . , kend are transmitted. Starting with quadrupled resolution, k2 is transmitted with the same value as k1, equal to bin zero. In topologies ending with temporal resolutions coarser than the maximum temporal resolution, the value transmitted in kend is copied to k4, k3, . . . . The following table illustrates this with some examples. bi is a place holder for a frequency bin as a value.

TABLE-US-00001 Indices signaling topology

  Topology                                            k1      k2   k3   k4     kend
  Topology with 1x, 2x, 4x, 8x, 16x                   b1 > 1  b2   b3   b4     b5
    temporal resolutions
  Topology with 1x, 2x, 4x, 8x temporal               b1 > 1  b2   b3   b4     b4
    resolutions (like in FIG. 6)
  Topology with 8x temporal resolution only           0       0    0    bmax   bmax
  Topology with 4x, 8x and 16x temporal resolution    0       0    b2   b3     bmax

Due to temporal psycho-acoustic properties of the human auditory system it is sufficient to restrict this to topologies with temporal resolution increasing with frequency. Filter Bank Topology Examples FIGS.
8 and 9 depict two examples of multi-resolution T/F (time/frequency) energy plots of a second-stage filter bank. FIG. 8 shows an `8x temporal resolution only` topology. A time domain signal transient in FIG. 8a is depicted as amplitude over time (time expressed in samples). FIG. 8b shows the corresponding T/F energy plot of the first-stage MDCT (frequency in bins over normalized time corresponding to one transform block), and FIG. 8c shows the corresponding T/F plot of the second-stage MDCTs (8*128 time-frequency tiles). FIG. 9 shows a `1x, 2x, 4x, 8x` topology. A time domain signal transient in FIG. 9a is depicted as amplitude over time (time expressed in samples). FIG. 9b shows the corresponding T/F plot of the second-stage MDCTs, whereby the frequency resolution for the lower band part is selected proportional to the bandwidths of perception of the human auditory system (critical bands), with bN1=16, bN2=16, bN4=16, bN8=114, for 1024 coefficients in total (these numbers have the following meaning: 16 frequency lines having single temporal resolution, 16 frequency lines having double, 16 frequency lines having 4 times, and 114 frequency lines having 8 times temporal resolution). For the low frequencies there is a single partition, followed by two and four partitions and, above about f=50, eight partitions.

Filter Bank Control

The simplest embodiment can use any state-of-the-art transient detector to switch to a fixed topology matching, or coming close to, the T/F resolution of human perception. The preferred embodiment uses a more advanced control processing: Calculate a spectral flatness measure SFM, e.g. according to equation (7), over selected bands of M frequency lines (f_bin) of the power spectral density Pm by using a discrete Fourier transform (DFT) of a windowed signal of a long transform block with N_L samples, i.e.
the length of MDCT-1 (the selected bands are proportional to critical bands); Divide the analysis block of N_L samples into S>8 overlapping blocks and apply S windowed DFTs on the sub-blocks. Arrange the result as a matrix having S columns (temporal resolution, t_block) and a number of rows according to the number of frequency lines of each DFT, S being an integer; Calculate S spectrograms Ps, e.g. general power spectral densities or psycho-acoustically shaped spectrograms (or excitation patterns); For each frequency line determine a temporal flatness measure (TFM) according to equation (8); Use the SFM vector to determine tonal or noisy bands, and use the TFM vector to recognize the temporal variations within these bands. Use threshold values to decide whether or not to switch to the multi-resolution filter bank and what topology to pick. Both flatness measures are ratios of an arithmetic to a geometric mean:

  SFM = [ (1/M) * sum_{m=1..M} P_m ] / [ prod_{m=1..M} P_m ]^(1/M)    (7)

  TFM(f_bin) = [ (1/S) * sum_{s=1..S} P_s(f_bin) ] / [ prod_{s=1..S} P_s(f_bin) ]^(1/S)    (8)
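The flatness computation described for equations (7) and (8) — the arithmetic mean of the spectral powers divided by their geometric mean — can be sketched as follows. This is an illustrative reconstruction; the function names are ours, and the windowing/DFT front end producing the power values is omitted:

```python
from math import exp, log

def flatness(powers):
    """Arithmetic mean over geometric mean of positive power values.
    Equals 1 for a perfectly flat (noise-like) profile and grows for
    peaky (tonal or transient) profiles."""
    n = len(powers)
    arith = sum(powers) / n
    geo = exp(sum(log(p) for p in powers) / n)
    return arith / geo

def sfm(power_spectrum, band):
    """Spectral flatness over one band; `band` is an iterable of bin
    indices into the long-block power spectral density (cf. eq. (7))."""
    return flatness([power_spectrum[b] for b in band])

def tfm(spectrograms, f_bin):
    """Temporal flatness of one frequency line across the S sub-block
    spectrograms (cf. eq. (8))."""
    return flatness([ps[f_bin] for ps in spectrograms])
```

Thresholding the SFM values per band and the TFM values per frequency line then drives the decision whether to switch to the multi-resolution filter bank and which topology to pick.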
In a different embodiment, the topology is determined by the following steps: performing a spectral flatness measure SFM using said first forward transform, by determining for selected frequency bands the spectral power of transform bins and dividing the arithmetic mean value of said spectral power values by their geometric mean value; sub-segmenting an un-weighted input signal section, performing weighting and short transforms on m sub-sections where the frequency resolution of these transforms corresponds to said selected frequency bands; for each frequency line consisting of m transform segments, determining the spectral power and calculating a temporal flatness measure TFM by determining the arithmetic mean divided by the geometric mean of the m segments; determining tonal or noisy bands by using the SFM values; using the TFM values for recognizing the temporal variations in these bands. Threshold values are used for switching to finer temporal resolution for said indicated noisy frequency bands. The MDCT can be replaced by a DCT, in particular a DCT-4. Instead of applying the invention to audio signals, it can also be applied in a corresponding way to video signals, in which case the psycho-acoustic analyzer PSYM is replaced by an analyzer taking into account the human visual system properties. The invention can be used in a watermark embedder. The advantage of embedding digital watermark information into an audio or video signal using the inventive multi-resolution filter bank, when compared to a direct embedding, is an increased robustness of watermark information transmission and watermark information detection at receiver side. In one embodiment of the invention the cascaded filter bank is used with an audio watermarking system. In the watermarking encoder a first (integer) MDCT is performed. A first watermark is inserted into bins 0 to k1-1 using a psycho-acoustic controlled embedding process.
The purpose of this watermark can be frame synchronization at the watermark decoder. Second-stage variable size (integer) MDCTs are applied to bins starting from bin index k1 as described before. The output of this second stage is re-sorted to gain a time-frequency expression by interpreting the output as time-reversed temporal blocks and each second-stage MDCT as a new frequency line (bin). A second watermark signal is added onto each one of these new frequency lines by using an attenuation factor that is controlled by psycho-acoustic considerations. The data is re-sorted and the inverse (integer) MDCT (related to the above-mentioned second-stage MDCT) is performed as described for the above embodiments (decoder), including windowing and overlap/add. The full spectrum related to the first forward transform is restored. The full-size inverse (integer) MDCT performed onto that data, windowing and overlap/add restores a time signal with a watermark embedded. The multi-resolution filter bank is also used within the watermark decoder. Here the topology of the second-stage MDCTs is fixed by the application.

* * * * *
dimension 2 (October 24th 2010)

Q: Since the dimension of the zero vector space is 0, does it mean that if W = {0, v} then dim W = 1? I thought, by definition of dimension, dim W = 2 instead. I don't get why the dimension of the zero vector space is 0 when there is one element, namely the zero vector.

A: Do you mean W = span{0, v}? In this case, we do have dim W = 1. The reason is that the vectors 0 and v are not linearly independent (for example, 7*0 + 0*v = 0). Hence span{0, v} = span{v}, and so dim W = 1 (remember that the dimension of a vector space is the number of basis vectors in a given basis, or equivalently, the maximal number of linearly independent vectors in the vector space). As for why the dimension of the vector space consisting of nothing but the zero vector is 0, you can think of it as a convenient convention. In R^3, the dimension of a plane is 2. Moving down in dimension, the line has dimension 1, and moving down in dimension again, you have the vector space {0}, which is assumed to have dimension 0.

Q: I just realized: isn't it always said that R^n has dimension n (R represents the real numbers here)? So how can it be that in R^3 the dimension of a plane is 2?

A: Your statement that $\mathbb{R}^n$ always has dimension $n$ is correct. But then take a plane, say $P$, inside of $\mathbb{R}^3$. This plane is a subspace of $\mathbb{R}^3$, and as such, there is no reason why the plane should have dimension 3. Remember, it is all of $\mathbb{R}^3$ that has dimension 3 as a vector space. This does not imply that all subspaces of $\mathbb{R}^3$ have dimension 3.
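The answer's point — that span{0, v} = span{v} has dimension 1 — can also be checked mechanically: the dimension of a span is the rank of the matrix whose rows are the spanning vectors. A small pure-Python rank routine (illustrative; Gaussian elimination with a tolerance):

```python
def rank(vectors, tol=1e-12):
    """Dimension of span(vectors), computed by Gaussian elimination."""
    rows = [list(map(float, v)) for v in vectors]
    if not rows:
        return 0
    r = 0  # number of pivots found so far
    for c in range(len(rows[0])):
        # find a pivot row for column c at or below row r
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > tol), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        # eliminate column c from every other row
        for i in range(len(rows)):
            if i != r:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

print(rank([[0, 0, 0], [1, 2, 3]]))   # 1: the zero vector adds nothing
print(rank([[0, 0, 0]]))              # 0: the zero subspace has dimension 0
```

The same routine confirms that a plane in R^3 spanned by two independent vectors has rank (dimension) 2, not 3.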
TR06-015 | 1st February 2006

On Barnette's conjecture

Barnette's conjecture is the statement that every 3-connected cubic planar bipartite graph is Hamiltonian. Goodey showed that the conjecture holds when all faces of the graph have either 4 or 6 sides. We generalize Goodey's result by showing that when the faces of such a graph are 3-colored, with adjacent faces having different colors, if two of the three color classes contain only faces with either 4 or 6 sides, then the conjecture holds. More generally, we consider 3-connected cubic planar graphs that are not necessarily bipartite, and show that if the faces of such a graph are 2-colored, with every vertex incident to one blue face and two red faces, and all red faces have either 4 or 6 sides, while the blue faces are arbitrary, provided that blue faces with either 3 or 5 sides are adjacent to a red face with 4 sides (but without any assumption on blue faces with $4,6,7,8,9,\ldots$ sides), then the graph is Hamiltonian. The approach is to consider the reduced graph obtained by contracting each blue face to a single vertex, so that the reduced graph has faces corresponding to the original red faces and with either 2 or 3 sides, and to show that such a reduced graph always contains a proper quasi spanning tree of faces. In general, for a reduced graph with arbitrary faces, we give a polynomial-time algorithm based on spanning tree parity to decide if the reduced graph has a spanning tree of faces having 2 or 3 sides, while deciding if the reduced graph has a spanning tree of faces with 4 sides, or of arbitrary faces, is NP-complete for reduced graphs of even degree. As a corollary, we show that deciding whether a reduced graph has a noncrossing Euler tour can be done in polynomial time if all vertices have degree 4 or 6, but is NP-complete if all vertices have degree 8.
Finally, we show that if Barnette's conjecture is false, then the question of whether a graph in the class of the conjecture is Hamiltonian is NP-complete.
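As a toy illustration of the bipartite case covered by Goodey's result: the 3-cube graph Q3 is 3-connected, cubic, planar, and bipartite with all faces 4-sided, so it must be Hamiltonian. A brute-force search confirms this (illustrative code, not from the report):

```python
from itertools import permutations

# Q3, the 3-dimensional cube graph: vertices are 3-bit strings, edges join
# strings differing in exactly one bit. It is 3-connected, cubic, planar
# and bipartite, and every face is 4-sided.
edges = {(u, v) for u in range(8) for v in range(8)
         if bin(u ^ v).count("1") == 1}

def is_hamiltonian(n, edges):
    """Brute-force test: fix vertex 0 and try every ordering of the rest."""
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all((cycle[i], cycle[(i + 1) % n]) in edges for i in range(n)):
            return True
    return False

print(is_hamiltonian(8, edges))   # True (e.g. the Gray-code order 0,1,3,2,6,7,5,4)
```

Barnette's conjecture asserts the same conclusion without any restriction on the face sizes of such graphs.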
Please Help! Discrete Proof By Mathematical Induction. (February 26th 2008)

Q: Prove that a set with n elements has n(n-1)(n-2)/6 subsets containing exactly three elements whenever n is an integer greater than or equal to 3. Please help; I understand the basics but I am currently stuck on this problem.

A: ${\binom {n} {3}}=\frac{n!}{\left( 3! \right)\left( n - 3 \right)!} = \frac{n \left( n - 1 \right)\left( n - 2 \right)\left( n - 3 \right)!}{\left( 3! \right)\left( n - 3 \right)!} = \frac{n \left( n - 1 \right)\left( n - 2 \right)}{6}$

Q: Thanks, but it has to include a basis, induction hypothesis, and induction step for it to be a complete proof.

A: This is a perfect example of why many of us think that the current state of mathematics training is in the dumps. Given how many students have no clear idea what 'induction proofs' are all about, why complicate things with such a problem? Is this a set theory class? If it is, then I am wrong. Otherwise, I stand by what I have written.

A: Then base case: for $n = 3$, verify. Inductive step: assume that for some integer $k$ the statement holds true. Prove that $P(k) \implies P(k+1)$.
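For the inductive step the last reply asks for, one can argue: each 3-element subset of a $(k+1)$-element set either avoids the new element (there are $k(k-1)(k-2)/6$ of these, by the induction hypothesis) or contains it together with a 2-element subset of the remaining $k$ elements (there are $k(k-1)/2$ of these). A short LaTeX write-up of the resulting computation:

```latex
\frac{k(k-1)(k-2)}{6} + \frac{k(k-1)}{2}
  = \frac{k(k-1)(k-2) + 3k(k-1)}{6}
  = \frac{k(k-1)\bigl((k-2)+3\bigr)}{6}
  = \frac{(k+1)k(k-1)}{6},
```

which is exactly the claimed count for $n = k+1$.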
Electric flux through a cube

Q: I don't really understand why the electric flux through the top and bottom faces of the cube is zero. Is it because the angle between the face and the electric field is 90?

A: Yes. The flux is [tex]\oint \vec{E} \cdot d\vec{A}[/tex]. The dot product can be rewritten with a cosine. Since the angle between the field and the surface's area vector is 90 degrees, the flux resolves to zero.

A: The electric field is piercing the surface at an angle. Draw a diagram. Since you know that [tex]\Phi = \oint \vec{E} \cdot d\vec{A}[/tex], evaluate that integral (which, in this simplified case, can be written as [tex]\Phi = EA\cos{\theta}[/tex]). The angle is not 180.
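A quick numerical check of Phi = E*A*cos(theta) face by face (illustrative Python; a uniform field along x and a unit cube are assumed):

```python
# Uniform field of magnitude 3 along x; unit cube with outward face normals.
E = (3.0, 0.0, 0.0)

faces = {
    "right":  (1, 0, 0),  "left":   (-1, 0, 0),
    "back":   (0, 1, 0),  "front":  (0, -1, 0),
    "top":    (0, 0, 1),  "bottom": (0, 0, -1),
}

def flux(field, normal, area=1.0):
    """Phi = (E . n) * A, equal to E*A*cos(theta) for a flat face."""
    return area * sum(f * n for f, n in zip(field, normal))

fluxes = {name: flux(E, n) for name, n in faces.items()}
print(fluxes["top"], fluxes["bottom"])   # 0.0 0.0 -- field is parallel to these faces
print(sum(fluxes.values()))              # 0.0 -- no enclosed charge
```

The top and bottom faces give zero because the field is perpendicular to their area vectors (cos 90 = 0), and the fluxes through the left and right faces cancel.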
Show that the following process will find a factorization. (April 28th 2010)

Q: We know from a theorem that an odd integer N that can be written as a^2+b^2 = c^2+d^2 in two different ways cannot be a prime. Show that the following process will find a factorization. Assume we label them so that a, b, c, d are all positive, a, c are odd, b, d are even, and a is not equal to c.

1) Set u = gcd(a-c, d-b) and w = gcd(a+c, d+b). Prove that a-c = lu and d-b = mu and a+c = mw and d+b = lw for some l and m.

2) Now show that N = [(u/2)^2 + (w/2)^2]*[m^2 + l^2].

A: This link (Wikipedia article: Euler's factorization method) has what you're looking for.
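The two steps of the exercise can be checked numerically. The sketch below (illustrative; variable names follow the problem statement) computes u, w, l, m and the factorization for a concrete N with two representations, here 221 = 11^2 + 10^2 = 5^2 + 14^2:

```python
from math import gcd

def euler_factor(a, b, c, d):
    """Given N = a^2 + b^2 = c^2 + d^2 with a, c odd, b, d even, a != c,
    return the two factors [(u/2)^2 + (w/2)^2] and [m^2 + l^2]."""
    n = a * a + b * b
    assert n == c * c + d * d
    assert a % 2 == c % 2 == 1 and b % 2 == d % 2 == 0 and a != c
    u = gcd(a - c, d - b)          # a - c = l*u,  d - b = m*u
    w = gcd(a + c, d + b)          # a + c = m*w,  d + b = l*w
    l = (a - c) // u
    m = (d - b) // u
    f1 = (u // 2) ** 2 + (w // 2) ** 2   # u and w are both even here
    f2 = m * m + l * l
    assert f1 * f2 == n
    return f1, f2

print(euler_factor(11, 10, 5, 14))   # (17, 13): indeed 221 = 17 * 13
```

In this example u = 2, w = 8, l = 3, m = 2, and one can check part 1 directly: a+c = 16 = m*w and d+b = 24 = l*w.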
Graphs with tiny vector chromatic numbers and huge chromatic numbers

Feige, Uriel and Langberg, Michael and Schechtman, Gideon (2004) Graphs with tiny vector chromatic numbers and huge chromatic numbers. SIAM Journal on Computing, 33 (6). pp. 1338-1368. ISSN 0097-5397. Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20111012-111737057

Karger, Motwani, and Sudan [J. ACM, 45 (1998), pp. 246-265] introduced the notion of a vector coloring of a graph. In particular, they showed that every k-colorable graph is also vector k-colorable, and that for constant k, graphs that are vector k-colorable can be colored by roughly Δ^(1 - 2/k) colors. Here Δ is the maximum degree in the graph and is assumed to be of the order of n^δ for some 0 < δ < 1. Their results play a major role in the best approximation algorithms used for coloring and for maximum independent sets. We show that for every positive integer k there are graphs that are vector k-colorable but do not have independent sets significantly larger than n/Δ^(1 - 2/k) (and hence cannot be colored with significantly fewer than Δ^(1 - 2/k) colors). For k = O(log n/log log n) we show vector k-colorable graphs that do not have independent sets of size (log n)^c, for some constant c. This shows that the vector chromatic number does not approximate the chromatic number within factors better than n/polylog(n). As part of our proof, we analyze "property testing" algorithms that distinguish between graphs that have an independent set of size n/k, and graphs that are "far" from having such an independent set. Our bounds on the sample size improve previous bounds of Goldreich, Goldwasser, and Ron [J. ACM, 45 (1998), pp. 653-750] for this problem.

Item Type: Article
Additional Information: © 2004 Society for Industrial and Applied Mathematics.
Received by the editors July 9, 2003; accepted for publication (in revised form) March 2, 2004; published electronically August 6, 2004. We would like to thank Luca Trevisan for his suggestion to analyze edge sampling. This author was supported in part by the Israel Science Foundation (grant 236/02). This author's work was done while studying at the Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel 76100. The work of this author was supported in part by the Israel Science Foundation (grant 154/01).

Funders:
  Israel Science Foundation: grant 236/02
  Israeli Science Foundation: grant 154/01

Subject Keywords: semidefinite programming, chromatic number, independent set, approximation algorithms, property testing
Classification: AMS subject classifications 68R05, 05C15, 90C22
Record Number: CaltechAUTHORS:20111012-111737057
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20111012-111737057
Official Citation: Graphs with Tiny Vector Chromatic Numbers and Huge Chromatic Numbers. Uriel Feige, Michael Langberg, and Gideon Schechtman. SIAM J. Comput. 33, pp. 1338-1368.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 27189
Collection: CaltechAUTHORS
Deposited By: Ruth Sustaita
Deposited On: 13 Oct 2011 14:52
Last Modified: 26 Dec 2012 14:16
Examples of common false beliefs in mathematics.

The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested in the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.

Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are (i) a bounded entire function is constant; (ii) sin(z) is a bounded function; (iii) sin(z) is defined and analytic everywhere on C; (iv) sin(z) is not a constant function. Obviously, it is (ii) that is false. I think probably many people visualize the extension of sin(z) to the complex plane as a doubly periodic function, until someone points out that that is completely wrong.

A second example is the statement that an open dense subset U of R must be the whole of R. The "proof" of this statement is that every point x is arbitrarily close to a point u in U, so when you put a small neighbourhood about u it must contain x.

Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.

Tags: big-list, mathematics-education

I have to say this is proving to be one of the more useful CW big-list questions on the site... – Qiaochu Yuan May 6 '10 at 0:55
The answers below are truly informative.
Big thanks for your question. I have always loved your posts here in MO and wordpress. – Unknown May 22 '10 at 9:04
Wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? That would make for a highly educative and entertaining read. – Suvrit Sep 20 '10 at 12:39
It's a thought -- I might consider it. – gowers Oct 4 '10 at 20:13
Meta created tea.mathoverflow.net/discussion/1165/… – quid Oct 8 '11 at 14:27

176 Answers

I'm not sure how common this is, but it confused me for years. Let $f : \mathbb{C} \to \mathbb{C}$ be an analytic function and $\gamma$ a path in $\mathbb{C}$. In your first class in complex analysis, you define the integral $\int_{\gamma} f(z) dz$. Now let $a(x,y) dx + b(x,y) dy$ be a $1$-form on $\mathbb{R}^2$ and let $\gamma$ be a path in $\mathbb{R}^2$. In your first class on differential geometry, you define the integral $\int_{\gamma} a(x,y) dx + b(x,y) dy$. It took me at least three years after I had taken both classes to realize that these notations are consistent. Until then, I thought there was a "path integral in the sense of complex analysis", and I wasn't sure if it obeyed the same rules as the path integral from differential geometry. (By way of analogy, although I wasn't thinking this clearly, the integral $\int \sqrt{dx^2 + dy^2}$, which computes arc length, is NOT the integral of a $1$-form, and I thought complex integrals were something like this.) For the record, I'll spell out the relation between these notions. Let $f(x+iy) = u(x,y) + i v(x,y)$.
Then $$\int_{\gamma} f(z) dz = \int_{\gamma} \left( u(x,y) dx - v(x,y) dy \right) + i \int_{\gamma} \left( u(x,y) dy + v(x,y) dx \right)$$ The right hand side should be thought of as multiplying out $\int_{\gamma} (u(x,y) + i v(x,y)) (dx + i dy)$, a notion which can be made precise.

I think it is customary to say "contour integral" for the complex analysis gadget, "line integral" for the multivariable calculus gadget, and "path integral" for a (not necessarily rigorously defined) integral over a space of fields. – S. Carnahan♦ Jan 13 '11 at 2:47

"Euclid's proof of the infinitude of primes was by contradiction." That is a very widespread false belief. "Prime Simplicity", Mathematical Intelligencer, volume 31, number 4, pages 44--52, by me and Catherine Woodgold, debunks it. The proof that Euclid actually wrote is simpler and better than the proof by contradiction often attributed to him.

And you'd be surprised how many quite knowledgable PHD's spend decades repeating this mistake to their students, Micheal. – Andrew L Jun 7 '10 at 0:07
Actually, if you read our paper on this, you'll find that I won't be surprised at all. (BTW, my first name is spelled in the usual way, not the way you spelled it.) – Michael Hardy Jun 7 '10 at 3:28
@BlueRaja: I'm assuming "Euler" is a typo and you meant Euclid. Euclid said if you take any arbitrary finite set of prime numbers, then multiply them and add 1, and factor the result into primes, you get only new primes not already in the finite set you started with. The proof that they're not in that set is indeed by contradiction. But the proof as a whole is not, since it doesn't assume only finitely many primes exist. – Michael Hardy Jul 7 '10 at 21:55
This reflects a remarkable maturity and 2 consciousness, if we think that mathematicians started speaking of infinite sets a long time before a well founded theory was settled and paradoxes were solved. Euclid's original proof in my opinion is a model of precision and clearness. It starts: Take e.g. three of them, A, B and Γ . He takes three prime numbers as the first reasonably representative case to get the general construction. – Pietro Majer Jul 20 '10 at 14:51 1 Actually I think the use of three letters was just a notational device. He clearly meant an arbitrary finite set of prime numbers (if he hadn't had that in mind, he couldn't have written that particular proof). – Michael Hardy Jul 20 '10 at 22:43 show 5 more comments up vote 17 down vote 4 This false belief is perhaps caused by the fact that continuity does imply sequential continuity, and sequential adherent points are adherent points. – Terry Tao Sep 27 '10 at show 1 more comment I'm not sure that anyone holds this as a conscious belief but I have seen a number of students, asked to check that a linear map $\mathbb{R}^k \to \mathbb{R}^{\ell}$ is injective, up vote 17 down just check that each of the $k$ basis elements has nonzero image. 10 Higher-level version: $n$ vectors are linearly independent iff no two are proportional. I've seen applied mathematicians do that. – darij grinberg Apr 10 '11 at 18:45 add comment By definition, an asymptote is a line that a curve keeps getting closer to but never touches. The teaching of this false belief at an elementary level is standard and nearly universal. Everybody "knows" that it is true. A tee-shirt has a clever joke about it. In the course of describing the function $f(x) = \dfrac{5x}{36 + x^2}$, I mentioned about an hour ago before a up vote class of about 10 students that its value at 0 is 0 and that it has a horizontal asymptote at 0. One of them accused me of contradicting myself. What of $y = \dfrac{\sin x}{x}$? 
And even 17 down with simple rational functions there are exceptions, although there the curve can touch or cross the asymptote only finitely many times. And $3 - \dfrac{1}{x}$ gets closer to 5 as $x$ grows, vote and never reaches 5, so by the widespread false belief there would be a horizontal asymptote at 5. 1 For this to be a false definition, it would have to be a definition in the first place. And this means you have to define a "curve" first, and then define "get closer" and "touch". – Laurent Moret-Bailly Mar 6 '11 at 16:01 7 @Laurent: It's hard to imagine a comment more irrelevant to what happens in classrooms than yours. – Michael Hardy Mar 7 '11 at 4:38 4 It happens to be the literal meaning of the word asymptote "not together falling". You could say that it is a bad choice of name, but for hyperbolas it worked just fine and then it was mercilessly generalized. – user11235 Apr 8 '11 at 14:35 show 1 more comment This is (I think) a fairly common misconception about maths that arises in connection with quantum mechanics. Given a Hermitian operator A acting on a finite dimensional Hilbert space H, the eigenvectors of A span H. It's easy to think that the infinite dimensional case is "basically the same", or that any "nice" operator that physicists might want to consider has a spanning up vote eigenspace. However, neither the position nor the momentum operator acting on $L^2(\mathbb{R})$ have any eigenvectors at all, and these are certainly important physical operators! Based on 16 down an admittedly fairly small sample size, it seems that it's not uncommon to simultaneously believe that Heisenberg's uncertainty relation holds and that the position and momentum operators vote possess eigenvectors. 1 Yeah, for some reason many physicists are taught exactly no functional analysis... In fact, I know of no "quantum mechanics for physicists" books which use much more than a beginning undergrad level of analysis. 
Though admittedly these details are not so important for doing simple calculations, though they can be important in doing more sophisticated calculations, or understanding, e.g., why field theory works the way it does... – jeremy Jun 1 '10 at 23:33 5 Reciprocally, many mathematicians are taught no quantum mecha... make it, no physics at all! This is shocking, since the biggest impetus to the development of PDEs and functional analysis was given by what? You guessed it, physics. – Victor Protsak Jun 10 '10 at 6:56 add comment "Suppose that two features $[x,y]$ from a population $P$ are positively correlated, and we divide $P$ into two subclasses $P_1$, $P_2$. Then, it cannot happen that the respective features ( $[x_1,y1]$ and $[x_2,y_2]$) are negatively correlated in both subclasses Or more succintly: up vote 16 down vote "Mixing preserves the correlation sign." This seems very plausible - almost obvious. But it's false - see Simpon's paradox show 2 more comments Here's a little factoid: (The Mean-value theorem for functions taking values in $\mathbb{R} ^n$.) If $\alpha : [a,b]\rightarrow \mathbb{R}^n$ is continuous on $[a,b]$ and differentiable on $(a,b)$, then there exists a $c\in (a,b)$ such that $\frac{\alpha (b)-\alpha (a)}{b-a}=\alpha '(c)$ up vote 15 A counterexample is the helix $(\cos (t),\sin (t), t)$ with $a=0$, $b=2\pi$. down vote Another common misunderstanding (although not mathematical) is about the meaning of the word factoid. In fact, the common mistaken definition of the word factoid is factoidal. 10 On the other hand, perhaps the most useful corollary of the mean value theorem is the "mean value inequality": that $|\alpha(b) - \alpha(a)| \le (b-a) \sup_{t \in [a,b]} |\alpha'(t)|$. If you look carefully, most applications of the MVT in calculus are really using this "MVI". The MVI remains true for absolutely continuous functions taking values in any Banach space, and so is probably the right generalization to keep in mind. 
– Nate Eldredge May 6 '10 at 14:37 According to at least one dictionary, there are two different definitions of factoid: (1) an insignificant or trivial fact, and (2) something fictitious or unsubstantiated that is 1 presented as fact, devised especially to gain publicity and accepted because of constant repetition. I am not convinced that the multi-d mean value “theorem” fits either definition. – Harald Hanche-Olsen May 8 '10 at 19:09 show 1 more comment Perhaps the most prevalent false belief in math, starting with calculus class, is that the general antiderivative of f(x) = 1/x is F(x) = ln|x| + C. This can be found in innumerable up vote 15 calculus textbooks and is ubiquitous on the Web. down vote 5 Well, the false belief is correct under the (frequently unspoken) condition that we only speak of antiderivatives over intervals on which the function we're antidifferentiating is "well-behaved" (and I'm not 100% sure what the right technical condition there is; "continuous"?). – JBL Jun 12 '10 at 0:57 6 Really? What about the function F(x) given by ln(x) + C_1, x > 0 F(x) = ln(-x) + C_2, x < 0 for arbitrary reals C_1, C_2 ? (The appropriate technical condition is that an antiderivative be differentiable on the same domain as the function it's the antiderivative of is defined on.) – Daniel Asimov Jun 12 '10 at 4:25 3 In case that wasn't clear: F(x) = ln(x) + C_1 for x > 0, and F(x) = ln(-x) + C_2 for x < 0, where C_1 and C_2 are arbitrary real constants. – Daniel Asimov Jun 12 '10 at 4:29 5 That function is not "nice" on any interval containing 0; on any interval not containing 0, it is of the form you are complaining about. This is exactly my point -- the word "interval" is important to what I wrote! 
– JBL Jun 12 '10 at 19:33

$\mathbb{R}^\times \to \mathbb{R}$ other than $\ln |x| + c$ with derivative $\frac{1}{x}$, I also agree with you; I just happen to think that the actual statement you wrote down is not incorrect but rather has an unwritten assumption built into the word "antiderivative," namely that such a thing is only defined for an interval on which the supposed antiderivative is differentiable. I hope this is clearer (and also correct!). – JBL Jun 12 '10 at 22:13

The gamma function is not the only meromorphic function satisfying $$f(z+1)=z f(z),\qquad f(1)=1,$$ with no zeroes and no poles other than the points $z=0,-1,-2,\dots$. In fact, there is a whole bunch of such functions, which, in general, have the form $$f(z)=\exp(-g(z))\,\frac{1}{z\prod\limits_{m=1}^{\infty} \left(1+\frac{z}{m}\right)e^{-z/m}},$$ where $g(z)$ is an entire function such that $$g(z+1)-g(z)=\gamma+2k\pi i,\qquad g(1)=\gamma+2l\pi i,\qquad k,l\in\mathbb Z,$$ ($\gamma$ is Euler's constant). The gamma function corresponds to the simplest choice $g(z)=\gamma z$.

Conditional probability: Let $X$ and $Y$ be real-valued random variables and let $a$ be a constant. Then $$\mathbb P(X\le Y^2 \mid Y=a) = \mathbb P(X\le a^2).$$ (Here $X\le Y^2$ can be replaced by any statement about $X$ and $Y$.)

Another false belief which I have been asked thrice so far in person is that $$\lim_{x \rightarrow 0} \frac{\sin(x)}{x} = 1$$ even if $x$ is in degrees. I was asked by a student a year and a half back when I was a TA and by a couple of friends in the past 6 months.

maybe not one of the best answers here, but why the down votes? – Yaakov Baruch Feb 23 '11 at 15:08

@downvoters: Kindly provide a reason for the down votes. – user11000 Feb 23 '11 at 15:54

+1. The limit when $x$ is in degrees is an exercise in many calculus textbooks (or equivalently, the derivative of $\sin(x^\circ)$).
Yet, it seems people are slow to pick up on it. Your point was made by Deane Yang in this answer: mathoverflow.net/questions/40082/… (and no one found anything wrong with it then...) – Thierry Zell Feb 27 '11 at 14:45

@JBL, what Sivaram says is taken directly from the question, an example of what is asked for. Granted, this is slightly more advanced. Yet, the second example given, 'open dense sets in R', is (in certain uni-curricula) something that comes up earlier than sin (at the level of rigor needed to talk about limits). @Laurent Moret-Bailly, yes and no: define sind(x) = sin(pi x / 180); to ask what the limit of sind(x)/x is is not meaningless. And, on various calculators pressing 'sin' gives this 'sind' (or at least they have that option). – quid Mar 11 '11 at

@JBL: Well, there are also some universities outside the US ;) This is not standard, yet not unusual though becoming rarer, in certain parts of Europe: In HS one learns about trig. func. in a geom. way; about diff./int. without a formal notion of limit, mainly rat. funct.; in any case that limit wouldn't show up explicitly. (Maybe 'invisibly' if derivatives of trig. functions are mentioned.) Then, at univ. at the very start you take (real) analysis: constr. of the reals, basic top. notions(!), continuity, ..., series of functions, as application power series, and as appl. exp and trig. func. – quid Mar 11 '11 at 17:47

A subgroup of a finitely generated group is again finitely generated.

True for abelian groups, though. – Mark Mar 3 '11 at 21:40

Also true for finite index subgroups of finitely generated groups. – Michalis Mar 4 '11 at 21:18

In descriptive set theory, we study properties of Polish spaces, typically not considered as topological spaces but rather we equip them with their "Borel structure", i.e., the collection of their Borel sets.
Any two uncountable standard Borel Polish spaces are isomorphic, and the isomorphism map can be taken to be Borel. In practice, this means that for most properties we study it is irrelevant what specific Polish space we use as underlying "ambient space"; it may be ${\mathbb R}$, or ${\mathbb N}^{\mathbb N}$, or $\ell^2$, etc., and we tend to think of all of them as "the reals".

In Sur les fonctions representables analytiquement, J. de math. pures et appl. (1905), Lebesgue makes the mistake of thinking that projections of Borel subsets of the plane ${\mathbb R}^2$ are Borel. In a sense, this mistake created descriptive set theory. Now we know, for example, that in ${\mathbb N}^{\mathbb N}$, projections of closed sets need not be Borel. Since we usually call reals the members of ${\mathbb N}^{\mathbb N}$, it is not uncommon to think that projections of closed subsets of ${\mathbb R}^2$ are not necessarily Borel. This is false. Note that closed sets are countable unions of compact sets, so their projections are $F_\sigma$. The actual results in ${\mathbb R}$ are as follows. Recall that the analytic sets are (the empty set and) the sets that are images of Borel subsets of $\mathbb R$ by Borel measurable functions $f:\mathbb R\to\mathbb R$.

• A set is Borel iff it and its complement are analytic.
• A set is analytic iff it is the projection of the complement of the projection of a closed subset of ${\mathbb R}^3$.
• A set is analytic iff it is the projection of a $G_\delta$ subset of $\mathbb R^2$.
• There is a continuous $g:\mathbb R\to\mathbb R$ such that a set is analytic iff it is $g(A)$ for some $G_\delta$ set $A$.
• A set is analytic iff it is $f(\mathbb R\setminus\mathbb Q)$ for some continuous $f:\mathbb R\setminus\mathbb Q\to\mathbb R$. (Note that if $f$ is actually continuous on $\mathbb R$, then $f(\mathbb R\setminus\mathbb Q)$ is Borel.)

(See also here.)
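The $F_\sigma$ remark above can be spelled out in one line (a routine verification, included here for completeness):

```latex
% a closed C \subseteq R^2 is a countable union of compacta,
% and projection commutes with unions
C=\bigcup_{n\in\mathbb N}\bigl(C\cap[-n,n]^2\bigr),
\qquad
\pi(C)=\bigcup_{n\in\mathbb N}\pi\bigl(C\cap[-n,n]^2\bigr).
```

Each $C\cap[-n,n]^2$ is compact, so its continuous image under the projection $\pi$ is compact, hence closed; thus $\pi(C)$ is a countable union of closed sets, i.e. $F_\sigma$. (This is exactly where the argument breaks in ${\mathbb N}^{\mathbb N}$, whose closed sets need not be $\sigma$-compact.)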
In measure-theoretic probability, I think there is sometimes an idea among beginners that independent random variables $X,Y$ should be thought of as having "disjoint support" as measurable functions on the underlying probability space $\Omega$. Of course this is the opposite of the truth.

I think this may come from thinking of measure theory as generalizing freshman calculus, so that one's favorite measure space is something like $[0,1]$ with Lebesgue measure. This is technically a probability space, but a really inconvenient one for actually doing probability (where you want to have lots of random variables with some amount of independence).

+1 nice example! – Gil Kalai May 5 '10 at 11:51

A student this last semester made precisely this mistake, and it was a labor of three people to convince him otherwise. – Andres Caicedo May 17 '10 at 0:28

This disjoint support misconception reinforces the incorrect idea that pairwise independent implies independent. – Douglas Zare Oct 20 '10 at 18:47

In a finite abelian $p$-group, every cyclic subgroup is contained in a cyclic direct summand.

Added for Gowers: Maybe one reason why people fall into this error goes something like this: First you learn linear algebra, so you know about vector spaces, bases for same, splittings of same. Then you run into elementary abelian $p$-groups and recognize this as a special case of vector spaces. Then you learn the pleasant fact that all finite abelian $p$-groups are direct sums of cyclic $p$-groups, and a corresponding uniqueness statement. You notice that all of the cyclic subgroups of order $p^2$ in $\mathbb Z/p^2\times \mathbb Z/p$ are summands, and if you have a certain sort of inquiring mind then you also notice that not every subgroup of order $p$ is a summand: one of them is contained in a copy of $\mathbb Z/p^2$, in fact in all of those copies of it.
Having learned so much, both positive and negative, from the example of $\mathbb Z/p^2\times \mathbb Z/p$, you may think that it shows all the interesting basic features of the general case and overlook the fact that in $\mathbb Z/p^3\times \mathbb Z/p$ there is a $\mathbb Z/p^2$ not contained in any $\mathbb Z/p^3$. In any case, reputable people sometimes make this blunder; it happened to somebody here at MO just the other day.

A projection of a measurable set is measurable. Not only students believe this. I was asked once (the quote is not precise): "Why do you need this assumption of measurability of the projection? It follows from ..."

A polynomial which takes integer values at all integer points has integer coefficients.

Another one seems to be more specific; I just recalled it reading this example. A sub-$\sigma$-algebra of a countably generated $\sigma$-algebra is countably generated.

There is a bijection between the set of [true: prime!] ideals of $S^{-1}R$ and the set of [true: prime!] ideals of $R$ which do not intersect $S$.

In geometric combinatorics, there is a widespread belief that polytopes of equal volume are not scissor congruent (as in Hilbert's third problem) only because their dihedral angles are incomparable. The standard example is a cube and a regular tetrahedron, where dihedral angles are in $\Bbb Q\cdot \pi$ for the cube, and $\notin \Bbb Q\cdot \pi$ for the regular tetrahedron. In fact, things are rather more complicated, and having similar dihedral angles doesn't always help. For example, the regular tetrahedron is never scissor congruent to a union of several smaller regular tetrahedra (even though the dihedral angles are obviously identical). This is a very special case of a general result due to Sydler (1944).
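For orientation, the obstruction behind the scissor-congruence answer above is the classical Dehn invariant; the formulation below is the standard one (stated from memory, so treat the normalization as indicative rather than definitive):

```latex
D(P)=\sum_{e\,\in\,\mathrm{edges}(P)} \ell(e)\otimes\theta(e)
\ \in\ \mathbb R\otimes_{\mathbb Q}\bigl(\mathbb R/\pi\mathbb Q\bigr),
```

where $\ell(e)$ is the edge length and $\theta(e)$ the dihedral angle at $e$. Dehn showed $D$ is a scissor-congruence invariant, and Sydler's theorem says that volume together with $D$ is a complete invariant in dimension 3. Equal dihedral angles alone are therefore not enough: for a union of smaller regular tetrahedra the angles agree with those of the big one, but the edge lengths entering the tensor need not balance.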
To my knowledge, no one has proven that the scheme of pairs of matrices $(A,B)$ satisfying the equations $AB=BA$ is reduced. But whenever I mention this to people someone says "Surely that's known to be reduced!"

(Similar-sounding problem: consider matrices $M$ with $M^2=0$. They must be nilpotent, hence have all eigenvalues zero, hence $Tr(M)=0$. But that linear equation can't be derived from the original homogeneous quadratic equations. Hence this scheme is not reduced.)

Sadly, people rely on technology so much nowadays that it gets increasingly unlikely that it will $\textit{ever}$ be proved. – Victor Protsak Jun 10 '10 at 6:40

A common misbelief for the exponential of matrices is $AB=BA \Leftrightarrow \exp(A)\exp(B) = \exp(A+B)$. While the one direction is of course correct: $AB=BA \Rightarrow \exp(A)\exp(B) = \exp(A+B)$, the other direction is not correct, as the following example shows: $A=\begin{pmatrix} 0 & 1 \\ 0 & 2\pi i\end{pmatrix}$, $B=\begin{pmatrix} 2\pi i & 0 \\ 0 & -2\pi i\end{pmatrix}$, with $AB \neq BA$ and $\exp(A)=\exp(B) = \exp(A+B) = 1$.

A more elementary, and I would bet more common, mistake is to believe that exp(A+B)=exp(A) exp(B) with no hypotheses on A and B. – David Speyer Sep 27 '10 at 13:40

Related to the mistake mentioned by David, the fact that the solution of a vector ODE $x'(t)=A(t)x(t)$ should be $$\left(\exp\int_0^tA(s)\,ds\right)x(0).$$ – Denis Serre Oct 20 '10 at

A stunning, ignorance-based false belief I have witnessed while observing a class of a math education colleague is that there is no general formula for the $n$-th Fibonacci number. I wonder if this false belief comes from conflating the (difficult) lack of formulas for prime numbers with something that is just over the horizon of someone whose interests never stretch beyond high-school math.
Behind a number of the elementary false beliefs listed here there is a widespread tendency among people to give up too easily (maybe when having to read at least to page 2 in a book), or to nourish an ego that allows one to conclude that something is impossible if they cannot do it themselves.

I hope at least your colleague had it right! There is another one along these lines: there is no formula for the sequence $1,0,1,0,1,0,\dots$ Your second paragraph is right on target, but I also think that the specific beliefs you and I mentioned have a lot to do with a very limited understanding of what is a "formula". – Victor Protsak Jun 10 '10 at 7:02

When, as an undergrad, I couldn't solve a problem given to me by the advisor, and asserted that it's "unsolvable", the advisor replied that "solvability of a problem is a function of two arguments: the problem and the solver." – Michael Dec 3 '13 at 1:14

In the past I have found myself making this mistake (probably fueled by the fact that you can indeed extend bounded linear operators), and I think it is common in students with a not-deep-enough topology background:

"Let $T$ be a compact topological space, and $X\subset T$ a dense subset. Take $f:X\to\mathbb{C}$ continuous and bounded. Then $f$ can be extended by continuity to all of $T$."

The classical counterexample is $T=[0,1]$, $X=(0,1]$, $f(t)=\sin\frac1t$. It helps to understand how unimaginable the Stone-Cech compactification is.

Indeed; the key property is uniform continuity. – Nate Eldredge Oct 14 '10 at 14:37

How about this one: $T=[-1,1]$, $X=T-\{0\}$, $f(x)=$ sign of $x$. – Laurent Moret-Bailly Oct 19 '10 at 7:23

Nice! That's certainly a much simpler example. – Martin Argerami Oct 19 '10 at 10:45

In his answer above, Martin Brandenburg cited the false belief that every short exact sequence of the form $$0\rightarrow A\rightarrow A\oplus B\rightarrow B\rightarrow 0$$ must split.
I expect that a far more widespread false belief is that such a sequence can fail to split, when $A$ and $B$ are finitely generated modules over a commutative noetherian ring.

(Sketch of relevant proof: We need to show that the identity map in $Hom(A,A)$ lifts to $Hom(A\oplus B,A)$. Thus we need to show exactness on the right of the sequence $$0\rightarrow Hom(B,A)\rightarrow Hom(A\oplus B,A)\rightarrow Hom(A,A)\rightarrow 0$$ For this, it suffices to localize and then complete at an arbitrary prime $P$. But completion at $P$ is a limit of tensorings with $R/P^n$, so to check exactness we can replace the right-hand $A$ in each Hom-group with $A/P^nA$. Now we are reduced to looking at modules of finite length, and the sequence is forced to be exact because the lengths of the left and right terms add up to the length in the middle. This is due, I think, to Miyata.)

• Many students have the false belief that if a topological space is totally disconnected, then it must be discrete (related to examples already given). The rationals are a simple counterexample, of course.

• It is common to imagine rotation in an $n$-dimensional space as a rotation through an "axis". This is of course true only in 3D; in higher dimensions there is no "axis".

• In calculus, I had some trouble with the following wrong idea: a curve in a plane parametrized by a smooth function is "smooth" in the intuitive sense (having no corners). The curve defined by $(t^2,t^2)$ for $t\ge0$ and $(-t^2,t^2)$ for $t<0$ is the graph of the absolute value function with a "corner" at the origin, though the coordinate functions are smooth. The "non-regularity" of the parametrization resolves the conflict.
• When first encountering the concept of a spectrum of a ring, the belief that a continuous function between the spectra of two rings must come from a ring homomorphism between the rings.

Unfortunately, "smooth" is a word which means whatever its utterer does not want to specify. Differentiable, C^infty, continuous, everything is mixed. – darij grinberg Apr 14 '11 at

I don't think the curve (-t^2,t^2) is the graph of the absolute value function. – Zsbán Ambrus May 2 '11 at 16:36

+1 for the discrete $\neq$ totally disconnected example. – Jim Conant May 4 '11 at 15:12

Discrete $\ne$ totally disconnected is a good one that I thought of today and just had to check to see if it was posted already. It adds to the confusion that every finite subset of a totally disconnected space must have the discrete topology, and that in most topological spaces encountered "in nature," the connected components are open sets. – Timothy Chow Oct 20 '11 at 14:30

False belief: Every commuting pair of diagonalizable elements of $PSL(2,\mathbb{C})$ is simultaneously diagonalizable. The truth: I suppose not many people have thought about it, but it surprised me. Look at $$\left(\matrix{i& 0 \cr 0 & -i\cr } \right), \left(\matrix{0& i \cr i & 0\cr } \right).$$

To me, it is marvellous that the failure of this fact (as opposed to the truth of the corresponding fact for $\operatorname{SL}(2, \mathbb C)$) is a matter of topology; that is, from the point of view of algebraic groups, it comes from the fact that $\operatorname{SL}(2, \mathbb C)$ is simply connected, whereas $\operatorname{PSL}(2, \mathbb C)$ (which I had rather call $\operatorname{PGL}(2, \mathbb C)$) is not (it is at the opposite extreme---`adjoint').
– L Spice Dec 12 '13 at 23:09

Just today I came across a mathematician who was under the impression that $\aleph_1$ is defined to be $2^{\aleph_0}$, and therefore that the continuum hypothesis says there is no cardinal between $\aleph_0$ and $\aleph_1$.

In fact, Cantor proved there are no cardinals between $\aleph_0$ and $\aleph_1$. The continuum hypothesis says there are no cardinals between $\aleph_0$ and $2^{\aleph_0}$.

$2^{\aleph_0}$ is the cardinality of the set of all functions from a set of size $\aleph_0$ into a set of size $2$. Equivalently, it is the cardinality of the set of all subsets of a set of size $\aleph_0$, and that is also the cardinality of the set of all real numbers.

$\aleph_1$, on the other hand, is the cardinality of the set of all countable ordinals. (And $\aleph_2$ is the cardinality of the set of all ordinals of cardinality $\le \aleph_1$, and so on, and $\aleph_\omega$ is the next cardinal of well-ordered sets after all $\aleph_n$ for $n$ a finite ordinal, and $\aleph_{\omega+1}$ is the cardinality of the set of all ordinals of cardinality $\le \aleph_\omega$, etc.) These definitions go back to Cantor.

I retract my above question; to my surprise it indeed seems to be common. Yet, this answer is a duplicate: see an answer of April 16. – quid Oct 6 '11 at 0:50

This example already appears on this very page. mathoverflow.net/questions/23478/… – Asaf Karagila Oct 6 '11 at 12:41

One of the deficiencies of mathoverflow's software is that there is no easy way to search through the answers already posted. Even knowing that the date was April 16th doesn't help. – Michael Hardy Oct 7 '11 at 20:26

@Michael Hardy: You can sort the answers by date by clicking on the "Newest" or "Oldest" tabs instead of the "Votes" tab.
– Douglas Zare Oct 19 '11 at 23:03

There are cases where people know that a certain naive mathematical thought is incorrect but largely overestimate the amount by which it is incorrect. I remember hearing on the radio somebody explaining: "We make five experiments where the probability of success in every experiment is 10%. Now, a naive person will think that the probability that at least one of the experiments succeeds is five times ten, 50%. But this is incorrect! The probability of success is not much larger than the 10% we started with."

Of course, the truth is much closer to 50% than to 10%.

(Let me also mention that there are various common false beliefs about mathematical terms: NP stands for "not polynomial" [in fact it stands for "Nondeterministic Polynomial" time]; the word "Killing" in Killing form is an adjective [in fact it is based on the name of the mathematician Wilhelm Killing]; etc.)

And the Killing field has nothing to do with Pol Pot. – Nate Eldredge May 5 '10 at 14:40

Unfortunately I often slip up in class and say that the Killing vector field $T$ kills the metric term (well, I use the verb "kills" when a differential operator hits something and makes it zero, because, you know, bad terms are always "the enemy"). I'm not sure how much damage I did to the students' impressions... – Willie Wong May 5 '10 at 17:19

"Kills" is one of those terms I hear mathematicians use surprisingly often. The other one is "this guy." I never really understood the prevalence of either. – Qiaochu Yuan May 6 '10 at

"Guy" is a pretty standard English colloquialism for "person"; combine this with humans' tendency to anthropomorphize and this usage is understandable. (Though we shouldn't anthropomorphize mathematical objects, because they hate that.)
– Nate Eldredge May 6 '10 at 14:51

In the only lecture I saw by David Goss he started with "guy", quickly went to something like "uncanny fellow" and then stayed with "sucker" for most of the talk. I don't know what those poor Drinfeld modules had done to him the day before :-) – Peter Arndt May 19 '10 at 12:24

A common belief of students in real analysis is that if $$ \lim_{x\to x_0}f(x,y_0),\qquad\lim_{y\to y_0}f(x_0,y) $$ exist and are both equal to $l$, then the function has limit $l$ at $(x_0,y_0)$. It is easy to show counterexamples. More difficult is to show that the belief $$ \lim_{t\to 0}f(x_0+ht,y_0+kt)=l\quad\forall\;(h,k)\neq(0,0)\quad\Rightarrow\quad\lim_{(x,y)\to(x_0,y_0)}f(x,y)=l $$ is also false. For completeness's sake (presumably anybody who ever taught calculus has seen it, but it's easily forgotten) the standard counterexample is $$ f(x,y)=\frac{xy^2}{x^2+y^4} $$ at $(0,0)$.

That counterexample has the advantage of being well-behaved away from $(0,0)$, but the (related) disadvantages of being easily forgotten and requiring a bit of thought to come up with. This can make things look trickier than they are. For this reason, I prefer brain-dead counterexamples like $f(x,y)=1$ if $y=x^2 \neq 0$, $f(x,y)=0$ otherwise. – Chris Eagle Jan 12 '11 at 17:11

@Chris As you know, this is not a "real function" to the minds of calculus students. – Ryan Reich Jan 2 at 3:04

Piggybacking on one of Pierre's answers, I once had to teach beginning linear algebra from a textbook wherein the authors at one point stated words to the effect that the trivial vector space {0} has no basis, or that the notion of a basis for the trivial vector space makes no sense. It is bad enough as a student to generate one's own false beliefs without having textbooks presenting falsehoods as facts.
My personal belief is that the authors of this text actually know better, but they don't believe that their students can handle the truth, or perhaps that it is too much work or too time-consuming on the part of the instructor to explain such points. Whatever their motivation was, I cannot countenance such rationalizations. I told the students that the textbook was just plain wrong.

Bjorn Poonen once gave a lecture at MIT about the empty set; it really opened my eyes. If someone wrote a textbook or something on the matter I think everyone would be a lot less confused. – Qiaochu Yuan Jul 7 '10 at 23:56

For most of the history of civilization, zero was very controversial... – Victor Protsak Jul 9 '10 at 4:12

I can combine Qiaochu's and Victor's remarks in this memory I have of a coffee break conversation between two colleagues, who were arguing on whether it made sense to say that the 1-element group acts on the empty set. I wisely decided to stay out of the controversy... – Thierry Zell Aug 31 '10 at 2:24

Thierry: of course it makes sense. But the action is not transitive. – ACL Dec 1 '10 at 22:53

I once taught abstract algebra from a book that adopted the artificial convention that the domain of a map of sets must be nonempty. I eventually figured out that the reason was in order to be able to say that every one-to-one map has a left inverse. And I have many times taught topology from a book that adopts the artificial convention that when speaking of the product of two spaces we require both spaces to be nonempty. I eventually figured out that the reason was in order to be able to say that $X\times Y$ is compact if and only if both $X$ and $Y$ are compact. – Tom Goodwillie Mar 14 '12 at 22:01
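For the record, the convention that the textbook in the last answer gets wrong: the trivial space does have a basis, namely the empty family. The one-line justification:

```latex
\operatorname{span}(\varnothing)=\{\,\text{the empty sum}\,\}=\{0\},
\qquad
\varnothing \text{ is linearly independent vacuously},
```

so $\dim\{0\}=0$, which is exactly what formulas like rank-nullity require to come out right.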
Math Help

If f(x) = sqrt(x), g(x) = x/(x-1), and h(x) = ^3sqrt(x), find f∘g∘h.

I don't know; I'm so far behind in my knowledge of functions. Here's my guess... g∘h = ^3sqrt(x)/(^3sqrt(x) - 1), and then I guess to plug that into f(x), would you just put the whole thing under another square root sign? I'm sorry, I know I probably sound really stupid with this, but that is the way I feel. I'm just not comprehending this.

There's no reason to feel stupid about it; functions are something that just eventually click and you get it... Let's talk basics. If you have f(x) = ln(x) and g(x) = x^2, then f(g(x)) = ln(x^2). All you basically want to do is plug in g(x) as x in f(x). So if we have h(x) as sqrt(x), then f(g(h(x))) = ln((sqrt(x))^2), right? I didn't do your example because I wanted you to follow the process.

I really thought that's what I was doing. I took h(x) because that is in the middle; I think that's where you're supposed to start. So h(x) = ^3sqrt(x) (by the way, does this mean the cube root of x?). Anyway, then I substitute g(x) into the h(x), so I get ^3sqrt(x/(x-1)). Then I take f(x) and substitute where the x's are, so I would have ^3sqrt(sqrt(x)/(sqrt(x)-1)). Am I even on the right track?

I think you're working backwards. I think that ^3sqrt(x) you are trying to say is the cube root of x... let's call it x^(1/3). So g(x) = x/(x - 1), thus g(h(x)) = x^(1/3) / (x^(1/3) - 1), and f(g(h(x))) is going to be that in a square root: sqrt( x^(1/3) / .....)
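For readers who want to check the thread's final answer numerically, here is a small Haskell sketch (the names and the use of `**` for the cube root are my choices, not the thread's; Haskell's `(.)` operator performs exactly the "plug the inner result into the outer function" step discussed above):

```haskell
-- f(x) = sqrt x, g(x) = x/(x-1), h(x) = cube root of x, as in the thread
f, g, h :: Double -> Double
f x = sqrt x
g x = x / (x - 1)
h x = x ** (1 / 3)

-- (f . g . h) x means f (g (h x)): work from the innermost function outward
fogoh :: Double -> Double
fogoh = f . g . h
```

For example, at x = 8 the cube root is 2, g gives 2/(2-1) = 2, and f gives sqrt 2, so `fogoh 8` is approximately 1.41421.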
Simplifying Complex Numbers (i)

March 18th 2008, 01:28 PM

Okay, I think I did this right, but I'm going to post it here just to make sure. Simplify the complex number $i^{31}$ as much as possible. Here's what I did: So would my final answer just be i? (assuming I did that right)

March 18th 2008, 01:40 PM

The method is correct, but not the result:
i^3 = -i
i^4 = 1

March 18th 2008, 01:43 PM

I don't understand how the result is incorrect. Shouldn't it be simply $i$? What isn't right?

March 18th 2008, 01:53 PM

This step is strange. This means you suppose i^(6*5) = 1, which is false. You may write that $i^{31}=i^{28}i^3 = (i^4)^7 i^2 i$. $i^4 = 1$, as I've shown you. So what's the result?

March 18th 2008, 01:56 PM

So would it be (1^7)i ?

March 18th 2008, 01:57 PM

Learn these four numbers $i,\,i^2 = - 1,\,i^3 = - i,\,i^4 = 1$. Now divide the exponent by 4 and take the remainder. Thus $i^{31} = i^3 = - i$ because 31 divided by 4 leaves a remainder of 3. Here is why that works: $i^{31} = i^{28 + 3} = \left( {i^4 } \right)^7 \left( {i^3 } \right) = \left( 1 \right)^7 \left( {i^3 } \right) = - i$

March 18th 2008, 02:05 PM

March 18th 2008, 09:17 PM

Plato is right. When dealing with imaginary numbers, learn those 4 powers. If the power you are to raise i to exceeds 4, you want to use $i^4$ and see what the remainder of powers is after you factor out a 4. Either 4 divides in evenly or it doesn't. If it does not, you are left with either $i^1$, $i^2$, or $i^3$. See here: Imaginary Numbers
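The remainder-mod-4 rule from the last two posts is easy to check mechanically; here is a Haskell sketch using `Data.Complex` (the function name is mine, not the thread's):

```haskell
import Data.Complex

-- powers of i cycle with period 4 (i, -1, -i, 1),
-- so reduce the exponent mod 4 before exponentiating
powI :: Int -> Complex Double
powI n = (0 :+ 1) ^ (n `mod` 4)
```

So `powI 31` reduces the exponent 31 mod 4 to 3 and returns $i^3 = -i$, matching the answer in the thread.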
vanishing of cohomology sheaves with supports and values in the multiplicative group

Let $X$ be a locally noetherian regular scheme and $Y$ be a closed subscheme of codimension $d > 0$ in every point. Why does it "immédiatement" (immediately; Grothendieck, Groupe de Brauer III, §6, p. 133 f.) follow that the local cohomology sheaves with supports $\mathcal{H}^i_Y(X,\mathbf{G}_m)$ vanish for $0 \leq i \leq 2$ (if $d \neq 1$ for $i=1$)?

(For $i \neq 0$, it would be clear to me if there were no supports: the stalks are zero since the Picard group of a local ring and the Brauer group of a strictly henselian local ring vanish.)

etale-cohomology cohomology ag.algebraic-geometry

These vanishing statements are equivalent to the following statements: put $j:Y\hookrightarrow X$ and $U:=X-Y$. Then (i) $\mathbf{G}_m\rightarrow j_{*}(\mathbf{G}_m|_U)$ is an isomorphism (this will give vanishing for $i=0,1$), and (ii) $\mathrm{R}^1j_{*}(\mathbf{G}_m|_U)=0$ (this will give vanishing for $i=2$). – Mahdi Majidi-Zolbanin Jan 31 '12 at 20:06

I don't get it: $\mathbf{G}_m|_U$ is a sheaf on $U = X \setminus Y$, but the domain of $j$ is $Y$, not $U$. – Timo Keller Feb 1 '12 at 14:13

Yes, you are right, $j$ is $U\hookrightarrow X$. – Mahdi Majidi-Zolbanin Feb 1 '12 at 15:25

1 Answer
Otherwise, the restriction functor from the category of line bundles over $X$ to that over $U$ is an equivalence of categories; see vote SGA 2, XI.3. @Keerthi: are you establishing vanishing of groups $H_Y$ or sheaves $\mathcal{H}_Y$? – Mahdi Majidi-Zolbanin Jan 31 '12 at 20:15 I think the same argument should show that the stalks of the sheaves $\mathcal{H}_Y$ are $0$, though I haven't thought it through. – Keerthi Madapusi Pera Jan 31 '12 at 23:06 1 In the codimension 1 case, the reason is not that "$U$ is affine" (affine schemes may have nontrivial Picard groups) but that $U$ is a point. – Laurent Moret-Bailly Feb 1 '12 at 7:52 Thanks for the correction. – Keerthi Madapusi Pera Feb 1 '12 at 16:35 add comment Not the answer you're looking for? Browse other questions tagged etale-cohomology cohomology ag.algebraic-geometry or ask your own question.
[Haskell-cafe] Newbie: Is 'type' synonym hiding too much?

Dmitri O. Kondratiev dokondr at gmail.com
Thu Mar 22 11:13:00 EDT 2007
Then I do this substitution *myself, as the Haskell runtime* and as a result get the following declaration of the *real function that the Haskell runtime* works with: succeed :: b -> [a] -> [(b, [a])] Great! This last declaration matches perfectly with the function definition: succeed val inp = [(val, inp)] So I start feeling better; after all, it looks like my understanding of Haskell function declarations is not flawed too much. Well, but here come my main questions! So: 1. Should I work as a macro translator every time I see *!any!* function declaration? 2. Should I search through the main and imported modules for treacherous 'type' synonyms? 3. Where, in this case, does the implementation abstraction principle go? Why must I know *all* the details of a function's argument type structure in order to understand how this function works? Another example of a similar function from the book is: alt :: Parse a b -> Parse a b -> Parse a b alt p1 p2 inp = p1 inp ++ p2 inp In the function definition I see three parameters: p1 – matches the function declaration perfectly p2 – matches the function declaration perfectly inp – how to match this parameter with the function declaration? I can match the 'inp' parameter with the 'alt' function declaration *only* after working as a macro processor that expands the type synonym 'Parse a b' into '[a] -> [(b, [a])]', getting the *real* declaration: alt :: ([a] -> [(b, [a])]) -> ([a] -> [(b, [a])]) -> [a] -> [(b, [a])] which matches alt p1 p2 inp = p1 inp ++ p2 inp where 'inp' matches the *last* '[a]' argument. It seems that the life of a "human macro processor" becomes really hard when non-trivial functions with 'type' synonym arguments come into play! Where am I wrong? Please enlighten me; I am really at a loss! And thanks for reading all this! Below I give a code example of these functions.
module Parser where

import Data.Char

type Parse a b = [a] -> [(b, [a])]

none :: Parse a b
none inp = []

succeed :: b -> Parse a b
succeed val inp = [(val, inp)]

suc :: b -> [a] -> [(b, [a])]
suc val inp = [(val, inp)]

spot :: (a -> Bool) -> Parse a a
spot p [] = []
spot p (x:xs)
  | p x = [(x, xs)]
  | otherwise = []

alt :: Parse a b -> Parse a b -> Parse a b
alt p1 p2 inp = p1 inp ++ p2 inp

bracket = spot (=='(')
dig = spot isDigit

t1 = (bracket `alt` dig) "234"
Part II Part II: Galileo's Analysis of Projectile Motion Galileo brought his lifetime of insight as an experimenter -- and mathematician -- to a conclusion in his greatest work, published in 1638, the Dialogues of the Two New Sciences. Here, in the second half of the book, he took up the question of projectile motion. This illustration reflects the general opinion before Galileo which followed largely Aristotelian lines but incorporating as well a later theory of "impetus" -- which maintained that an object shot from a cannon, for example, followed a straight line until it "lost its impetus," at which point it fell abruptly to the ground. Later, simply by more careful observation, as this illustration from a work by Niccolo Tartaglia clearly shows, it was realized that projectiles actually follow some sort of a curved path, but what sort of curve? No one knew until Galileo. It was another essential insight that led Galileo, finally, to his most remarkable conclusion about projectile motion. First of all, he reasoned that a projectile shot from a cannon is not influenced by only one motion, but by two -- the motion that acts vertically is the force of gravity, and this pulls the projectile down by the times-squared law. But while gravity is pulling the object down, the projectile is also moving forward, horizontally at the same time. And this horizontal motion is uniform and constant according to his principle of inertia. But could he demonstrate this? In fact, by using his inclined plane again, Galileo was indeed able to demonstrate that a projectile is subject to two independent motions, and these combine to provide a precise sort of mathematical curve. What would happen if, instead of rolling along the horizontal plane, the ball were now allowed to simply fall freely once it got to the bottom of the plane? 
If Galileo were correct about the horizontal and vertical motions being independent, it would still continue to move horizontally with a uniform, constant speed, but gravity would now begin to pull it down vertically at the same time, the distance increasing proportionally to the square of the time elapsed... and this is exactly what Galileo found. You can see the experiment simulated in the computer animation linked to the picture above. You will notice how the path of the ball traces an exact curve like the one below. Here is a page from one of Galileo's manuscripts in which he writes down the figures he obtained in performing this experiment himself. What he actually comes to see is that, in fact, the curve has an exact mathematical shape -- it is one the Greeks had already studied and called the parabola. The extraordinary conclusion Galileo reached in this book on the Two New Sciences is that the path any projectile follows is a parabola, and he drew exact consequences from this discovery which, as he said, could only have been achieved by the sort of exacting analysis that mathematics made possible.
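Galileo's two independent motions can be checked numerically: combine a uniform horizontal motion with a times-squared vertical fall, and every sampled point lands on the parabola y = (g / (2v²)) x². A minimal sketch (the horizontal speed and the value of g are illustrative assumptions, not from the text):

```python
# Sketch: combine uniform horizontal motion with times-squared free fall
# and check that the resulting trajectory is a parabola.
v = 5.0   # horizontal speed, m/s (assumed for illustration)
g = 9.8   # gravitational acceleration, m/s^2

def position(t):
    """Horizontal distance grows linearly; vertical drop grows as t squared."""
    x = v * t            # uniform, constant horizontal motion (inertia)
    y = 0.5 * g * t**2   # distance fallen, proportional to time squared
    return x, y

# Eliminating t from x = v*t and y = g*t^2/2 gives y = (g / (2*v**2)) * x**2,
# i.e. a parabola. Check it at several sample times:
for i in range(1, 20):
    t = 0.1 * i
    x, y = position(t)
    assert abs(y - (g / (2 * v**2)) * x**2) < 1e-9
```

The assertion never fires: the parabola is an algebraic consequence of the two independent motions, just as Galileo argued.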
Confusing Circle Geometry October 7th 2008, 01:03 AM #1 In the diagram below, AC is the diameter of the circle AECFG with centre O, and BD is a tangent to the circle at C. (ii) Prove that DFEB is a cyclic quadrilateral. (The first part links to this question - it involved me proving that angle FEC = angle FCD.) Could someone please give me a hint on how to start this proof? Last edited by xwrathbringerx; October 8th 2008 at 05:22 PM.
Summary: A Lower Bound on the Expected Length of 1-1 Codes Noga Alon Alon Orlitsky February 22, 2002 We show that the minimum expected length of a 1-1 encoding of a discrete random variable X is at least H(X) - log(H(X)+1) - log e, and that this bound is asymptotically achievable. 1 Introduction Let X be a random variable distributed over a countable support set X. A (binary, 1-1) encoding of X is an injection φ : X → {0,1}*, the set of finite binary strings. The expected number of bits φ uses to encode X is E|φ(X)| = Σ_x Pr(x) |φ(x)|, where Pr(x) is the probability that X = x and |φ(x)| is the length of φ(x). A string x1, . . . ,xm is a prefix of a string y1, . . . ,yn if m ≤ n and xi = yi for i = 1, . . . ,m.
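The bound is easy to check numerically. An optimal 1-1 code assigns the shortest binary strings (the empty string, then "0", "1", "00", ...) to symbols in order of decreasing probability, so the i-th most probable symbol (1-indexed) gets a string of length ⌊log₂ i⌋. A sketch, assuming base-2 logarithms throughout:

```python
import math

def min_expected_length(probs):
    """Expected length of the optimal 1-1 (not prefix-free) binary code:
    sort probabilities in decreasing order; the i-th symbol (1-indexed)
    receives a string of length floor(log2(i))."""
    ordered = sorted(probs, reverse=True)
    # i.bit_length() - 1 == floor(log2(i)) exactly, for integer i >= 1
    return sum(p * (i.bit_length() - 1) for i, p in enumerate(ordered, start=1))

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def alon_orlitsky_bound(probs):
    h = entropy(probs)
    return h - math.log2(h + 1) - math.log2(math.e)

# Uniform distribution on 8 symbols: H = 3 bits; optimal 1-1 lengths are
# 0,1,1,2,2,2,2,3, so the expected length is 13/8 = 1.625 bits.
p = [1 / 8] * 8
assert abs(min_expected_length(p) - 1.625) < 1e-12
assert min_expected_length(p) >= alon_orlitsky_bound(p)
```

Note how far the 1-1 expected length (1.625 bits) can fall below the entropy (3 bits), which is why the lower bound must subtract the log(H+1) and log e terms.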
A Guide to Functional Analysis This is the ninth entry in the MAA’s Guide series, a subseries of the Dolciani Mathematical Expositions. The idea behind this series is to make topics in mathematics accessible to mathematically sophisticated non-specialists who are looking for an opportunity to get a quick overview of the subject. Graduate students studying for qualifying exams would be an obvious example of this cohort, and most or all of the books in this series (this one included) specifically mention that particular target audience. This series should also appeal to people whose student days are behind them but are not specialists in a given area, and who would like to either refresh their memory of a particular subject or get exposed to the main ideas quickly and efficiently. This is an excellent set of books, and I am pleased to have had the opportunity to review the last four of them. They are all quite slim, and written by people who have both expertise in the subject matter and also a knack for succinct, elegant mathematical exposition. The ability to write like this is not insignificant; Blaise Pascal, after all, once famously apologized for writing a long letter by saying he didn’t have time to write a short one. Steven Krantz, the author of this book, has had a lot of practice in writing succinctly: in addition to his other books, this is his fourth entry in the Guide series (the others are on complex variables, real variables and topology). This book (barely over a hundred pages of text) is very short, even by the standards of this series, but nevertheless addresses most or all of the standard topics that one would expect to see in an introductory graduate-level semester in functional analysis, and perhaps even one or two things that might not get mentioned. 
More specifically, the first chapter starts with normed linear spaces, then defines Banach spaces and discusses the “big three” results typically associated with them (Uniform Boundedness, Open Mapping, Hahn-Banach). This is followed by chapters on the dual space, Hilbert space, the algebra of bounded linear operators on a Banach space (including a fairly lengthy section on compact operators), and Banach algebras. The author then generalizes things by discussing (chapter 6) arbitrary topological vector spaces. The four remaining chapters of the text discuss, in order, distributions, spectral theory (for bounded, particularly bounded normal, operators on a Hilbert space; some background in measure theory is needed for this chapter), convexity (including the Krein-Milman theorem), and fixed point theorems (the contraction mapping principle and the Schauder theorem). The author has taken pains to make the book accessible. Measure theory, as noted above, is used in the chapter on spectral theory and also in some examples, but other than that a good grounding in real analysis and linear algebra should get a student through most of the book. There is some inconsistency, however, in the expectations of the audience’s background: at one point, for example, the author feels the need to remind the reader of the definition of a metric space, but later in the book we read “We interpret derivatives in a Banach space in the usual Fréchet sense.” My guess is, however, that most of the people reading this book will already have been exposed to functional analysis and will be looking to refresh their memory rather than learn the material for the first time, so occasional statements like this should not prove troublesome. Like other books in the Guide series, this one contains a good selection of examples, which I think is crucial. Another particularly nice feature of this book is the attention paid to applications of functional analysis, which even longer books frequently overlook. 
As some (non-exhaustive) examples, we see here, for example, the Uniform Boundedness theorem used to prove the existence of a broad class of functions with divergent Fourier series, the Hahn-Banach theorem invoked to establish the existence of the Green’s function for smoothly bounded domains in the plane, and the contraction mapping principle used both to establish an existence-uniqueness theorem for differential equations and to give an elegant proof of the implicit function theorem. Unlike a number of other books in this series, however, this one also contains many proofs, even when they are decidedly non-trivial: proofs are given (or at least succinctly sketched), for example, of the Baire Category theorem, the “big three” results mentioned earlier, the spectral radius theorem for Banach algebras, the Schauder fixed point theorem, and other sophisticated results. Opinions may certainly vary, but I am inclined to think that in very short books, particularly books in this Guide series, it is probably better to omit most or all proofs in favor of examples and broad, intuitive explanations of why something should be true. (The Guide to Groups, Rings and Fields refers to such explanations as “shadows of proofs”.) My feeling is that most people who want or need an actual proof in the first place will want to read one with the details spelled out, even at the expense of succinctness. Even excellent expositors cannot perform the impossible, and making proofs of difficult results completely comprehensible in a subject like functional analysis, in the space of a hundred pages or so, may well be asking for the impossible. Despite Krantz’s considerable skills in this area, there were times when, reading this book, I found the exposition a bit too concise for my taste. 
I would have, for example, appreciated a bit more detail and a bit more motivation in the section on the spectral theorem; George Simmons, in his book Introduction to Topology and Modern Analysis, does not prove the integral version of the spectral theorem that is proved here, but does spend a page or two explaining how it is a generalization of the familiar result (for finite-dimensional normal operators) that one learns about in sophisticated linear algebra courses. But, as I said, this is a subject on which people can reasonably disagree, and I am certainly not going to be critical of a person who attempts, even without complete success, to provide succinct proofs of theorems. If nothing else, such proofs give an overview of how the result is proved, even if all the details are not spelled out. This book continues the tradition of high-quality expositions that have characterized every other Guide that I have looked at. This series in general, and this book in particular, deserve, and I hope will get, a wide audience. I also hope that I will be able to snag the tenth book in this series to review when it becomes available. Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.
Forex Secret – Currency Pair Reversal Points – Pivot Points As you may already know, the foreign currency market is a way to generate extra money. Nevertheless, understanding the correct way to trade Forex is important: you can lose your money without a method in place. Here are some ideas to help you on your way to trading Forex successfully. The currency pair pivot point is one of the keystones of trading at Forex. First of all, let us introduce the following designations (notions), necessary for the subject. "High" is the maximum of the previous day; "Low" is the minimum of the previous day; "Close" is the price of closing of the previous day. Generally speaking, there are three principal criteria. 1. There is the stock reserve – i.e., the difference between Low and High per trading session. For instance, as regards the GBP/USD pair, this difference can exceed 100 points in a trading day. 2. The reader must also consider the reversal point of the currency pair movement (the pivot point) in the daily trading session. Thus, it is easy to calculate the possible profit that could be gained by a trader regularly. 3. If "the trend is the friend" (see Book 1), it is necessary to work along the trend direction. Under these conditions, the detection of the trend pivot points can prevent losses that could be conditioned by the following factors: · A change in the trend direction. · Besides, this conception of the trend pivot points permits us to understand when a deal must be opened in a new trend – i.e., at the beginning of the currency pair movement, not in the middle of it. The author especially does not recommend opening a deal at the end of a new trend. Briefly put, the skill of detecting the real pivot point is necessary for gaining profit at Forex regularly (unfortunately, knowledge of it alone is insufficient). The given system makes the foundation of the Pivot Points tactics, well known all over the world.
The pivot point can be calculated according to the formula:

Pivot = (High + Low + Close) / 3

(the designations introduced are submitted above). After the calculation of Pivot, one can determine the levels of resistance and support according to the formulae given below:

R1 = 2*Pivot - Low
S1 = 2*Pivot - High
R2 = Pivot + (R1 - S1)
S2 = Pivot - (R1 - S1)
R3 = High + 2*(Pivot - Low)
S3 = Low - 2*(High - Pivot)

Here R1, R2, R3 are the levels of resistance; S1, S2, S3 are the levels of support. Thus, in its essence, the Pivot Points tactics is binary (binomial). That is, the next move is the logical continuation of the previous one. The point of reversal (pivot) is the keystone of this movement. As the trend goes on, the point of reversal (pivot) of the given trend is shifted. Not without reason have all first-rate banks and fund institutions made use of such simple calculations for 50 years and more. Briefly put, this classical tactics of Pivot Points is well known all over the world. However, its application still could not change the ratio of successful traders to losers (1/20). Now the reader must try to see the drawbacks of the classical method of detecting Pivot Points. The goal is to understand the advantages of the Pivot Points technique according to Masterforex-V. 1. How can one pick out an appropriate time frame for calculating the maximum (or minimum) and the price of closing? One must keep in mind that the Forex market is functioning twenty-four hours a day. That is, in Europe, America and Asia pivots are different under the same conditions. The reason is that the three variables mentioned (High, Low, Close) are different in various time zones. Let us emphasize again. "High" is the maximum of the previous day; "Low" is the minimum of the previous day; "Close" is the price of closing of the previous day. For instance, one can take a look at a chart that depicts the USD/JPY pair movement during May 22-24, 2006.
There it is clearly depicted that the next-day pivots in Moscow, Tokyo, London and New York would be cardinally different. Evidently, this is conditioned by the difference in calendar days. Consequently, all three components of the classical Pivot Points enter the above-submitted expression (High+Low+Close)/3. Chart 2.4.1 (image in the original article). The pivot points are calculated arithmetically. The result is rather an arithmetic-mean magnitude (like a moving average) than the determination of a real point after whose crossing the currency logically makes a spurt (jump) in the opposite direction. For instance, the pivot arithmetic-mean magnitude can be equal to 50% of the recoil. As is evident, this value cannot be helpful in a flat. What is more, it can even be harmful in the flat if the recoil could reach 62% and 76%. For instance, a trader can open a deal at a 50% recoil against the trend, while at a 62% recoil the currency makes a U-turn (reversal) towards the continuation of the previous trend. As an example, the reader can look at Chart 2.4.2. This figure clearly indicates that on June 6, 2006 EUR/USD fell from the local maximum at 1.2981 down to 1.2922. After this, it rose by 76% – up to 1.2962. Further, within the intra-day trend, the currency pair descended to the point 1.2594. This makes approximately 400 points. Chart 2.4.2 (image in the original article). In addition, the reader must take into account the following factor. During a day a currency can cross the pivot point in different directions several times. This is why the classical pivot point cannot be regarded as a real point at which deals should be opened. As an example, let us examine the EUR/USD pair movement on June 14, 2006 (see Chart 2.4.3 – the M15 chart). Starting from the currency pair movement on June 13, 2006, the pivot is (1.2617 + 1.2529 + 1.2545)/3 = 1.2564. Chart 2.4.3 (image in the original article).
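The arithmetic of the June 13, 2006 example, together with the classical support and resistance formulae quoted earlier, can be packaged in a short script (a sketch; the input figures are the ones from the example above):

```python
def pivot_levels(high, low, close):
    """Classical floor-trader pivot point with three resistance (R1-R3)
    and three support (S1-S3) levels, as given in the article."""
    pivot = (high + low + close) / 3
    r1 = 2 * pivot - low
    s1 = 2 * pivot - high
    r2 = pivot + (r1 - s1)
    s2 = pivot - (r1 - s1)
    r3 = high + 2 * (pivot - low)
    s3 = low - 2 * (high - pivot)
    return {"pivot": pivot, "r1": r1, "r2": r2, "r3": r3,
            "s1": s1, "s2": s2, "s3": s3}

# EUR/USD, June 13, 2006 (figures from the example above)
levels = pivot_levels(high=1.2617, low=1.2529, close=1.2545)
assert round(levels["pivot"], 4) == 1.2564
# Supports sit below the pivot, resistances above it:
assert levels["s3"] < levels["s2"] < levels["s1"] < levels["pivot"]
assert levels["pivot"] < levels["r1"] < levels["r2"] < levels["r3"]
```

Note that this reproduces only the classical calculation being criticized here, not the Masterforex-V variant discussed later.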
A pivot must be dynamical. The author states the following. A currency pair can pass through 70-100 points in the European trading session. At the American session, the pivot must change its value – as the true (real) point of reversal, for instance, at the beginning of the reversal correction from the pivot's previous value. Under such conditions, a trader can close his deals before the beginning of the reversal in question. Otherwise, a trader can keep a deal open along the trend further on (a "long-term" deal). This is possible if the price does not cross the pivot in the reverse (opposite) direction. Let us examine a chart that depicts the GBP/USD pair movement during June 29-30, 2006. As one can see, the currency pair has broken through the pivot point during the weekly trend. However, it has not once crossed the pivot point in the opposite direction during the session trend – notwithstanding the fact that the pair has passed through several hundreds of points during a day and a half. Chart 2.4.4 (image in the original article). Chart 2.4.5 (image in the original article). In different time frames the pivot must indicate different points. One must distinguish the reversal in the intra-day trend from the reversal in the intra-week trend. Then again, a trend lasting several weeks presents a principally different pattern – and so on. However, according to the classical approach to the Pivot Points problem, just one value is considered – i.e., that of the previous day. Hence, there logically arises the following question: the reversal of which trend does the pivot mark? Again, the reader must keep in mind that this pivot is calculated according to the above-given formula (High+Low+Close)/3 on the previous day. R.
Axel (from the Dow Jones Agency) has developed his own technique of pivot calculation, in which the levels of the previous day do not fit the formula (High+Low+Close)/3. This discrepancy also confirms that the classical method of determining Pivot Points is imperfect. One can make the following conclusions. The above-given examples clearly illustrate the principal difference between approaches to the notion of the Pivot Point as a real point of reversal of currency pairs at Forex. That is, there is the Forex classicists' approach and, in contrast to it, Masterforex-V's viewpoint. According to the latter system, the following procedures must be done. 1. One must calculate the correction and reversal in various time frames – starting from the intra-day session (M15) and up to several weeks (D1). This clearly depicts the difference between a correction and a reversal. For instance, the following situations can take place. · The reversal can occur during the session trend when the currency pair movement does not exceed the pivot of the weekly trend; this equals a weekly-session correction, but not a reversal. · The reversal can occur during the session trend when the currency pair movement does exceed the pivot of the weekly trend. This is the first sign of a reversal that can occur within the weekly trend. 2. Such correlation between the two types of trends permits us to do the following. · To gain profit during the session trend. · To understand the duality (binarity) in the direction of the currency pair movement (its continuation or cancellation within a session trend, or longer ones). 3. The 50% recoil indicates not the trend reversal but quantitative changes in it. Implied here is either the further development of the currency pair movement or the given pair's transition to a flat.
According to Masterforex-V, one must correlate these tendencies with other factors – such as the time of movement, the correlation between allied currency pairs, technical levels in various time frames, etc. Now let us regard this problem as it is presented in the Masterforex-V Trading Academy. Again, one must take a look at the chart where the EUR/USD pair movement during June 5-6, 2006 is depicted. The reader must try to detect the Pivot Points by himself: · Pivot Points in the intra-day trend; · Pivot Points in the weekly trend session. This information is expedient. Due to it, one can understand the following facts (and make use of them). 1. One can detect the point at which the "bear" intra-day trend starts. 2. One can detect the point at which the beginning of the "bear" weekly trend can be confirmed for sure. 3. One can see at what points heavy (strong) corrections of the trend – or the trend recoil – could occur. 4. One can understand the conditions for the reversal of the trend and its changing from the "bear" type to the "bull" one. However, this has not happened in the case in question. 5. In addition, a trader must take into account the failure (abolition) of a reversal point. In this regard, one could stay in a deal for a long period. Note: The full text of this article with the example pictures is at http://masterforex-v.su/002_004.htm If you wish to be trained in Trading System Masterforex-V – one of the newest and most effective techniques of Forex trading in the world – visit http://www.masterforex-v.su/ Vyacheslav Vasilevich (Masterforex-V) Professional trader since 2000. President of the Masterforex-V Trading Academy. Author of the books: 1. Trade secrets by a professional trader, or what B. Williams, A. Elder and J. Schwager did not tell traders about Forex. 2. Technical analysis in Trading System Masterforex-V. 3. Entry and Exit Points at the Forex Market.
LED Resistor Calculator To calculate the resistor needed for a simple LED circuit, simply subtract the LED's voltage drop from the source voltage, then apply Ohm's Law: R = (E_s - E_led) / I_led where: E_s is the source voltage, measured in volts (V), E_led is the voltage drop across the LED, measured in volts (V), I_led is the current through the LED, measured in amperes (A), and R is the resistance, measured in ohms (Ω). This calculator is based on the Ohm's Law Calculator, but takes into consideration the voltage drop across the LED. To use the calculator, enter any three known values and press "Calculate" to solve for the remaining one.
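The same calculation in code (a minimal sketch; the example figures — a 9 V source, a 2 V LED drop, and 20 mA of current — are illustrative, not from the page):

```python
def led_resistor(source_v, led_drop_v, led_current_a):
    """Series resistor for a simple LED circuit:
    R = (E_s - E_led) / I_led, i.e. Ohm's Law applied to the voltage
    left over after the LED's drop."""
    if led_current_a <= 0:
        raise ValueError("LED current must be positive")
    if led_drop_v >= source_v:
        raise ValueError("source voltage must exceed the LED drop")
    return (source_v - led_drop_v) / led_current_a

# 9 V supply, 2 V LED drop, 20 mA target current -> 350 ohms
assert abs(led_resistor(9.0, 2.0, 0.020) - 350.0) < 1e-9
```

In practice one would round up to the nearest standard resistor value, which lowers the current slightly below the target.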
Recently, there was a press release and a youtube video from the University of Florida about one of my recent papers on the neural code in the lobster olfactory system, which was also covered by others [e.g. 1, 2, 3, 4]. I decided to write a bit about it from my own perspective. In general, I am interested in understanding how neurons process and represent information in their output, through which they communicate with other neurons and collectively compute. In this paper, we show how a subset of olfactory neurons can be used like a stopwatch to measure temporal patterns of smell. Unlike vision and audition, the olfactory world is perceived through a filament of odor plume riding on top of complex and chaotic turbulence. Therefore, you are not going to be in constant contact with the odor (say the scent of freshly baked chocolate chip cookies) while you search for the source (cookies!). You might not even smell it at all for long periods of time, even if the target is nearby, depending on the air flow. Dogs are well known to be good at this task, and so are many animals. We study lobsters. Lobsters heavily rely on olfaction to track, avoid, and detect odor sources such as other lobsters, predators, and food; therefore, it is important for them to constantly analyze olfactory sensory information to put together an olfactory scene. In the auditory system, the minuscule temporal differences in sound arriving at each of your ears are a critical cue for estimating the direction of the sound source. Similarly, one critical component of olfactory scene analysis is the temporal structure of the odor pattern. Therefore, we wanted to find out how neurons encode and process this information. The neurons we study are a subtype of olfactory sensory neurons. Sensory neurons detect signals and encode them into a temporal pattern of activity, so that they can be processed by downstream neurons. Thus, it was very surprising when we (Dr.
Yuriy Bobkov) found that those neurons were spontaneously generating signals (in the form of regular bursts of action potentials) even in the absence of odor stimuli [Bobkov & Ache 2007]. We wondered why a sensory system would generate its own signal, because the downstream neurons would not know whether the signal sent by these neurons was caused by external odor stimuli (smell) or generated spontaneously. However, we realized that they can work like little clocks. When external odor molecules stimulate the neuron, it sends a signal in a time-dependent manner. Each neuron is too noisy to be a precise clock, but there is a whole population of these neurons, such that together they can measure the temporal aspects critical for olfactory scene analysis. The temporal aspects, combined with other cues such as local flow information and navigation history, can in turn be used to track targets and estimate distances to sources. Furthermore, this temporal memory was previously believed to be formed in the brain, but our results suggest a simple yet effective mechanism in the very front end, the sensors themselves. Applications: Currently, electronic nose technology is mostly focused on discriminating 'what' the odor is. We bring to the table how animals might use the 'when' information to reconstruct the 'where' information, putting together an olfactory scene. Perhaps it could inspire novel search strategies for odor tracking robots. Another possibility is to build neuromorphic chips that emulate artificial neurons using the same principle, to encode temporal patterns into instantaneously accessible information. This could be part of a low-power sensory processing unit in a robot. The principle we found is likely not limited to lobsters and could be shared by other animals and sensory modalities. • Bobkov, Y. V. and Ache, B. W. (2007). Intrinsically bursting olfactory receptor neurons. J Neurophysiol, 97(2):1052-1057. • Park, I. M., Bobkov, Y. V., Ache, B.
W., and Príncipe, J. C. (2014). Intermittency coding in the primary olfactory system: A neural substrate for olfactory scene analysis. The Journal of Neuroscience, 34(3):941-952. [pdf] Evan and I wrote a summary of the COSYNE 2014 workshop we organized! Scalable models for high-dimensional neural data: [ This blog post is collaboratively written by Evan and Memming ] The Scalable Models workshop was a remarkable success! It attracted a huge crowd from the wee morning hours till the 7:30 pm close of the day. We attracted so much attention that we had to relocate from our original (tiny) allotted room (Superior A) to a (huge) lobby area (Golden Cliff). The talks offered both philosophical perspectives and methodological aspects, reflecting diverse viewpoints and approaches to high-dimensional neural data. Many of the discussions continued the next day in our sister workshop. Here we summarize each talk: Konrad Körding – Big datasets of spike data: why it is coming and why it is useful Konrad started off the workshop by posing some philosophical questions about how big data might change the way we do science. He argued that neuroscience is rife with theories (for instance, how uncertainty is… Shannon’s entropy is the fundamental building block of information theory – a theory of communication, compression, and randomness. Entropy has a very simple definition, $H = -\sum_i p_i \log_2 p_i$, where $p_i$ is the probability of the i-th symbol. However, estimating entropy from observations is surprisingly difficult, and it is still an active area of research. Typically, one does not have enough samples compared to the number of possible symbols (the so-called “undersampled regime”), there is no unbiased estimator [Paninski 2003], and the convergence rate of a consistent estimator can be arbitrarily slow [Antos and Kontoyiannis, 2001]. There are many estimators that aim to overcome these difficulties to some degree.
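To see why naive counting fails in the undersampled regime, here is a minimal sketch (my own toy example, not taken from any of the estimators discussed below) of the "plug-in" (maximum-likelihood) estimator and its strong negative bias when samples are scarce relative to the alphabet size:

```python
import numpy as np

def plugin_entropy(samples, base=2):
    """Naive 'plug-in' (maximum-likelihood) entropy estimate, in bits by default."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(base)

rng = np.random.default_rng(0)
k = 1000                # alphabet size; uniform distribution => true H = log2(k)
true_H = np.log2(k)     # ~9.97 bits

# Undersampled regime: only 100 samples for 1000 possible symbols.
estimates = [plugin_entropy(rng.integers(0, k, size=100)) for _ in range(200)]
# The plug-in estimate can never exceed log2(#samples) here,
# so it sits far below the true entropy: a systematic negative bias.
```

The bias cannot be removed entirely [Paninski 2003]; the estimators in the flow chart below each trade it off in a different way.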
Deciding which estimator to use can be overwhelming, so here’s my recommendation in the form of a flow chart: Let me explain them one by one. First of all, if you have continuous (analogue) observations, read the title of this post. CDM, PYM, DPM, and NSB are Bayesian estimators, meaning that they have explicit probabilistic assumptions. These estimators provide posterior distributions or credible intervals as well as point estimates of entropy. Note that the assumptions made by these estimators do not have to be valid for them to be good entropy estimators. In fact, even if the true distribution is outside the assumed class, these estimators are consistent, and they often give reasonable answers even in the undersampled regime. Nemenman-Shafee-Bialek (NSB) uses a mixture of Dirichlet priors to obtain an approximately uninformative implied prior on entropy. This reduces the bias of the estimator significantly in the undersampled regime, because a priori it could have any entropy. Centered Dirichlet mixture (CDM) is a Bayesian estimator with a special prior designed for binary observations. It comes in two flavors, depending on whether your observations are close to independent (DBer) or the total number of 1's is a good summary statistic (DSyn). Pitman-Yor mixture (PYM) and Dirichlet process mixture (DPM) are for an infinite or unknown number of symbols. In many cases, natural data have a vast number of possible symbols, as in the case of species samples or language, and have power-law (or scale-free) distributions. Power-law distributions can hide a lot of entropy in their tails, in which case PYM is recommended. If you expect exponentially decaying tail probabilities when sorted, then DPM is appropriate. See my previous post for more. Non-Bayesian estimators come in many different flavors: Best upper bound (BUB) estimator is a bias correction method which bounds the maximum error in entropy estimation.
Coverage-adjusted estimator (CAE) uses the Good-Turing estimator for the “coverage” (1 – unobserved probability mass), and combines it with a Horvitz-Thompson estimator of entropy. James-Stein (JS) estimator regularizes entropy by shrinking the empirical histogram toward the uniform distribution with James-Stein shrinkage. The main advantage of JS is that it also produces an estimate of the distribution. Unseen estimator uses a Poissonization of the fingerprint and linear programming to find a likely underlying distribution, and uses its entropy as the estimate. Other notable estimators include (1) the bias correction method by Panzeri & Treves (1996), which has been popular for a long time, (2) the Grassberger estimator, and (3) an asymptotic expansion of NSB that only works in the extremely undersampled regime and is inconsistent [Nemenman 2011]. These methods are faster than the others, if you need speed. There are many software packages available out there. Our estimators CDMentropy and PYMentropy are implemented for MATLAB with a BSD license (by now you surely noticed that this is shameless self-promotion!). For R, some of these estimators are implemented in a package called entropy (in CRAN; written by the authors of the JS estimator). There’s also a python package called pyentropy. Targeting a more neuroscience-specific audience, the Spike Train Analysis Toolkit contains a few of these estimators implemented in MATLAB/C. • A. Antos and I. Kontoyiannis. Convergence properties of functional estimates for discrete distributions. Random Structures & Algorithms, 19(3-4):163–193, 2001. • E. Archer*, I. M. Park*, and J. Pillow. Bayesian estimation of discrete entropy with mixtures of stick-breaking priors. In P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2024–2032. MIT Press, Cambridge, MA, 2012. [PYMentropy] • E. Archer*, I. M. Park*, J. Pillow.
Bayesian Entropy Estimation for Countable Discrete Distributions. arXiv:1302.0328, 2013. [PYMentropy] • E. Archer, I. M. Park, and J. Pillow. Bayesian entropy estimation for binary spike train data using parametric prior knowledge. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, 2013. [CDMentropy] • A. Chao and T. Shen. Nonparametric estimation of Shannon’s index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10(4):429–443, 2003. [CAE] • P. Grassberger. Estimating the information content of symbol sequences and efficient codes. Information Theory, IEEE Transactions on, 35(3):669–675, 1989. • J. Hausser and K. Strimmer. Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks. The Journal of Machine Learning Research, 10:1469–1484, 2009. • I. Nemenman. Coincidences and estimation of entropies of random variables with large cardinalities. Entropy, 13(12):2013–2023, 2011. [Asymptotic NSB] • I. Nemenman, F. Shafee, and W. Bialek. Entropy and inference, revisited. In Advances in Neural Information Processing Systems 14, pages 471–478. MIT Press, Cambridge, MA, 2002. [NSB] • I. Nemenman, W. Bialek, and R. Van Steveninck. Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E, 69(5):056111, 2004. [NSB] • L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1191–1253, 2003. [BUB] • S. Panzeri and A. Treves. Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7:87–107, 1996. • P. Valiant and G. Valiant. Estimating the Unseen: Improved Estimators for Entropy and other Properties. In Advances in Neural Information Processing Systems 26, pp. 2157-2165, 2013. [UNSEEN] • V. Q. Vu, B. Yu, and R. E. Kass. Coverage-adjusted entropy estimation.
Statistics in Medicine, 26(21):4039–4060, 2007. [CAE] This year, NIPS (Neural Information Processing Systems) had a record registration of 1900+ (it has been growing over the years) with a 25% acceptance rate. This year, most of the reviews and rebuttals are also available online. I was one of the many who were live tweeting via #NIPS2013 throughout the main meeting and workshops. Compared to previous years, it seemed like there was less machine learning in the invited/keynote talks. I also noticed more industrial engagement (Zuckerberg from Facebook was here (also this), and so was the Amazon drone) as well as increasing interest in neuroscience. My subjective list of trendy topics of the meeting: low dimensionality, deep learning (and dropout), graphical models, theoretical neuroscience, computational neuroscience, big data, online learning, one-shot learning, calcium imaging. Next year, NIPS will be in Montreal, Canada. I presented 3 papers in the main meeting (hence missed the first two days of poster sessions), and attended 2 workshops (High-Dimensional Statistical Inference in the Brain; Acquiring and analyzing the activity of large neural ensembles; Terry Sejnowski gave the first talk in both). Following are the talks/posters/papers that I found interesting as a computational neuroscientist / machine learning enthusiast. Theoretical Neuroscience Neural Reinforcement Learning (Posner lecture) Peter Dayan He described how theoretical quantities in reinforcement learning, such as the TD error, correlate with neuromodulators such as dopamine. Then he went on to the Q (max) and SARSA (mean) learning rules. The third point of the talk was the difference between model-based and model-free reinforcement learning. Model-based learning can use how the world (state) is organized and plan accordingly, while model-free learning learns values associated with each state. Human fMRI evidence shows an interesting mixture of model-based and model-free learning.
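The Q (max) vs. SARSA (mean) distinction boils down to which next-state value the TD error (the dopamine-like prediction error) bootstraps on. A toy sketch with made-up numbers (my illustration, not Dayan's model); here the "mean" variant is implemented as the policy-averaged (expected-SARSA) target:

```python
import numpy as np

gamma, alpha = 0.9, 0.1
# Toy action values: Q[state, action]
Q = np.array([[0.0, 1.0],   # state 0
              [2.0, 0.5]])  # state 1

def td_error(Q, s, a, r, s_next, rule, policy=None):
    """One-step TD error; 'q' bootstraps on the max, 'sarsa_mean' on the policy average."""
    if rule == "q":
        v_next = Q[s_next].max()      # Q-learning: greedy (max) target
    else:
        v_next = policy @ Q[s_next]   # expected SARSA: mean under the policy
    return r + gamma * v_next - Q[s, a]

# Prediction error for the transition: state 0 --(action 1, reward 1.0)--> state 1
delta_q = td_error(Q, 0, 1, 1.0, 1, "q")
delta_s = td_error(Q, 0, 1, 1.0, 1, "sarsa_mean", policy=np.array([0.5, 0.5]))
Q[0, 1] += alpha * delta_q   # the learning step nudges Q toward the bootstrapped target
```

With these toy values, the max-based error (1.8) exceeds the policy-averaged one (1.125): the two rules disagree exactly when the next-state action values differ.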
A Memory Frontier for Complex Synapses Subhaneil Lahiri, Surya Ganguli Despite the molecular complexity of the synapse, most systems-level neural models describe it as a scalar-valued strength. Biophysical evidence suggests discrete states within the synapse and discrete levels of synaptic strength, which is troublesome because memory will be quickly overwritten for discrete/binary-valued synapses. Surya talked about how to maximize memory capacity (measured as the area under the SNR over time) with synapses with hidden states, over all possible Markovian models. Using the first-passage time, they ordered the states and derived an upper bound: the area is bounded by $O(\sqrt{N}(M-1))$, where $M$ and $N$ denote the number of internal states per synapse and the number of synapses, respectively. Therefore, fewer synapses with more internal states are better for longer memory. A theory of neural dimensionality, dynamics and measurement: the neuroscientist and the single neuron (workshop) Surya Ganguli Several recent studies showed a low-dimensional state-space of trial-averaged population activities (e.g., Churchland et al. 2012, Mante et al. 2013). Surya asks: what would happen to the PCA analysis of neural trajectories if we record from 1 billion neurons? He defines the participation ratio $D = \frac{\left(\sum_i \lambda_i \right)^2}{\sum_i \lambda_i^2}$ as a measure of dimensionality, and through a series of clever upper bounds, estimates the dimensionality of the neural state-space that would capture 95% of the variance given the task complexity. In addition, assuming incoherence (mixed or complex tuning), neural measurements can be seen as random projections of the high-dimensional space; along with low-dimensional dynamics, the data recover the correct true dimension. He claims that in current task designs the neural state-space is limited by task complexity, and we would not see higher dimensions as we increase the number of simultaneously observed neurons.
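The participation ratio is easy to compute from the covariance spectrum of the data. A quick sketch (toy data and hypothetical neuron counts of my own, not from the talk) showing that random readouts of low-dimensional latents yield a low participation ratio no matter how many neurons are recorded:

```python
import numpy as np

def participation_ratio(X):
    """D = (sum lambda_i)^2 / sum lambda_i^2 from the covariance spectrum of X (samples x neurons)."""
    lam = np.linalg.eigvalsh(np.cov(X.T))  # PCA eigenvalues = covariance spectrum
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
# 3 latent dimensions observed through 100 "neurons" via a random readout:
latents = rng.standard_normal((2000, 3))
readout = rng.standard_normal((3, 100))
X = latents @ readout + 0.01 * rng.standard_normal((2000, 100))
pr = participation_ratio(X)   # stays near 3, far below the 100 recorded neurons
```

Doubling the number of observed "neurons" in this sketch would leave `pr` essentially unchanged, which is the point of the talk: dimensionality is set by the latent dynamics, not the recording size.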
Distributions of high-dimensional network states as knowledge base for networks of spiking neurons in the brain (workshop) Wolfgang Maass In a series of papers (Büsing et al. 2011, Pecevski et al. 2011, Habenschuss et al. 2013), Maass showed how noisy spiking neural networks can perform probabilistic inference via sampling. From Boltzmann machines (maximum entropy models) to constraint satisfaction problems (e.g. Sudoku), noisy SNNs can be designed to sample from the posterior, and converge exponentially fast from any initial state. This is done by irreversible MCMC sampling of the neurons, and it can be generalized to continuous time and state space. Epigenetics in Cortex (workshop) Terry Sejnowski In a ketamine-based animal model of schizophrenia, which shows the decreased gamma-band activity in the prefrontal cortex and the decrease in PV+ inhibitory neurons seen in the disease, Aza and Zeb (DNA methylation inhibitors) are known to prevent these effects of ketamine. Furthermore, in Lister 2013, they showed that a special type of DNA methylation (mCH) in the brain grows over the lifespan, coincides with synaptogenesis, and regulates gene expression. Optimal Neural Population Codes for High-dimensional Stimulus Variables Zhuo Wang, Alan Stocker, Daniel Lee They extended the previous year's paper to high dimensions. Computational Neuroscience What can slice physiology tell us about inferring functional connectivity from spikes? (workshop) Ian Stevenson Our ability to infer functional connectivity among neurons is limited by data. Using current injection, he investigated exactly how much data is required for detecting synapses of various strengths under the generalized linear model (GLM). He showed interesting scaling plots both in terms of (the square root of) firing rate and (the inverse) amplitude of the post-synaptic current.
Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream (main) Mechanisms underlying visual object recognition: Humans vs. Neurons vs. Machines (tutorial) Daniel L. Yamins*, Ha Hong*, Charles Cadieu, James J. DiCarlo They built a model that can predict the (average) activity of V4 and IT neurons in response to objects. Current computer vision methods do not perform well under the high variability induced by transformation, rotation, etc., while IT neuron responses seem to be quite invariant to them. By optimizing a collection of convolutional deep networks with different hyperparameter (structural parameter) regimes and combining them, they showed that they can predict the average IT (and V4) responses reasonably well. Least Informative Dimensions Fabian Sinz, Anna Stockl, Jan Grewe, Jan Benda Instead of maximizing the mutual information between the features and the target variable for dimensionality reduction, they propose to minimize the dependence between the non-feature space and the joint of the target variable and the feature space. As a dependence measure, they use HSIC (Hilbert-Schmidt independence criterion: the squared distance between the joint and the product of marginals embedded in a Hilbert space). The optimization problem is non-convex, and to determine the dimension of the feature space, a series of hypothesis tests is necessary. Dimensionality, dynamics and (de)synchronisation in the auditory cortex (workshop) Maneesh Sahani Maneesh compared the underlying latent dynamical systems fit from the synchronized state (drowsy/inattentive/urethane/ketamine/xylazine) and the desynchronized state (awake/attentive/urethane+stimulus/fentanyl/medetomidine/midazolam). From the population response, he fit a 4-dimensional linear dynamical system, then transformed the dynamics matrix into a "true Schur form" such that 2 pairs of 2D dynamics could be visualized.
He showed that the dynamics fit from either state were actually very similar. Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions (main) Extracting information from calcium imaging data (workshop) Eftychios A. Pnevmatikakis, Liam Paninski Eftychios has been developing various methods to infer spike trains from calcium imaging movies. He showed a compressive sensing framework in which spiking activity can be inferred. A plausible implementation could use a digital micromirror device that produces "random" binary patterns of pixels to project the activity. Andreas Tolias (workshop talk) Noise correlations in the brain are small (0.01 range; e.g., Renart et al. 2010). Anesthetized animals have higher firing rates and higher noise correlations (0.06 range). He showed how a latent variable model (GPFA) can be used to decompose the noise correlation into that of the latent and the rest. Using 3D acousto-optical deflectors (AOD), he is observing 500 neurons simultaneously. He (and Dimitri Yatsenko) used latent-variable graphical lasso to enforce a sparse inverse covariance matrix, and found that the estimate is more accurate and very different from the raw noise correlations. Whole-brain functional imaging and motor learning in the larval zebrafish (workshop) Misha Ahrens Using light-sheet microscopy, he imaged the calcium activity of 80,000 neurons simultaneously (~80% of all the neurons) at 1-2 Hz sampling frequency (Ahrens et al. 2013). From the big data collected while the fish was stimulated visually, Jeremy Freeman and Misha analyzed the dynamics (with PCA) and the tuning to orienting stimuli, and made very cool 3D visualizations. Normative models and identification of nonlinear neural representations (workshop) Matthias Bethge In the first half of his talk, Matthias talked about probabilistic models of natural images (Theis et al. 2012), which I didn't understand very well.
In the latter half, he talked about an extension of the GQM (generalized quadratic model) called the STM (spike-triggered mixture). The model is a GQM with quadratic term $\mathbf{x}^\top (\Sigma_0^{-1} - \Sigma_1^{-1}) \mathbf{x}$, if the spike-triggered and non-spike-triggered distributions are Gaussian with covariances $\Sigma_0$ and $\Sigma_1$. When both distributions are allowed to be mixtures of Gaussians, it turns out the nonlinearity becomes a soft-max of quadratic terms, making it an LNLN model. [code on github] Inferring neural population dynamics from multiple partial recordings of the same neural circuit Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke Under certain observability conditions, they stitch together partially overlapping neural recordings to recover the joint covariance matrix. We read this paper earlier in the UT Austin computational neuroscience journal club. Machine Learning Estimating the Unseen: Improved Estimators for Entropy and other Properties Paul Valiant, Gregory Valiant Using "Poissonization" of the fingerprint (a.k.a. Zipf plot, count histogram, pattern, hist-hist, collision statistics, etc.), they find the simplest distribution such that the expected fingerprint is close to the observed fingerprint. This is done by first splitting the histogram into an "easy" part (many observations; more than the square root of the number of observations) and a "hard" part, then applying two linear programs to the hard part to optimize the (scaled) distance and the support. The algorithm "UNSEEN" has a free parameter that controls the error tolerance. Their theorem states that the total variation distance is bounded by $1/\sqrt{c}$ with only $k = \frac{c\, n}{\log n}$ samples, where $n$ denotes the support size. The resulting estimate of the fingerprint can be used to estimate entropy, unseen probability mass, support size, and total variation distance.
(code in appendix) A simple example of Dirichlet process mixture inconsistency for the number of components Jeffrey W. Miller, Matthew T. Harrison They had already shown that the number of clusters inferred from a DP mixture model is inconsistent (at the ICERM workshop 2012, and last year's NIPS workshop). In this paper they show theoretical examples, one of which says: if the true distribution is a normal distribution, then the probability that the number of components inferred by DPM (with $\alpha = 1$) equals 1 goes to zero as the number of samples grows. A Kernel Test for Three-Variable Interactions Dino Sejdinovic, Arthur Gretton, Wicher Bergsma To detect a 3-way interaction which has a 'V'-structure, they made a kernelized version of the Lancaster interaction measure. Unfortunately, the Lancaster interaction measure is incorrect for 4+ variables, and the correct version becomes very complicated very quickly. B-test: A Non-parametric, Low Variance Kernel Two-sample Test Wojciech Zaremba, Arthur Gretton, Matthew Blaschko This work brings both test power and computational speed (Gretton et al. 2012) to MMD by using a blocked estimator, making it more practical. Robust Spatial Filtering with Beta Divergence Wojciech Samek, Duncan Blythe, Klaus-Robert Müller, Motoaki Kawanabe A supervised dimensionality reduction technique: a connection between the generalized eigenvalue problem and the KL divergence, generalized to the beta divergence to gain robustness to outliers in the data. Optimizing Instructional Policies Robert Lindsey, Michael Mozer, William J. Huggins, Harold Pashler This paper presents a meta-active-learning problem where active learning is used to find the best policy to teach a system (e.g., a human). This is related to curriculum learning, where examples are fed to the machine learning algorithm in a specially designed order (e.g., easy to hard). This gave me ideas to enhance Eleksius! Reconciling “priors” & “priors” without prejudice?
Remi Gribonval, Pierre Machart This paper connects Bayesian least-squares (MMSE) estimation and MAP estimation under a Gaussian likelihood. Their theorem shows that an MMSE estimate with some prior is also a MAP estimate under some other prior (or equivalently, a regularized least squares). There were many more interesting things, but I'm going to stop here! [EDIT: check out these blog posts by Paul Mineiro, hundalhh, Yisong Yue, Sebastien Bubeck, Davide Chicco] The Computational Neuroscience (CNS) conference is held annually, alternating between America and Europe. This year it was held in Paris; next year it is in Québec City, Canada. There were more theoretical and simulation-based studies than experimental ones. Among the experimental studies, there were a lot of oscillation- and synchrony-related subjects. Disclaimer: I was occupied with several things and was not attending 100% of the conference, so my selection is heavily biased. These notes are primarily for my future reference. Simon Laughlin. The influence of metabolic energy on neural computation (keynote) There are three main categories of energy cost in the brain: (1) maintenance, (2) spike generation, and (3) synapses. Assuming a finite energy budget for the brain, the optimal efficient coding strategy can vary from a small number of neurons with high rates to a large population with sparse coding [see Fig 3, Laughlin 2001]. Variation of cost ratios across animals may be associated with different coding strategies that optimize for energy/bits. He illustrated the balance through various law-of-diminishing-returns plots.
He emphasized reverse engineering the brain, and concluded with the 10 principles of neural design (transcribed from the slides thanks to the photo by @neuroflips): (1) save on wire, (2) make components irreducibly small, (3) send only what is needed, (4) send at the lowest rate, (5) sparsify, (6) compute directly with analogue primitives, (7) mix analogue and digital, (8) adapt, match and learn, (9) complexify (elaborate to specialize), (10) compute with chemistry?????? (question marks are from the original slide). Sophie Denève. Rescuing the spike (keynote) She proposed that the observed high trial-to-trial variability in spike trains from single neurons is due to degeneracy in the population encoding. There are many ways the presynaptic population can evoke similar membrane potential fluctuations in a linear readout neuron, and hence she claims that, through precisely controlled lateral inhibition, the neural code is precise at the population level but seems variable if we only observe a single neuron. She briefly mentioned how a linear dynamical system might be implemented in such a coding scheme, but it seemed limited as to what kind of computations can be achieved. There were several noise correlation (joint variability in the population activity) related talks: Joel Zylberberg et al. Consistency requirements determine optimal noise correlations in neural populations The "sign rule" says that if the signal correlation is opposite in sign to the noise correlation, linear Fisher information (and OLE performance) is improved (see Fig 1, Averbeck, Latham, Pouget 2006). They showed a theorem confirming the sign rule in a general setup, and furthermore showed that the optimal noise correlation does NOT necessarily obey the sign rule (see Hu, Zylberberg, Shea-Brown 2013). Experiments in the retina do not obey the sign rule: the noise correlation is positive even for cells tuned to the same direction; however, it is still near optimal according to their theory.
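The sign rule can be checked directly with the linear Fisher information $I = f'^\top \Sigma^{-1} f'$. A two-neuron toy example (my numbers, not from the paper):

```python
import numpy as np

def linear_fisher_info(f_prime, Sigma):
    """Linear Fisher information I = f'^T Sigma^{-1} f' about a stimulus s."""
    return f_prime @ np.linalg.solve(Sigma, f_prime)

# Two similarly tuned neurons: equal tuning-curve slopes => positive signal correlation.
f_prime = np.array([1.0, 1.0])

def noise_cov(rho):
    """Unit-variance noise covariance with correlation coefficient rho."""
    return np.array([[1.0, rho], [rho, 1.0]])

I_neg = linear_fisher_info(f_prime, noise_cov(-0.5))   # noise corr opposite to signal corr
I_zero = linear_fisher_info(f_prime, noise_cov(0.0))   # independent noise
I_pos = linear_fisher_info(f_prime, noise_cov(+0.5))   # same sign as signal corr
# For f' = (1, 1) this reduces to I = 2 / (1 + rho), so I_neg > I_zero > I_pos.
```

Negative noise correlation lets averaging cancel the shared noise without cancelling the shared signal, which is exactly the intuition behind the sign rule; the theorem's point is that this intuition is sufficient but not necessary for optimality.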
Federico Carnevale et al. The role of neural correlations in a decision-making task During a vibration detection task, cross-correlations among neurons in the premotor cortex (in a 250 ms window) were shown to be dependent on behavior (see Carnevale et al. 2012). Federico told me that there were no sharp peaks in the cross-correlation. He further extrapolated the choice probability to the network level based on a multivariate Gaussian approximation and a simplification categorizing neurons into two classes (transient or sustained response). Alex Pouget and Peter Latham each gave talks in the Functional role of correlations workshop. Both were on Fisher information and the effect of noise correlations. Pouget's talk focused on "differential correlations", the noise in the direction of the manifold along which the tuning curves encode information (noise that looks like signal). Peter talked about why there are so many neurons in the brain with linear Fisher information and additive noise (but I forgot the details!) On the first day of the workshop, I participated in the New approaches to spike train analysis and neuronal coding workshop organized by Conor Houghton and Thomas Kreuz. Florian Mormann. Measuring spike-field coherence and spike train synchrony He emphasized using nonparametric statistics for testing the circular variable of interest: the phase of the LFP oscillation conditioned on spike timings. In the second part, he talked about the spike-distance (see Kreuz 2012), which is a smooth, time-scale invariant measure of instantaneous synchrony among spike trains. Rodrigo Quian Quiroga. Extracting information in time patterns and correlations with wavelets Using Haar wavelet time bins as the feature space, he proposed a scale-free linear analysis of spike trains. In addition, he proposed discovering relevant temporal structure through feature selection using mutual information. The method doesn't seem to be able to find higher-order interactions between time bins.
Ralph Andrzejak. Detecting directional couplings between spiking signals and time-continuous signals Using distance-based directional coupling analysis (see Chicharro, Andrzejak 2009; Andrzejak, Kreuz 2011), he showed that it is possible to find unidirectional coupling between continuous signals and spike trains via spike train distances. He mentioned the possibility of using spectral Granger causality for a similar purpose. Adrià Tauste Campo. Estimation of directed information between simultaneous spike trains in decision making Bayesian conditional information estimation through the use of context-tree weighting was used to infer directional information (analogous to Granger causality, but with mutual information). A compact Markovian structure is learned for the binary time series. I presented a poster on Bayesian entropy estimation in the main meeting, and gave a talk about nonparametric (kernel) methods for spike trains in the workshop. Last Sunday (April 28th, 2013) was the 8th Black board day (BBD), a small informal workshop I organize every year. It started 8 years ago on my hero Kurt Gödel's 100th birthday. This year, I found out that April 30th (1916) is Claude Shannon's birthday, so I decided the theme would be his information theory. I started by introducing probabilistic reasoning as an extension of logic in this uncertain world (as Michael Buice once told us). I quickly introduced two key concepts: Shannon's entropy $H(X) = -\sum_i p_i \log_2 p_i$, which additively quantifies the uncertainty of a sequence of independent random quantities in bits, and mutual information $I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)$, which quantifies how much uncertainty in $X$ is reduced by the knowledge of $Y$ (and vice versa; it is symmetric).
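Both quantities are easy to compute for a small discrete joint distribution; here is a sketch (toy probability table of my own) using the equivalent identity $I(X;Y) = H(X) + H(Y) - H(X,Y)$:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector/table (0 log 0 := 0)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(pxy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint probability table pxy."""
    return entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0)) - entropy(pxy)

# A noiseless binary channel: Y reveals X completely.
pxy = np.array([[0.5, 0.0],
                [0.0, 0.5]])
h_x = entropy(pxy.sum(axis=1))   # H(X) = 1 bit
i_xy = mutual_information(pxy)   # I(X;Y) = 1 bit: all uncertainty about X removed
# For independent X and Y (e.g., a uniform 2x2 table), the MI would be 0 bits.
```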
I showed a simple example of the source coding theorem, which states that a symbol sequence can be maximally compressed to the length given by its entropy (information content), and stated the noisy channel coding theorem, which provides an achievable limit on the information rate that can be passed through a channel (the channel capacity). Legend says that von Neumann told Shannon to use the word "entropy" due to its similarity to the concept in physics, so I gave a quick picture that connects the Boltzmann entropy to Shannon's entropy. Andrew Tan: Holographic entanglement entropy Andrew wanted to show how space-time structure can be derived from holographic entanglement entropy, and furthermore to link it to graphical models such as the restricted Boltzmann machine. He gave overviews of quantum mechanics (deterministic linear dynamics of the quantum states), the density matrix, von Neumann entropy, and entanglement entropy (the entropy of a reduced density matrix, where we assume partial observation and marginalize over the rest). Then, he talked about the asymptotic behaviors of entropy for the ground state and the critical regime, introduced a parameterized form of Hamiltonian that gives rise to a specific dependence structure in space-time, and sketched what the dimension of the boundary and the area of the dependence structure are. Unfortunately, we did not have enough time to finish what he wanted to tell us (see Swingle 2012 for details). Information theory is widely applied to neuroscience and sometimes to machine learning. Jonathan, sympathizing with Shannon's note (1956) called "The Bandwagon", criticized the possible abuse/overselling of information theory. First, Jonathan focused on the derivation of a "universal" rate-distortion theory based on the "information bottleneck principle". Then, he continued with his recent ideas on optimal neural codes under different Bayesian distortion functions.
He showed a multiple-choice exam example where maximizing mutual information can be worse, and a linear neural coding example for different cost functions. • C. E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27(3):379–423, 1948. • E. T. Jaynes. Information Theory and Statistical Mechanics. Physical Review, 106(4):620–630, 1957. • B. Swingle. Entanglement renormalization and holography. Physical Review D, 86:065007, 2012. • MIT OpenCourseWare lectures: Statistical Mechanics I: Statistical Mechanics of Particles, Statistical Mechanics II: Statistical Physics of Fields (recommended by Andrew Tan) • C. E. Shannon. The Bandwagon. IRE Transactions on Information Theory, 1956. • N. Tishby, F. Pereira, W. Bialek. The Information Bottleneck Method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pp. 368-377, 1999. Feb 28–Mar 5 was the 10th COSYNE meeting, and my 6th time participating. Thanks to my wonderful collaborators, I had a total of 4 posters in the main meeting (Jonathan Pillow had 7, tied with Larry Abbott for the most abstracts). Hence, I didn't have a chance to sample enough posters for the first two nights (I also noticed a few presentations that overlapped with NIPS 2012). I tried to be a bit more social this year; I organized a small (unofficial) Korean social (with the help of Kijung Yoon) and a tweet-up, and enjoyed many social drinking nights. Following are my notes on what I found interesting. EDIT: here are some other blog posts about the meeting: [Jonathan Pillow] [Anne Churchland] Main meeting—Day 1 William Bialek. Are we asking the right questions? Not all sensory information is equally important. Rather, Bialek claims that the bits that can predict the future are the important ones.
Since neurons only have access to the presynaptic neurons’ spiking patterns, this should be achieved by neural computation that predicts its own future patterns (presumably under some constraints to prevent trivial solutions). When such information is measured over time, at least in some neurons in the fly visual system, its decay is very slow: “Even a fly is not Markovian”. This indicates that the neuronal population state may be critical (see Bialek, Nemenman, Tishby 2001).

Evan Archer, Il Memming Park, Jonathan W Pillow. Semi-parametric Bayesian entropy estimation for binary spike trains [see Evan's blog]

Jacob Yates, Il Memming Park, Lawrence Cormack, Jonathan W Pillow, Alexander Huk. Precise characterization of multiple LIP neurons in relation to stimulus and behavior

Jonathan W Pillow, Il Memming Park. Beyond Barlow: a Bayesian theory of efficient neural coding

Main meeting—Day 2

Eve Marder. The impact of degeneracy on system robustness
She stressed how there can be multiple implementations of the same functionality, a property she refers to as degeneracy. Her story centered around modeling the lobster STG oscillation (side note: the connectome is not enough to predict behavior). Since receptors and channels rapidly decay and are rebuilt, there must be homeostatic mechanisms that constantly tune parameters to maintain the vital oscillatory bursting in the STG. There are multiple stable fixed points in the parameter space, and single-cell RNA quantification supports this.

Mark H Histed, John Maunsell. The cortical network can sum inputs linearly to guide behavioral decisions
Using optogenetics in behaving mice, they tried to resolve the synchrony vs. rate code debate. He showed that, behaviorally, the population exhibited almost perfect integration of weak input and was not sensitive to synchrony. Hence, he claims that the brain may just as well operate on linear population codes.

Arnulf Graf, Richard Andersen.
Learning to infer eye movement plans from populations of intraparietal neurons
Spike trains from monkey area LIP were used for an “eye-movement intention” based brain–machine interface. During the brain–control period, LIP neurons changed their tuning. Decoding was done with a MAP decoder that was updated online through the trials. To encourage(?) the monkey, the brain–control period had a different target distribution, and the decoder took this “behavioral history” or “prior” into account. Neurons with the lowest performance improved the most, demonstrating the ability of LIP neurons to swiftly change their firing patterns.

Il Memming Park, Evan Archer, Nicholas Priebe, Jonathan W Pillow. Got a moment or two? Neural models and linear dimensionality reduction

David Pfau, Eftychios A. Pnevmatikakis, Liam Paninski. Robust learning of low dimensional dynamics from large neural ensembles
Latent dynamics with an arbitrary noise process are recovered from high-dimensional spike train observations using low-rank optimization techniques (convex relaxation). Even a spike-history filter can be included by assuming a low-rank matrix corrupted by sparse noise. A nice method that I look forward to seeing applied to real data.

Main meeting—Day 3

Carlos Brody. Neural substrates of decision-making in the rat
Using rats trained in a psychophysics factory on the Poisson click task, he showed that rats are noiseless integrators by fitting a detailed drift-diffusion model with 8 (or 9?) parameters. From the model, he extracted detailed expected decision-variable statistics related to activity in PPC and FOF (analogues of LIP and FEF in monkeys), which showed that FOF is more threshold-like, and PPC more integrator-like, in their firing-rate representations. However, upon pharmacologically disabling either area, the rat psychophysics was not harmed, which indicates that the accumulation of sensory evidence happens somewhere earlier in the information processing.
(Jeffrey Erlich said it might be auditory cortex during the workshop.) [EDIT: Brody's Science paper is out.]

N. Parga, F. Carnevale, V. de Lafuente, R. Romo. On the role of neural correlations in decision-making tasks
I had a hard time understanding the speaker, but it was interesting to see how spike-count correlations and a Gaussian assumption for decision making could accurately predict the choice probability.

Jonathan Aljadeff, Ronen Segev, Michael J. Berry II, Tatyana O. Sharpee. Singular dimensions in spike triggered ensembles of correlated stimuli
Because the eigenvalues of the stimulus covariance of natural scenes are highly concentrated, they show that the result of spike-triggered covariance analysis (using the difference between the raw STC and the stimulus covariance) contains a spurious component corresponding to the largest eigenvalue. They argue this using random matrix theory, and proposed a correction that projects out the spurious dimension before the STC analysis; surprisingly, they then recover more dimensions with eigenvalues larger than the surrogate. I wonder if a model-based approach like the GQM (or BSTC) would do a better job for those ill-conditioned stimulus distributions.

Gergo Orban, Pierre-Olivier Polack, Peyman Golshani, Mate Lengyel. Stimulus-dependence of membrane potential and spike count variability in V1 of behaving mice
It is well known that the Fano factor of spike trains is reduced when a stimulus is given (e.g. Churchland et al. 2010). Gergo measured contrast-dependent trial-to-trial variability of V1 membrane potentials in awake mice. By computing the statistics from 5 out of 6 cycles of a repeated stimulus, he found that the variability is reduced as the contrast gets stronger. The spikes were clipped from the membrane potential for this analysis.

Jakob H Macke, Iain Murray, Peter Latham. How biased are maximum entropy models of neural population activity?
This was based on their NIPS 2011 paper with the same title.
If you use a maximum entropy model as an entropy estimator then, like all entropy estimators, your estimate of entropy will be biased. They have an exact form of the bias, which is inversely proportional to the number of samples if the model class is right.

Ryan P Adams, Geoffrey Hinton, Richard Zemel. Unsupervised learning of latent spiking representations
By taking the small-bin-size limit of an RBM, they built a point-process model with continuous coupling to a hidden point process. The work still seems preliminary. They used a Gaussian process to constrain the coupling to be smooth.

Main meeting—Day 4

D. Acuna, M. Berniker, H. Fernandes, K. Kording. An investigation of how prior beliefs influence decision-making under uncertainty in a 2AFC task
Subjects performing optimal Bayesian inference could be using several different strategies to generate behavior from the posterior; sampling from the posterior vs. MAP inference were compared. Different strategies predict the just-noticeable difference (JND) as a function of prior uncertainty. They find that human subjects were consistent with MAP inference, not sampling.

Phillip N. Sabes. On the duality of motor cortex: movement representation and dynamical machine
He poses the question of whether activity in the motor cortex is a representation (tuning curve) of motor-related variables, or whether the motor cortex is just generating dynamics for motor output. Also, he says jPCA applied to feed-forward non-normal dynamics shows results similar to Churchland et al. 2012: not necessarily oscillating. He suggested that dynamics is the way to interpret them, but the neurons were, after all, also tuned in the end.

Workshops—Day 5

Randy Bruno. The neocortical circuit is two circuits. (Why so many layers and cell types? workshop)
By studying the thalamic input to the cortex in a sedated animal, he discovered that thalamic axons synapse 80% onto layer 4 and 20% onto layer 5/6 (more 5 than 6; see Oberlaender et al. 2012).
He blocked activity in L4 and above, which did not change the L5/6 membrane potential response to whisker deflection. He suggested that L1/2/3/4 and L5/6 are two different circuits that can function independently.

Alex Huk. Temporal dynamics of sensorimotor integration in the primate dorsal stream (Neural mechanism for orienting decisions across the animal kingdom workshop)

Workshops—Day 6

Matteo Carandini. Adaptation to stimulus statistics in visual cortex (Priors in perception, decision-making and physiology workshop)
Matteo showed adaptation in LGN and V1 due to changes in the input statistics. For LGN, the position of the stimulus was varied, which in turn shifted V1 receptive fields (V1 didn’t adapt; it just didn’t know about the adaptation in LGN). For V1, random full-field orientations were used (as in Benucci et al. 2009) but with a sudden change in the distribution over orientations. The effect on V1 tuning could be explained by changes in gain for each neuron and each stimulus orientation. This equalized the population (firing rate) response. [EDIT: this is published in Nat Neurosci 2013]

Eero Simoncelli. Implicit embedding of prior probabilities in optimally efficient neural populations (Priors in perception, decision-making and physiology workshop)
Eero presented an elegant theory (work with Deep Ganguli presented at NIPS 2010; Evan’s review) of optimal tuning curves given the prior distribution. He showed that 4 visual and 2 auditory neurophysiology and psychophysics datasets can be explained well with it.

Albert Lee. Cellular mechanisms underlying spatially-tuned firing in the hippocampus (Dendritic computation in neural circuits workshop)
Among the place cells there are also silent neurons in CA1. Using impressive whole-cell patch recordings from CA1 cells in awake, freely moving mice, he showed that not only do the silent cells not spike, they also lack tuned membrane potential fluctuations.
However, by injecting current into the cell so that it would have a higher membrane potential (closer to threshold), they successfully activated silent cells and made them place cells (Lee, Lin and Lee 2012).

Marina Garrett. Functional and structural mapping of mouse visual cortical areas (A new chapter in the study of functional maps in visual cortex workshop)
She used intrinsic imaging to find continuous retinotopic maps. Using the gradient of the retinotopy, in combination with the eccentricity map, she defined borders of visual areas. She defined 9 (or 10?) areas surrounding V1 (Marshel et al. 2011). Several areas had spatial selectivity, while others had temporal selectivity, which are hallmarks of the parietal and temporal pathways (dorsal and ventral in primates). She also found connectivity patterns that became increasingly multi-modal for higher areas.
Copyright © University of Cambridge. All rights reserved. '3 Rings' printed from http://nrich.maths.org/

I found a number of small bracelets and rings on a table some time ago and I noticed how some were on their own, others were touching at the edges, others were overlapping each other and some small ones had found themselves inside larger ones. I took two of these, one ring and one bracelet, and explored what possibilities there were. I thought that this would be the next challenge for you all: to look at the situation when you have three rings, circles, bracelets... it doesn't matter what they really are or what size they are. They could even expand and get bigger, or get smaller, if you liked. But, thinking of the four things I noticed at the start:

1) TOUCHING
2) OVERLAPPING
3) SEPARATE
4) INSIDE/OUTSIDE

I wonder what would be the number of ways in which 3 such circles could be arranged? Here are some ways; remember, I said they could be different sizes each time, but I've coloured them so that it is easy to know which one we are talking about. Well, I feel you could carry on at this point. Just a few points to remember: when writing, you must say something about each of the three circles/rings/bracelets. Three separate ones could be anywhere, yet separate, and they would all count as one arrangement; the same kind of thing goes for any other arrangement: if the words are the same then, for this challenge, the arrangement is the same. You could now ask, "I wonder what would happen if...?"
Need to calculate yield to maturity on a bond. N=10, PV=1058, PMT=73 (7.3% interest), FV=1000. Can someone help with this? I come up with 8.12 but my school finance lab says it is 6.49. Can someone explain how they come up with that number? Please.
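For what it's worth, the finance lab's answer can be checked numerically. The yield to maturity is the rate r that makes the present value of the ten $73 coupons plus the $1000 face value equal the $1058 price; a simple bisection (a sketch, not any particular calculator's method) lands on about 6.49%:

```python
def bond_price(rate, n=10, coupon=73.0, face=1000.0):
    """Present value of n annual coupons plus the face value, at a given rate."""
    return sum(coupon / (1 + rate) ** t for t in range(1, n + 1)) \
        + face / (1 + rate) ** n

def ytm(price, lo=0.0001, hi=1.0, tol=1e-10):
    """Bisect on the rate; bond_price is strictly decreasing in the rate."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(mid) > price:
            lo = mid   # computed price too high -> the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

print(round(100 * ytm(1058.0), 2))  # -> 6.49
```

A quick sanity check that needs no computation at all: the bond trades above par (1058 > 1000), so its yield must be below the 7.3% coupon rate. That rules out 8.12 and is consistent with 6.49.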
[FOM] 277:Strict Predicativity
Harvey Friedman friedman at math.ohio-state.edu
Wed Apr 5 13:58:26 EDT 2006

We give a straightforward analysis of one of many flavors of what is generally called "predicativity". The one we present is called strict predicativity in order to distinguish it from other flavors of "predicativity". Our proposed analysis of "strict predicativity" is substantially weaker than the Feferman/Schutte analysis of "predicativity". However, strict predicativity is sufficiently liberal so as to allow all of the obviously natural mathematics that I am aware of for which mathematicians do not easily sense that something logically exotic is going on connected with, broadly speaking, "predicativity". Strict predicativity seems to have the advantage of perhaps lending itself particularly well to formal analysis.

The issue centers around the construction {n: phi(n)} where phi may have variables over nonnegative integers in addition to n, and also phi may have variables over sets of nonnegative integers. Below, by set we will always mean: set of nonnegative integers.

Now prima facie, phi(n) does not have a definite meaning, as the ranges of the set (of natural number) quantifiers in phi(n) are not fixed. They are not fixed, because the sets do not form a completed totality. So we will accept that {n: phi(n)} exists if we can PROVE that it doesn't make any difference what completed totality the set variables in phi range over, as long as that completed totality approximates what we already know about the incompleted totality of all sets.

Formally, in order to get things going in a completely nonproblematic way, let us start off with the system ACA_0. (This is not really necessary, but fine for this first posting.) To this, we add the following rule of proof:

COMPREHENSION RULE. Let phi, psi be formulas of the language of second order arithmetic, and n be a variable over the nonnegative integers. Suppose we have proved psi.
Also suppose that we have proved that for all enumerated families A,B of sets of nonnegative integers that include the set parameters of phi, if psi holds in A,B, then phi holds in A iff phi holds in B. Then we can conclude that {n: phi} exists.

Because we have started with ACA_0, there is no problem making sense of the above talk of "sufficiently inclusive enumerated families" appropriately as a single set of nonnegative integers. We also add the following rule of proof:

INDUCTION RULE. Let phi be a formula of the language of second order arithmetic, and n be a variable over the nonnegative integers. Suppose we can prove the formulas phi[n/0], and phi implies phi[n/n+1]. Then we can conclude phi.

There are variants of the Comprehension Rule that don't feature enumerated families. We expect to take this up in the future.

Now, what can we say about the above formal system, SP = strict predicativity? It appears that SP is measured by omega^omega Turing jumps. This is of course far less than Gamma_0 Turing jumps. But I don't know of any Theorem in nature that is strictly in between.

I use http://www.math.ohio-state.edu/%7Efriedman/ for downloadable manuscripts.

This is the 277th in a series of self contained numbered postings to FOM covering a wide range of topics in f.o.m. The list of previous numbered postings #1-249 can be found at http://www.cs.nyu.edu/pipermail/fom/2005-June/008999.html in the FOM archives, 6/15/05, 9:18PM. NOTE: The title of #269 has been corrected from the original.

250. Extreme Cardinals/Pi01 7/31/05 8:34PM
251. Embedding Axioms 8/1/05 10:40AM
252. Pi01 Revisited 10/25/05 10:35PM
253. Pi01 Progress 10/26/05 6:32AM
254. Pi01 Progress/more 11/10/05 4:37AM
255. Controlling Pi01 11/12 5:10PM
256. NAME:finite inclusion theory 11/21/05 2:34AM
257. FIT/more 11/22/05 5:34AM
258. Pi01/Simplification/Restatement 11/27/05 2:12AM
259. Pi01 pointer 11/30/05 10:36AM
260. Pi01/simplification 12/3/05 3:11PM
261. Pi01/nicer 12/5/05 2:26AM
262.
Correction/Restatement 12/9/05 10:13AM
263. Pi01/digraphs 1 1/13/06 1:11AM
264. Pi01/digraphs 2 1/27/06 11:34AM
265. Pi01/digraphs 2/more 1/28/06 2:46PM
266. Pi01/digraphs/unifying 2/4/06 5:27AM
267. Pi01/digraphs/progress 2/8/06 2:44AM
268. Finite to Infinite 1 2/22/06 9:01AM
269. Pi01,Pi00/digraphs 2/25/06 3:09AM
270. Finite to Infinite/Restatement 2/25/06 8:25PM
271. Clarification of Smith Article 3/22/06 5:58PM
272. Sigma01/optimal 3/24/06 1:45PM
273: Sigma01/optimal/size 3/28/06 12:57PM
274: Subcubic Graph Numbers 4/1/06 11:23AM
275: Kruskal Theorem/Impredicativity 4/2/06 12:16PM
276: Higman/Kruskal/impredicativity 4/4/06 6:31AM

Harvey Friedman
outline the following two amounts

December 8th 2012, 03:12 AM
outline the following two amounts
This is from my exam today and I did not know how to solve that kind of problem. It's problem no. 1 and it says: outline (describe) the following two sets. Attachment 26140

December 8th 2012, 03:52 AM
Re: outline the following two amounts
You do know that you posted it upside down, don't you? It's also in Swedish which I, at least, cannot read. Surely, it wouldn't have been that hard to just type in the problem yourself. I assume that you are asked to describe, or write in a more simplified way, the sets
a) $\{x\in R: (x-1)(x-5)\le 0\}\cap\{x\in R: (x-3)(x-1)> 0\}\cap \{x\in R: (x-2)(x-6)\le 0\}$
b) $\{z\in C: Re((z-1)(\overline{z}+i))= 1/2\}$
The first is simply asking for all numbers that satisfy all three of the inequalities. Do you know how to find the numbers satisfying the inequalities separately? For example, (x-1)(x-5)= 0 when x= 1 or x= 5. Since the product is a continuous function of x, only those numbers can separate "<" and ">". In particular, if x= 2 (between 1 and 5), then (2-1)(2-5)= (1)(-3)= -3< 0, so x= 2 does satisfy the first inequality. You can check a number less than 1, say 0, and a number larger than 5, say 6, to see that numbers less than 1 or larger than 5 do NOT satisfy it. That is, the first set is simply the interval $1\le x\le 5$ of all numbers between 1 and 5, including the endpoints. Find the solution sets of the other two inequalities in the same way and see what numbers, if any, satisfy all three.
(b) is harder. In fact, I am surprised you would have trouble with (a) if you are in a course where you are expected to be able to do (b). I would write z= x+ iy so that $\overline{z}= x- iy$. Then $z- 1= (x- 1)+ iy$ and $\overline{z}+i= x+ i(1- y)$, and then $(z-1)(\overline{z}+i)= [(x-1)+iy][x+ i(1-y)]= (x(x-1)- y(1-y))+ i(xy+ (x-1)(1-y))= (x^2+ y^2- x- y)+ (x+ y- 1)i$. Find the real part of that and set it equal to 1/2. What is the graph of that in the complex plane?
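Both parts can be sanity-checked numerically (a sketch in Python; a grid scan is of course evidence, not a proof):

```python
import math

# Part (a): intersect the three inequality sets by testing grid points on [0, 7].
def in_all_three(x):
    return ((x - 1) * (x - 5) <= 0
            and (x - 3) * (x - 1) > 0
            and (x - 2) * (x - 6) <= 0)

pts = [i / 100 for i in range(0, 701)]
sol = [x for x in pts if in_all_three(x)]
print(min(sol), max(sol))  # suggests the half-open interval (3, 5]

# Part (b): Re((z-1)(conj(z)+i)) = 1/2 reduces to x^2 + y^2 - x - y = 1/2,
# i.e. the circle (x - 1/2)^2 + (y - 1/2)^2 = 1; verify on points of that circle.
def re_part(z):
    return ((z - 1) * (z.conjugate() + 1j)).real

for k in range(12):
    t = 2 * math.pi * k / 12
    z = complex(0.5 + math.cos(t), 0.5 + math.sin(t))
    assert abs(re_part(z) - 0.5) < 1e-12
print("circle check passed")
```

The hint about (x - 1/2)^2 = x^2 - x + 1/4 is exactly the completing-the-square step that turns the real-part equation into the circle above.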
December 8th 2012, 05:19 AM
Re: outline the following two amounts
No, I did not know it was upside down because I uploaded it through my phone :P ... I did translate it in my text :S The thing is, I started (a) the same way you did, but you don't have to say it like that. I never actually read about this in my book... I'm searching for it in my book but can't find it. I'll see what progress I can make.

December 8th 2012, 05:17 PM
Re: outline the following two amounts
Note that (x - 1/2)^2 = x^2 - x + 1/4. This may prove useful.
MathGroup Archive: July 2002

RE: = versus := and trying to speed up calculations
• To: mathgroup at smc.vnet.net
• Subject: [mg35438] RE: [mg35410] = versus := and trying to speed up calculations
• From: "DrBob" <majort at cox-internet.com>
• Date: Fri, 12 Jul 2002 04:29:03 -0400 (EDT)
• Reply-to: <drbob at bigfoot.com>
• Sender: owner-wri-mathgroup at wolfram.com

Here's a recursive definition for n! that saves answers so they don't have to be computed again:

f[0] = 1;
f[n_] := f[n] = n f[n - 1]

?? f

f[0] = 1
f[1] = 1
f[n_] := f[n] = n*f[n - 1]

The rule defined by SetDelayed (":=") is only invoked once for each n; the next time the same n is used, the rule defined by Set ("=") takes over. The downside is that you're storing a rule for each value of n encountered. You also have to compute the low-level values ahead of time. If the above code is followed immediately by f[2000], you get:

$RecursionLimit::"reclim": "Recursion depth of 256 exceeded."

Instead you have to compute from the bottom up (the first time you want to reach 2000):

f /@ Range[1, 2000, 256]; f[2000]

In your problem, the memory requirements of this method may be prohibitive. If so, you might experiment with saving SOME intermediate results but not others.

Bobby Treat

-----Original Message-----
From: Geoff Tims [mailto:Geoff at swt.edu]
To: mathgroup at smc.vnet.net
Subject: [mg35438] [mg35410] = versus := and trying to speed up calculations

I have the following function and it is called many many times in a program I have written. This is a bracket operator for a certain type of Lie algebra I am looking at, and it must be called over and over to test whether or not the Jacobi identity is actually 0 as it should be. With very low numbers n, my program runs in a second or less. If n is around 20+ or if the coefficients are large, the program takes nearer a minute or two.
That's not a long time, but I have a feeling that it's having to calculate the Bracket many times in the program, and I'm hoping to get rid of that. However, I don't understand the differences between := and = well enough. I read the help files, but I don't understand the subtle differences, such as why I can call Bracket[stuff, something] := somethingelse more than once, but it doesn't seem as if I can use = more than once.

Bracket[a_.*e[i_], b_.*e[j_]] := Which[
  i + j > n, 0,
  i == j, 0,
  i == 1, a*b*e[j + 1],
  i == 2 && j == 3, 14 a*b*e[7],
  i == 3 && j == 4, 0,
  i < j, -a*b*Bracket[e[i + 1], e[j - 1]] + Bracket[e[1], Bracket[e[i], e[j - 1]]],
  i > j, -a*b*Bracket[e[j], e[i]]]

(* bilinear function *)
Bracket[mul_, expr_Plus] := Map[Bracket[mul, #] &, expr];
Bracket[expr_Plus, mul_] := Map[Bracket[#, mul] &, expr];
Bracket[0, x_] := 0
Bracket[x_, 0] := 0

Any help would be much appreciated.

Geoff Tims
Why You Should Ignore the Prerequisites in Math Classes

You’ve seen the prerequisites part of a syllabus or course listing that begins: To take this course you should have completed… But what does that really mean anyway?

A prerequisite is a way to keep out the rubbish. Have you ever sat in a class with someone asking tons of questions about things they should have known before signing up? To prevent this, instructors and institutions have instituted the “stay out if you’re going to get on everyone’s nerves” clause. It’s called the prerequisites.

Prerequisites are a way out of a class that you didn’t want to take anyway. Prerequisites are designed in such a way as to allow you to escape. If you have any apprehensions about taking the class at all, you can just refrain from ever signing up – because of the prerequisites. Regardless of whether you have the prereq’s, you can play this card. The course description reads: To take this course you should have completed College Algebra. You can convince yourself using one of these:

1. “I passed College Algebra, but only with a C. They probably mean that I should have made a B or C.”
2. “I passed College Algebra with a B. But I was really uncomfortable about it. They probably mean that I should feel really good about all the content in College Algebra.”
3. “I passed College Algebra with an A. But there were quite a few things I didn’t understand really really well. They probably mean that I should be really good with all of the stuff in College Algebra.”

See how you can talk yourself out of anything?

But there are no real prerequisites. All topics of math can be learned independently. Every topic can be learned before or after any other topic. And every topic can be used to support as well as be supported by any other topic. There is no order to this stuff. There is merely the order in which we learned it – one of a hundred bazillion ways that you could order it.

My little sister was interested in math in college.
I suggested she take Linear Algebra, a sophomore level class, in her first semester. The course catalog listed three semesters of calculus as the prerequisites. I told her that Linear Algebra had nothing at all to do with Calculus and she should ignore the prereq’s. She did. She finished her degree in her way – following her interests. (By the way, she’s currently the Business Administrator in that same math department!)

Prerequisites are bogus. Education and learning should be focused on what you’re excited about. It’s about following what the learner wants – and what he or she (or you) will engage with. If you, or your kids, don’t want to do it, then don’t. But if you do – then don’t let some nutty arbitrary prerequisite statement stop you! Or even slow you down.

Try it on this class…

The sweet and talented Keith Devlin is teaching an online course in Math Thinking soon that has a “Recommended background of High School Mathematics.” Unfortunately those words sound like, “The prerequisite for this is high school math.” The class is online and it’s free. If your teens are interested, encourage them to join. If you have a precocious pre-teen, see if he or she is curious. And if you have a GED or no high school math at all, jump in – if you want.

And the next time you’re faced with anything that looks like prerequisites, ignore them!

4 Responses to Why You Should Ignore the Prerequisites in Math Classes

1. This is the most ridiculous article I’ve ever read regarding math.
No, you don’t need 3 semesters of Calculus to take Linear Algebra, but that doesn’t equate to “prereqs mean nothing.” If you are planning to take Calculus, you better have taken Algebra or else you are pretty much screwed. You use 1 example to make this general statement. I take it your concentration wasn’t mathematical logic!

Reply: Thanks for your thoughts, Jay. I would beg to differ though. You can certainly take Calculus and get a lot from it without ever having Algebra. In fact, you could learn a great deal about Algebra by studying Calculus. You might mean that in order to do the work and get an A you would need the prerequisites. Which may be true in most cases. But you certainly don’t need the prerequisites to learn a lot and keep yourself fueled for the next thing. The myth is that to learn means you must do well in a class. Learning is thought to be equivalent to successfully doing the required work as it is prescribed. And the perpetuation of that myth is what holds many learners back.

2. Too many students have low expectations of themselves. You seem to be encouraging that, which I don’t agree with.

Reply: I’m not sure how this encourages low expectations, Jay. In fact, I think students haven’t been allowed to set their own expectations – which might be one of the reasons that we have prereq’s. If we could set our own expectations, as students, instead of having them imposed on us, we might be able to determine if we are indeed ready to take a class. Thanks for your thoughts!
Alice and Bob in Cipherspace
A new form of encryption allows you to compute with data you cannot read

Hard Problems

Gentry described his FHE system in his doctoral dissertation and in a paper at the Symposium on the Theory of Computing in 2009. In the three years since then, dozens of variations, elaborations and alternative schemes have been published, along with at least three attempts to implement homomorphic encryption in a working computer program. Most of the systems share the same overall architecture, with a somewhat homomorphic scheme that gets promoted to full homomorphism. Where the ideas differ is in the underlying cryptographic mechanism—the way that bits are twiddled and secrecy is achieved. Every cryptosystem is based on a problem that’s believed to be hard in general (so that Eve can’t solve it) but easy if you know a shortcut (so that Alice and Bob can decrypt messages efficiently). RSA’s hard problem is the factoring of large integers; the shortcut is knowledge of the factors. Gentry’s 2009 algorithm relies on a problem from the theory of integer lattices—sets of discrete points arranged like the atoms of a crystal in a high-dimensional space. Lattices give rise to an abundance of computationally difficult problems. For example, from a random position in space it is hard to find the closest lattice point unless you happen to know a specific set of coordinates that serve as a geometric guidebook to the lattice. In 2010 another homomorphic cryptosystem was invented by Marten van Dijk of MIT, Gentry, Shai Halevi of IBM and Vinod Vaikuntanathan, now at the University of Toronto. In this case the hard problem comes from number theory; it’s called approximate GCD. The exact GCD, or greatest common divisor, is easy to calculate; Euclid gave an efficient (and famous) algorithm. A “noisy” version of the problem seems to be much harder.
If two large numbers have the GCD p, and you alter those numbers by adding or subtracting small random quantities, it becomes difficult to find p. In the cryptosystem, p is the secret key.

A problem called learning with errors forms the basis of a third FHE system introduced by Zvika Brakerski of Stanford and Vaikuntanathan. Here the task is to solve a system of simultaneous equations where each equation has some small probability of being false. As with GCD, this is an easy problem in the exact case, where there are no errors, but searching for a subset of consistent equations is believed to be hard.

More recently, Brakerski, Vaikuntanathan and Gentry have developed a variant of the learning-with-errors system that takes a different approach to noise management. Instead of stopping the computation at intervals to re-encrypt the data, they incrementally adjust parameters of the system after every computational step in a way that prevents the noise level from ever approaching the threshold at which decryption fails.
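The approximate-GCD idea can be made concrete in a toy symmetric-key sketch (my own illustration with deliberately insecure parameters, not the published scheme's actual construction): a bit m is hidden as m plus even noise plus a multiple of a secret odd integer p, and both addition and multiplication of ciphertexts act on the hidden bits.

```python
import random

def keygen():
    # secret key: a large odd integer p
    return random.randrange(10**5, 10**6) | 1

def encrypt(p, m):
    # hide the bit m under even noise plus a random multiple of p
    noise = random.randrange(1, 50)
    q = random.randrange(10**3, 10**4)
    return m + 2 * noise + p * q

def decrypt(p, c):
    # reduce mod p to strip q*p, then mod 2 to strip the even noise
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0 + c1) == 1   # adding ciphertexts XORs the bits
assert decrypt(p, c0 * c1) == 0   # multiplying ciphertexts ANDs the bits
```

Each homomorphic operation grows the noise (roughly doubling it for addition, squaring it for multiplication), which is exactly why a somewhat homomorphic scheme must be promoted, by re-encryption or parameter adjustment, before the noise swamps p.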
{"url":"http://www.americanscientist.org/issues/pub/alice-and-bob-in-cipherspace/7","timestamp":"2014-04-21T10:28:18Z","content_type":null,"content_length":"128245","record_id":"<urn:uuid:836f4d8a-9f23-47e6-80fd-4b65bd35c4b5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
how to convert square root to a decimal

Author Message

jrienjeeh
Posted: Sunday 03rd of Mar 16:00
Welcome everybody out there, I'm hung up with a set of math exercises that I feel are truly difficult to solve. I'm taking a Pre Algebra course and need assistance with how to convert square root to a decimal. Do you have experience with any useful math helping product? To be frank, I'm a tiny bit leery about how useful these programs might be. Only I really do not understand how to solve these problems and thought it is worth an attempt.

Jahm Xjardx (From: Odense, Denmark, EU)
Posted: Tuesday 05th of Mar 09:09
Have you checked into Algebra Buster? This is an exceptionally helpful tool, and I have employed it several times to help me with my how to convert square root to a decimal homework. It's truly straightforward - you just need to enter the exercise and it will present to you a step by step solution that will help figure out your problem. Test it out and determine if it is useful.

Voumdaim of Obpnis (From: SF Bay Area, CA, USA)
Posted: Wednesday 06th of Mar 08:42
I'm a regular user of Algebra Buster and it has truly helped me understand math questions better by providing detailed steps for the solution. I recommend this online tool to aid you with your math stuff. You just need to adhere to the directions provided there.

SoS
Posted: Thursday 07th of Mar 08:51
Thanks with the help. How do I obtain this very application program?

Koem (From: Sweden)
Posted: Friday 08th of Mar 08:27
Click on http://www.algebra-online.com/why-algebra-buster.htm , you can purchase this tool. You will get an exceptional tool at a fair price. And if you're not happy with it, they’ll give you back your money, so it is utterly risk free.
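Setting the software plug aside, the underlying question has a one-line answer in most languages; a small Python sketch (my own addition, not from the thread):

```python
import math
from decimal import Decimal, getcontext

# a float gives roughly 15-16 significant digits
print(math.sqrt(2))        # 1.4142135623730951

# the decimal module gives a chosen number of digits
getcontext().prec = 30
print(Decimal(2).sqrt())
```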
{"url":"http://www.algebra-online.com/algebra-homework-1/how-to-convert-square-root-to.html","timestamp":"2014-04-18T01:03:44Z","content_type":null,"content_length":"27195","record_id":"<urn:uuid:11bb2cb4-9b15-4b4d-8cbb-90a562e9b561>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
descent for L-infinity algebras

under construction: so far these are notes taken in talks by Ezra Getzler

The notion of L-infinity algebra is something that naturally arises in deformation theory and in descent problems.

A dg-Lie algebra is a chain complex of vector spaces

$\cdots \to V^{-1} \stackrel{d}{\to} V^0 \stackrel{d}{\to} V^1 \stackrel{d}{\to} V^2 \to \cdots$

and is equipped with a bracket operation $[-,-] : V^i \otimes V^j \to V^{i + j}$ which is

• bilinear

• graded antisymmetric: $[x,y] = - (-1)^{deg(x) deg(y)} [y,x]$

• satisfies the graded Jacobi identity $[x,[y,z]] = [[x,y],z] + (-1)^{deg(x) deg(y)} [y,[x,z]]$

• is graded Leibniz: $d[x,y] = [d x,y] + (-1)^{deg(x)} [x, d y]$ (i.e. $d$ is a graded derivation).

Note: If $deg(x)$ is odd, then $[x,x]$ need not vanish. (See also super Lie algebra.)

Let $L$ be a dg-Lie algebra, degreewise finite dimensional (“of finite type” in the language of rational homotopy theory). We can form its Chevalley-Eilenberg algebra (see there for details) of cochains

$CE(L) = (\wedge^\bullet L[1]^*, d)$

(N.B. In full generality, read this as $CE(L) = ((\wedge^\bullet L[1])^*, \delta)$ and regard $\wedge^\bullet L[1]$ as a graded commutative coalgebra.)

The underlying graded algebra we may dually think of as functions on some space, a (so-called) formal graded manifold. The total differential is $\delta = \delta_1 + \delta_2$, where the first summand is the dual of $d$ and the second is the dual of the bracket, $[-,-]^*$, extended as a graded derivation. Being a derivation, dually we may think of $\delta$ as a vector field on our formal graded manifold. This is sometimes called an NQ-supermanifold.
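For concreteness (a standard example, spelled out here as an addition): if $L = \mathfrak{g}$ is an ordinary Lie algebra concentrated in degree 0, with structure constants $f^a{}_{bc}$ in a chosen basis, then on the degree-1 generators $c^a$ of $CE(\mathfrak{g})$ the differential is dual to the bracket,

```latex
d\, c^a = -\tfrac{1}{2}\, f^a{}_{bc}\, c^b \wedge c^c ,
\qquad\text{and}\qquad
d^2 = 0 \;\Longleftrightarrow\; \text{the Jacobi identity holds in } \mathfrak{g} .
```

so the nilpotency of the Chevalley-Eilenberg differential encodes precisely the Lie algebra axiom.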
An L-infinity algebra (see there) of finite type is the evident generalization of this, introduced by Jim Stasheff and Tom Lada (what kind of link is needed here??) in the early 1990s:

An L-infinity algebra is equivalent (in the degreewise finite dimensional case) to a free graded-commutative algebra equipped with a differential of degree +1.

Now the differential corresponds to a sequence of $n$-ary brackets. For $n = 1$ this is the differential on the complex, for $n = 2$ this is the binary bracket from above, and then there are the higher brackets.

Morphisms of $L_\infty$-algebras

One can consider two notions of morphisms: strict ones and general ones. A strict morphism is a linear map of the underlying vector spaces that strictly preserves all the brackets. A general definition of morphism is: in terms of the dual dg-algebras, just a morphism of these, going the opposite way. In the dual formulation this is due to Lada and Stasheff. We may also think of this as a morphism of NQ-supermanifolds. All this arose in this form probably most vividly in the BFV-BRST formalism? or in the BV-BRST formalism.

So in components such a morphism $f : L \to K$ of $L_\infty$-algebras consists of $n$-ary maps

$f_k : L^{i_1} \otimes \cdots \otimes L^{i_k} \to K^{i_1 + \cdots + i_k + 1 - k}$

(where the shift in the indices is due to the numbering convention used here only).

The homotopical category of $L_\infty$-algebras

We will now describe, on the category of $L_\infty$-algebras, the structure of a category of fibrant objects. The issue is that the category of $L_\infty$-algebras as defined above does not have all products and coproducts. But we can turn it into a category of fibrant objects.

A notion of category of fibrant objects

This is analogous to (in fact an example of the same general fact as) how Kan complexes inside all simplicial sets are the fibrant objects of the model structure on simplicial sets but do not form among themselves a model category but a category of fibrant objects.
See Kan complex for more details. We now look at the axioms for our category of fibrant objects. It is a slight variant of those in BrownAHT described at category of fibrant objects and draws a bit from work of Dwyer and Kan.

Let $C$ be a category. The axioms used here are the following.

1. There is a subcategory $W \subset C$ whose morphisms are called weak equivalences, such that this makes $C$ into a category with weak equivalences.

2. There is another subcategory $F \subset C$, whose morphisms are called fibrations (and those that are also in $W$ are called acyclic fibrations), such that

□ it contains all isomorphisms;

□ the pullback of a fibration is again a fibration;

□ the pullback of an acyclic fibration is again an acyclic fibration.

3. $C$ has all products and in particular a terminal object $*$.

Filtered $L_\infty$-algebras as a Getzler-category of fibrant objects

Write $\mathbb{L}$ for the category of filtered L-infinity algebras.

Let $L^\bullet$ be a graded vector space. A decreasing filtration on it is

$L = F^0 L \supset F^1 L \supset \cdots$

such that $L$ is the limit over this filtration,

$L \simeq \lim_{\leftarrow} L/F^i L$

i.e. if $(x_i \in F^i L)$ then $\sum_{i=0}^\infty x_i$ exists (something missing here).

The bracket $[-,- , \dots ]_k$ has filtration degree 0 if $k \gt 0$ and filtration degree 1 if $k = 0$. The differential $d x = [x]_1$ has the corresponding property, so $gr(d)$ is a true differential on $gr(L)$.

For morphisms we have components $f_k : L^{\otimes k} \to N$, where we take $f_k$ to have filtration degree 0 for $k \gt 0$ and filtration degree 1 for $k = 0$. So $gr(f_1)$ is a morphism of complexes from $gr(L)$ to $gr(N)$.

Definition A morphism $f$ is a weak equivalence if $gr(f)$ is a quasi-isomorphism of complexes. It is a fibration if $gr(f_1)$ is surjective.

Theorem This defines the structure of a (Getzler-version of a) category of fibrant objects as defined above.

Let $C$ be a Getzler-category of fibrant objects.
Define a new Getzler-category of fibrant objects $s C$ as follows:

• The objects of $s C$ are simplicial objects in $C$, subject to a condition stated in the following item.

• As in the Reedy model structure, for $X_\bullet$ a simplicial object let $M_k X_\bullet$ be the corresponding matching object, defined by the pullback diagram

$\array{ M_k X_\bullet &\to& (X_{k-1})^{k+1} \\ \downarrow && \downarrow \\ (X_{k-2})^{\left(k+1 \atop 2\right)}&\to& (M_{k+1} X_\bullet)^{k+1} } \,.$

Here the right vertical morphism is assumed to be a fibration, hence so is the left vertical morphism. So $M_k X_\bullet$ comes with a map $X_k \to M_k X_\bullet$. We assume that this is a fibration. This allows us to define $M_{k+1} X_\bullet$ and to continue the induction. So this defines a Reedy fibrant object.

So the objects of $s C$ are Reedy fibrant objects $X_\bullet$ and morphisms are morphisms of simplicial objects. The weak equivalences in $s C$ are taken to be the levelwise weak equivalences. The fibrations are taken to be the Reedy fibrations, as in the Reedy model structure, i.e. those morphisms $X_\bullet \to Y_\bullet$ such that $X_k \to Y_k \times_{M_k Y_\bullet} M_k X_\bullet$ is a fibration for all $k$. So this is just the full subcategory of the Reedy model structure on $[\Delta^{op}, C]$ on the fibrant objects.

There is still a fourth axiom for Getzler-categories of fibrant objects to be stated, which is the existence of path space objects. We take this to be given by a path space functor $P : C \to s C$ which is such that

1. for all $X$ the face maps of $(P X)_\bullet$ are weak equivalences;

2. $P$ preserves fibrations and acyclic fibrations;

3. $(P X)_0$ is naturally isomorphic to $X$.

If $C$ is the category of Kan complexes, then $P_k X = sSet(\Delta[k],X)$.
For our category $\mathcal{L}$ of filtered $L_\infty$-algebras we may set

$P_k L = L \otimes_{compl} \Omega^\bullet(\Delta^k) = \lim_{\leftarrow} L \otimes \Omega / F^i L \otimes \Omega \,,$

where $\otimes_{compl}$ denotes the completed tensor product, more commonly denoted $\hat \otimes$.

We may also speak of cofibrant objects in a (Getzler-)category of fibrant objects: those objects $X$ such that for all acyclic fibrations $f : A \to B$ the induced map $C(X,A) \to C(X,B)$ is surjective (i.e. those with the left lifting property against acyclic fibrations).

Maurer-Cartan elements

All the above is designed to make the following come out right. Generally, $C(*,X)$ is the set of points (global elements) of $X$. A morphism from the terminal object into an $L_\infty$-algebra is a Maurer-Cartan element in the $L_\infty$-algebra. Such a point is just an element of degree 1 and filtration degree 1 that satisfies the equation

$\sum_{k= 0}^{\infty} \frac{1}{k!} [\omega, \cdots, \omega]_k = 0 \,.$

In the case of dg-Lie algebras, this is just the familiar Maurer-Cartan equation

$d \omega + \frac{1}{2}[\omega, \omega] = 0 \,.$

We have that $C(*, P_k X)$ is a functor from $C$ to the category of Kan complexes. For the category of Kan complexes, it is the identity functor. For filtered $L_\infty$-algebras it gives

$L \mapsto MC_\bullet(L) = MC(L \otimes \Omega^\bullet(\Delta^\bullet))$

This functor $MC_\bullet$ takes fibrations to fibrations, acyclic fibrations to acyclic fibrations, and weak equivalences to weak equivalences.
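A quick sanity check (my addition, suppressing the filtration conditions): if $L$ is abelian, so that the 0-ary bracket and all brackets with $k \geq 2$ vanish, the Maurer-Cartan equation reduces to closedness,

```latex
MC_k(L) \;=\; MC\big(L \otimes \Omega^\bullet(\Delta^k)\big)
\;=\; \Big\{ \omega \in \big(L \otimes \Omega^\bullet(\Delta^k)\big)^1 \;\Big|\; d\,\omega = 0 \Big\} ,
```

the set of closed elements of total degree 1. In this abelian case the simplicial set $MC_\bullet(L)$ can be analyzed directly with elementary homological algebra.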
Other applications to sheaves of $L_\infty$-algebras

Evaluate on a Čech nerve to get a cosimplicial $L_\infty$-algebra $L^\bullet = (L^0 , L^1, \cdots)$,

$L^k = \prod_{i_0, \cdots, i_k} L(U_{i_0} \cap \cdots \cap U_{i_k})$

$Tot(L^\bullet) = \int_{k \in \Delta} L^k \otimes \Omega^\bullet(\Delta^k) \,.$

If $L$ is a dg-Lie algebra, then

$MC_1(L) = \left\{ \omega_0 + \omega_1 d t \;\middle|\; \omega_0 \in F^1 L^1 [t],\quad \omega_1 \in F^1 L^0 [t] , \quad d_L \omega_0 + [\omega_1, \omega_1] = 0, \quad d_{dR} \omega_0 + [\omega_1, \omega_0] = 0 \right\}$

Now define the Deligne groupoid as in Getzler’s integration article. We find inside the large Kan complex of MC elements a smaller one that is still equivalent:

$\gamma_1(L) = \left\{ \omega \in MC_1(L) \;\middle|\; \omega_1 \ \text{is constant} \right\}$

To get this, impose a gauge condition known from homological perturbation theory. A context is

$L \stackrel{\overset{f}{\to}}{\underset{g}{\leftarrow}} M$

$g \circ f = Id_L$

$f \circ g = Id - (d_M h + h d_M)$

$g \circ h = 0, \; h \circ f = 0, \; h \circ h = 0$

$MC(L) \simeq \{\omega \in MC(M) | h \omega = 0\}$

See Kuranishi’s article in the Annals to see where the motivation for all this comes from. (Jim Stasheff: citation please, and how much does all this refer to??)

Example Consider the Schouten Lie algebra

$L^k = \Gamma(X, \wedge^{k+1} T X)$

Then $MC(L)$ is the set of Poisson brackets to order $\mathcal{O}(\hbar)$. For, let $P \in MC(L)$. Then $\pi_1(MC_\bullet(L), P)$ is the quotient of locally Hamiltonian diffeomorphisms by Hamiltonian diffeomorphisms, $\pi_2(MC_\bullet(L), P)$ is the set of Casimir operators of $P$, and similarly for $k \gt 2$ there are the groups $\pi_k(MC_\bullet(L), P)$.

Descent for $L_\infty$-algebra valued sheaves

Associated to an L-infinity algebra $L$ is a Kan complex whose set of $k$-cells is the set of Maurer-Cartan elements on the $k$-simplex

$MC_k(L) = MC( L \otimes \Omega^\bullet(\Delta^k) ) \,.$

Now assume that we have a sheaf of L-∞ algebras over a topological space $X$. Let $\{U_\alpha \to X\}$ be an open cover of $X$.
On $k$-fold intersections we form

$L^k = \oplus_{\alpha_0,\cdots, \alpha_k} L(U_{\alpha_0, \cdots, \alpha_k}) \,.$

The problem of descent is to glue all this to a single $L_\infty$-algebra, given by the totalization end

$Tot(L^\bullet) = \int_k L^k \otimes \Omega^\bullet(\Delta^k)$

and check whether that is equivalent to the one assigned to $X$.

We now want to compare the $\infty$-stack of $L_\infty$-algebras and that of the “integration” to the Kan complexes of Maurer-Cartan elements. Notice that we have an evident map

$MC(\int_l L^l \otimes \Omega^\bullet(\Delta^l) \otimes \Omega^\bullet(\Delta^k) ) \to MC(\int_l L^l \otimes \Omega^\bullet(\Delta^l \times \Delta^k))$

Hinich shows in a special case that this is a homotopy equivalence. It is easy to prove it for abelian $L_\infty$-algebras.

Theorem (Getzler) This is indeed a homotopy equivalence.

Proof By E.G.’s own account he has “a terrible proof” but thinks a nicer one using induction should be possible.

Gauge fixing

Recall the notion of “context” from above, which is a collection of maps

$L \stackrel{\overset{f}{\to}}{\underset{g}{\leftarrow}} M \stackrel{h}{\to} M$

between filtered complex-like things (meaning?? more general, or complexes with additional structure??) satisfying some conditions. We can arrange this such that $h$ imposes a certain gauge condition on $L$, or something (I missed some details here):

$MC(M,h) := \left\{ \omega \in MC(M) | h \omega = 0 \right\}$

We have

$g : MC(M,h) \stackrel{\simeq}{\to} MC(L)$

$MC_\bullet(M,h) \stackrel{g \simeq}{\to} MC_\bullet(L) \stackrel{MC_\bullet(f) \simeq}{\to} MC_\bullet(M)$

Proof Along the lines of Kuranishi’s construction:

$h \left( [-]_0 + d_M \omega + \sum_{k = 1}^\infty \frac{1}{k!} [\omega, \cdots , \omega]^h \right) = 0 \,.$

So the big $\infty$-groupoid that drops out of the integration procedure is equivalent to the smaller one which is obtained from it by applying that gauge fixing condition.
It would be nice if in the definition of the MC complex we could replace differential forms on the $n$-simplex with just simplicial cochains on $\Delta[n]$:

$MC(L \otimes C^\bullet(\Delta^\bullet)) \,.$

This would make the construction even smaller. If $L$ is abelian, then this is the Eilenberg-MacLane space which features in the Dold-Kan correspondence. This is true if one takes care of some things; this is part of the above “terrible proof”. For one can prove, using the explicit Eilenberg-MacLane homotopies that prove the Eilenberg-Zilber theorem in terms of simplicial cochains, that we have an equivalence

$MC( L \otimes C^\bullet(\Delta[k] \otimes \Delta[l]) ) \to MC( L \otimes C^\bullet(\Delta[k] \times \Delta[l]))$

The discussion of the Deligne groupoid (the $\infty$-groupoid “integrating” an $L_\infty$-algebra) and of the gauge condition on the Maurer-Cartan elements is in Getzler’s integration article. A reference for the theorem above seems not to be available yet, but I’ll check.
{"url":"http://www.ncatlab.org/nlab/show/descent+for+L-infinity+algebras","timestamp":"2014-04-18T00:15:39Z","content_type":null,"content_length":"102118","record_id":"<urn:uuid:6c7633fc-ae4a-4dd9-a160-fde56b8fd0f6>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Work to pump water out of a tank - My Math Forum

February 15th, 2012, 06:32 — #2, by a Global Moderator (joined Jul 2010; from St. Augustine, FL, U.S.A.'s oldest city; math focus: the calculus)

Re: Work to pump water out of a tank

The length of the rectangle is the length of the tank, which is 10 ft. The width of the rectangle is the width of the parabola. Let y be the depth of the water; then the width of the rectangle is $w=2x=2\sqrt{y}$. The rectangular cross section must be pumped up (11 - y) ft. So, we have:

$dW=\left(62.5\ \tfrac{\text{lb}}{\text{ft}^3}\right)\left(10\ \text{ft}\right)\left(2\sqrt{y}\ \text{ft}\right)\left((11-y)\ \text{ft}\right)\,\left(dy\ \text{ft}\right)$

$W=1250\int_4^9 \left(11 y^{\frac{1}{2}}-y^{\frac{3}{2}}\right)\,dy\ \text{lb}\cdot\text{ft}$

You should get:

$W=\frac{206000}{3}\ \text{lb}\cdot\text{ft}$
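The definite integral can be verified with exact rational arithmetic (a checking sketch I added, not part of the forum post):

```python
from fractions import Fraction

# antiderivative of 11*y^(1/2) - y^(3/2) is (22/3)*y^(3/2) - (2/5)*y^(5/2);
# at y = 9: y^(3/2) = 27, y^(5/2) = 243; at y = 4: y^(3/2) = 8, y^(5/2) = 32
def antideriv(y32, y52):
    return Fraction(22, 3) * y32 - Fraction(2, 5) * y52

W = 1250 * (antideriv(27, 243) - antideriv(8, 32))
assert W == Fraction(206000, 3)   # matches the answer in the post
```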
{"url":"http://mymathforum.com/calculus/24874-work-pump-water-out-tank.html","timestamp":"2014-04-17T21:30:59Z","content_type":null,"content_length":"36379","record_id":"<urn:uuid:c905308e-c6ff-49b9-ab10-90d8f409b8ba>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculator Program Tablet Software

• Area unit converter and price calculator software gives fast and accurate results and changes area from one unit to any preferred unit. The free area conversion and cost calculator program is helpful for real estate measurement when dealing in the business.

• The PASzamolo is a free and multilateral calculator program. With it you can do involution, prime number searching, area and perimeter calculations, and more. You can also skin it.

• Quick Calculator is an easy to use calculator program for Windows. Its main advantage is the easy way in which you can enter and (re)edit calculations. It also provides many scientific functions, mathematical expressions, a currency calculator, ...

• Triple Integral Calculator Level 2 1.0.0.1 is designed as a useful and flexible calculator program that allows you to calculate definite triple integrals of real functions with three real variables. Numerical values are calculated with precision up ...

□ TripleIntegralCalculatorLevel2.exe

• This will be a basic calculator program I originally developed to help me grasp the concept of elasticity and how it affects supply and demand. This version will be a little more advanced than the one I made for myself.

□ Supply and Demand Elasticity Calculator

• Pitacalc is a command line interface calculator program. It works like a regular calculator, in that it calculates and keeps a running total, but the calculations are performed by entering instructions that the calculator processes. Pitacalc is available for download.

• Amebius is a handy calculator program designed with computer usability in mind, as opposed to just being a clone of your desktop calculator. Amebius can evaluate expressions written in the form a human is used to.
□ Win98, WinME, WinNT 4.x, WinXP, Windows2000, Windows2003, Windows Vista

• Numero is a calculator program that does the order of operations, so you can type in the whole expression at once instead of just one number at a time. It has about everything that you would expect to see on a scientific calculator. Also, in addition ...

• DebtRecalc Software is an accelerated debt payoff calculator program that analyzes your debt based on the principle of rolling over payments from prior paid-off debts and applying that payment to the current debt.

□ Download_Debt_ReCalc_Install_now.exe

□ WinXP, WinNT 4.x, WinNT 3.x, WinME, Win2003, Win2000, Win98

• XICalc is a calculator program with the following features: * Multiple precision integers with millions of digits * Uses Fast Hartley Transform to speed up long multiplies * Uses Binary Splitting to speed up computing factorials * Separate input and ...

□ Windows Vista, XP, 2000, 98, Me, NT

• XJCalc is a freeware calculator program with the following features: Can calculate with integer matrices and integer scalars; Multiple precision integers with millions of digits; Uses Fast Hartley Transform to speed up long multiplies; Uses Binary ...

□ Windows Vista, XP, 2000, 98, Me, NT

• XMCalc is a freeware calculator program with the following features: Can calculate with real matrices, real scalars and quaternions; Extra precision complex numbers to millions of decimal places; Uses Fast Hartley Transform to speed up long multiplies ...

□ Windows Vista, 2003, XP, 2000, 98, Me, NT

Related: Calculator Program Tablet - Tablet Calculator Application - Program For Tablet - Notes Program Tablet - Optimization Calculator Program
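The "Binary Splitting" feature mentioned in the XICalc listing refers to a standard big-integer trick: compute n! as a balanced product tree so that multiplications involve operands of similar size. A generic sketch of the idea (my illustration, not XICalc's actual code):

```python
import math

def prod_range(lo, hi):
    # product of the integers in (lo, hi], splitting the range in half
    if hi - lo == 1:
        return hi
    mid = (lo + hi) // 2
    return prod_range(lo, mid) * prod_range(mid, hi)

def factorial(n):
    return 1 if n < 2 else prod_range(1, n)

assert factorial(100) == math.factorial(100)
```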
{"url":"http://www.winsite.com/calculator/calculator+program+tablet/","timestamp":"2014-04-19T18:13:19Z","content_type":null,"content_length":"30155","record_id":"<urn:uuid:3c61c460-65d6-49b6-b9e3-30f86d0e175e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
November 2 I am really trying to beef up my Area, Surface Area and Volume unit for Geometry this year. It gets the job done regents-exam-wise, but it is so dissatisfying and I feel it could be so much better. Overall it basically boils down to plugging things into formula-sheet-provided formulas, and isolating variables in formula-sheet-provided formulas. There are some good things in there... we find composite areas using aerial and other images, for example. Finding the areas of regular polygons is a good application of right triangle trig. There is an investigation of how areas change when dimensions change, which is serviceable but I suspect kids don't really see the big picture. We "do" volumes and surface areas of prisms, pyramids, cylinders, cones, and spheres. My students tend to do very well on questions from this unit on the Regents exam, and I don't want to mess that up, but in this case I don't believe that the exam is valid for measuring understanding. These are the kinds of things I want them to understand and/or be able to do: • what physical property you are actually calculating when you calculate a volume or a surface area • why the formulas are what they are • how changing a 2D or 3D figure's dimensions affects its area or volume. for example, I think they understand that if you order a pizza that has twice the diameter, you get way more than double the amount of pizza. But I don't think that intuition has any ties to math class. • isolate a variable in a formula. for example, solve S = lw + wh + lh for w. I have a bunch of great resources and problems and tasks that I have collected in my Evernote over the past few years that could potentially work very nicely here. 1. Design a new label for a given tennis ball canister, oatmeal canister, or soda can. (a) Create a prototype label so that it covers the entire lateral surface of the canister with little to no overlapping paper. (b) Congratulations! 
The company chose your design and wants to produce 100,000 labels. Calculate how much material (paper, aluminum, whatever) you will need to order.

2. This game at NLVM is quite nice for challenging your intuition about how volumes are related to dimensions.

3. This video features people with charming accents complaining about how the volume of their chocolate bar decreased even though it appears that the surface area stayed the same or possibly increased. I've shown this in the past and found that students are unable to articulate what these people are upset about using the word "volume" (much less intelligently discuss surface area.) The word "volume" from math class is not connected in their brains to "how much stuff inside."

4. Starting with a piece of copier paper, roll it into a cylinder both the long way and the short way. Will it contain the same amount either way? If not, which way holds more? Mathematically justify your response.

5. Starting with a sheet of copier paper, cut four congruent squares out of the corners and fold up the sides to make a box. Who can make the box that holds the most? Kristen Fouss did something like this but in pre-calculus. Geometry probably doesn't need to get into deriving and optimizing a polynomial equation.
Are there any rectangles whose perimeter = area? If you know the surface area and volume of a rectangular prism, can you determine its dimensions? Are there any rectangular prisms whose volume = surface area?) Derive the formula for the volume of a sphere without calculus . From Exeter Book 3. Would pose quite a challenge for my students. They would not be able to do it on their own. In fact, as it is written, it would completely mystify them. What I am struggling with and probably will be for the next week or so is, how do I take any of these things and fit them into a logical, coherent unit of study of surface area and volume? NY/my district/my school does not provide us with a curriculum. We have : a list of standards, a collection of previous exams, a pacing calendar, and a kind-of crappy textbook, which are all useful in their own limited ways, but none of them tells you what to do in class. I have lessons already written that get it done, so there is no incentive to bother, other than it bothers me when I feel I could be doing a better job. Part of the dilemma is, I feel that any of this would have to be added to what I already do, not replace it. I still need them to be able to, for example, identify that the bases of a prism are the parallel sides, even if they are not on the top and the bottom. And I'm already about two weeks behind in this course. How do you take a compelling resource and turn it into an effective lesson? I heard of a lovely activity for pseudo-discovering the Intermediate Value Theorem at a recent compulsory workshop for the calculus course I teach. It has everything i like in a thing. I do not have a record of the name of the teacher who presented it (and even if I did, I don't know if he wants to be famous on the Internet) so if you are he, please email me if you want credit. 
(begin basic text of student handout/activity:) The intermediate value theorem states: If a function y = f(x) is continuous on a closed interval [a,b], then f(x) takes on every value between f(a) and f(b). Think about what you remember of conditional statements (from your geometry course:) 1) State the hypothesis of the IVT. 2) State the conclusion of the IVT. 3) In the following, be sure to use the endpoints (a, f(a)) and (b, f(b)). A. Sketch a diagram where both the hypothesis and the conclusion hold true. B. Sketch a diagram where the hypothesis is false, but the conclusion is true. C. Sketch a diagram where the hypothesis and the conclusion are false. D. Sketch a diagram where the hypothesis is true, but the conclusion is false. (on to the back of the page) 4) Which one is impossible to do? Explain why. 5) Compare your diagrams with a partner. How are they similar? Different? If they are different, are they both valid? 6) Is any real number exactly 1 less than its cube? A. Create a function whose roots satisfy the equation. B. Find f(1) and f(2). How do you know there is a point (c, 0)? What do you know about c? This recent Geometry lesson is a good example of setting the kids in pursuit of a problem, where they have to learn the thing you want them to learn anyway in the process. (That wasn't that eloquent, sorry, I will illustrate.) On Tuesday, we developed the rule for the sum of the angles in a polygon by the chopping-into-triangles technique that many of you are probably familiar with. The next day I wanted them to be able to find the degree measure of one angle in any regular polygon, so I set them this task, which I stole from a PCMI problem set: I did not include that first question when I did this in class, and many students stumbled over restricting their search to regular polygons. So I added it after the fact for next time I give this There are lots of these triplets to find, so all the kids met with some success pretty quickly. 
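Question 6 sets up a textbook IVT application: f(x) = x^3 - x - 1 is continuous with f(1) = -1 and f(2) = 5, so a root c lies in (1, 2). A bisection sketch (my addition; the code is illustrative, not part of the handout) locates it numerically:

```python
def f(x):
    # a root of f is a number that is exactly 1 less than its cube
    return x**3 - x - 1

def bisect(f, a, b, tol=1e-10):
    # IVT hypothesis: f continuous on [a, b] with a sign change
    assert f(a) * f(b) < 0
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

c = bisect(f, 1, 2)
print(round(c, 6))   # 1.324718
```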
It is also a little like finding a pearl in an oyster, so they were rewarded and motivated to keep looking. Regular polygons are hard to draw, so with a little reminding and prodding, they started to find the degree measure of one angle in a regular pentagon, hexagon, octagon, etc (the whole, covert point of the activity, anyway! Yay!) I had them add their finds to a whiteboard everyone could see as they were discovered. They also wanted to verify by using the Smartboard to render regular polygons perfectly, and fit them together like puzzle pieces, which I was happy to allow them to do. This was actually a pretty great class - some kids conjecturing likely candidates, some kids armed with calculators cranking out angle measures, some kids organizing all their finds, some kids going up to the smartboard in groups of two or three for visual/spatial verification. And when I assessed them the next day, no one had any trouble understanding the question or coming up with correct angle measures. This problem is a keeper. "What is 1 Radian?" Try it. Dare ya. They'll do a little better with: "What is 1 Degree?" I made some final tweaks to Completing the Square in Algebra 2, and I find it just amazing the difference between this year and previous years, in that so much more often now, I just know what to do. It doesn't feel like I changed all that much, but the kids just get it. I don't think it's a difference in delivery or anything. Here are the important bits. First, I took two days instead of one. Go to hell, pacing calendar. The first day is just to see the pattern and get the idea with easy easy problems. a = 1 and b is even. The second day we work with a != 1 and odd values of b (fractions. eep. but the kids are even dealing with fractions okay.) Tee it up: why would we want to do this? It saves us time. Look for patterns. The kids fill out this whole table all on their own. I don't say a thing. 
I convince them to try and focus by telling them that if they really get how this table works, their lives will be a million times easier for the next six months. It's an exaggeration, but you need them to engage here. The bottom three rows were new this year. Hardly any students needed an assist with the * rows. I was surprised. The important part - the mathematics - was the ** row. Again I was surprised that they mostly worked this out on their own. For some kids, I had to point at numbers and say "Look at the 10, the 25, and the 5. Look at the 14, the 49, and the 7. How are those related? How can you write that relationship but use b?"

Once we're all on board with the table, we put the pattern together with "the genius method" from before to solve a simple quadratic in standard form. And that is basically that. We practice a bunch of easy ones. The next day, we come back and practice a bunch of really hard ones. Here are the smartboard files: Day 1, Day 2.

I just find it stunning that you can plan out a lesson 95% correctly and it will miss most of your kids. And you can change one little thing - add three rows to a table - and now all the kids basically get completing the square, think it's easy, prefer it to other methods of solving quadratics, and tell you why they don't get why this is such a big deal. I feel a little like I have super
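The pattern the table is driving at - the 10/25/5 and 14/49/7 rows - can be stated compactly; here is a quick numeric check (my own sketch, not from the lesson files):

```python
# Completing the square for x^2 + bx + c = 0 (the a = 1, day-one case):
# x^2 + bx = (x + b/2)^2 - (b/2)^2, so x = -b/2 +/- sqrt((b/2)^2 - c).
import math

def solve_by_completing_square(b, c):
    half = b / 2
    discriminant = half**2 - c
    root = math.sqrt(discriminant)        # assumes real solutions exist
    return (-half + root, -half - root)

# The rows from the table: x^2 + 10x + 25 and x^2 + 14x + 49 are
# perfect squares, so each has a double root at -b/2.
print(solve_by_completing_square(10, 25))  # (-5.0, -5.0)
print(solve_by_completing_square(14, 49))  # (-7.0, -7.0)
```

The b, (b/2)^2, b/2 relationship between those three columns is exactly what the ** row asks the kids to articulate.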
Fair Value Data * Sat. Nov. 10, 2007

Each stock analysis contains a link to a detailed analytical PDF. In this PDF is a section titled Fair Value Data (located in the top left section of the PDF). This section provides metrics to help you determine if the investment is trading at a premium, at a discount, or if it is fairly priced. Below is a description of each item in the Fair Value Data section from page 2 of the detailed PDF:

Closing Price: Recent closing price. The Closing Price is as of the date shown in the Fair Value Data title. A Star is added if the closing price is less than the "Fair Value Buy Price".

Avg. High Yield Price: Price calculated by dividing the current dividend per share by the average high dividend yield for each of the last 5 years (dividend per share divided by the year's low share price). For example, say a stock has a 5-year average yield of 2.5% and its current annual dividend is $1.00 per share; then the calculated fair value is $40.00 per share ($1.00 / .025). If the closing price is less than $40.00, then the stock is selling at a discount based on the Avg. High Yield Price.

20-Year DCF Price: Price calculated by taking the Net Present Value (NPV) of the next 20 years of dividends and the estimated value of the stock at the end of 20 years, including the assumptions used for the calculation. The value of any investment can be estimated using a discounted cash flow (DCF) model, and that is what the 20-Year DCF Price is based on. The historical inputs to this model are: annual earnings per share (EPS), annual dividend per share and price-earnings (P/E) ratio. In addition, the following future assumptions are entered into the model: discount rate, EPS growth rate and dividend growth rate. My model defaults to the following values based on historical data. EPS growth rate: the minimum of the historical 5- or 10-year growth rate; dividend growth rate: as described in my earlier post, Dividend Analytical Data. My target discount rate is 15%.
The model assumes the stock is sold at the end of 20 years. The assumptions used for any given stock analysis are shown in the 20-Year DCF Price section on page 2, along with the calculated net present value (NPV). Needless to say, this is the most complicated fair value calculation of those presented, and these two paragraphs can't begin to do it justice.

Avg. P/E Price: Price calculated by multiplying the EPS (trailing twelve months) times the minimum of: 1.) the 5-year average of high and low P/Es or 2.) last year's high P/E. For example, if the TTM EPS for a company was $3.80 and it had a P/E of 12, then the calculated fair value is $45.60 per share ($3.80 x 12). If the closing price is less than $45.60, then the stock is selling at a discount based on the Avg. P/E Price.

Graham Number: Price calculated by taking the square root of 22.5 times the tangible book value per share times EPS (the lower of trailing twelve months or the average of the last 3 years). Benjamin Graham, Warren Buffett's mentor and the father of value investing, developed rules for defensively screening stocks. This formula uses his principles to calculate the "maximum" price one should pay for the stock. He believed, as a rule of thumb, the product of the P/E ratio and price-to-book should not be more than 22.5 (P/E ratio of 15 x price-to-book value of 1.5). The 15 P/E was a result of Graham wanting his portfolio to have a yield equal to that of an AA bond (back then around 7.5%). The inverse of this yield is 1 divided by 7.5%; that works out to 13.3, which he rounded up to 15. For example, if the TTM EPS for a company was $6.80 and it had a tangible book value per share of $12.50, then the calculated fair value is $43.73 per share (square root[$6.80 x 22.5 x 12.50]). If the closing price is less than $43.73, then the stock is selling at a discount based on the Graham Number. Since the Graham Number tends to be the most conservative value, the stock is awarded a fair value Star if it is trading below it.
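The arithmetic in the two worked examples above can be sketched directly (a reader's illustration using the post's own numbers, not the author's actual spreadsheet):

```python
import math

def avg_pe_price(ttm_eps, pe):
    """Avg. P/E Price: trailing-twelve-month EPS times the chosen P/E
    (the lower of the 5-year average P/E or last year's high P/E)."""
    return ttm_eps * pe

def graham_number(ttm_eps, tangible_book_value_per_share):
    """Graham Number: sqrt(22.5 x tangible book value per share x EPS)."""
    return math.sqrt(22.5 * tangible_book_value_per_share * ttm_eps)

# The examples from the post:
print(round(avg_pe_price(3.80, 12), 2))      # 45.6
print(round(graham_number(6.80, 12.50), 2))  # 43.73
```

Both match the per-share figures quoted in the text ($45.60 and $43.73).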
Mid-2 Price: Of the four fair value calculations - "Avg. High Yield Price", "20-Year DCF Price", "Avg. P/E Price" and "Graham Number" - the highest and lowest fair values are excluded and the remaining two calculations are averaged to calculate the Mid-2 price.

NPV MMA Price: The price at which the NPV MMA value equals the NPV MMA target. The basis of the NPV MMA value calculation is a hypothetical $1,000 investment in the subject stock and in a Money Market Account (MMA) earning a 20-year average rate (I use a 20-year Treasury as a proxy). The value calculated is the net present value (NPV) of the difference between the dividend earnings of this investment and the interest income from the MMA over 20 years. Other assumptions include: 1.) dividends grow at a historically calculated rate, 2.) dividends are reinvested, 3.) share price appreciation is not considered, 4.) interest income is reinvested in the MMA. The NPV MMA target is determined based on the number of consecutive years of dividend increases. The formula is: Target = Base - (Years x Increment) + Minimum, where Base = 3,000, Increment = 100, Minimum = 500. Thus 0 years of dividend growth yields a $3,500 target and 30 years of growth yields a $500 target.

Fair Value Buy Price: Historically, I have conservatively taken the lower of the NPV MMA Price or the Mid-2 Price as the stock's fair value. This made sense when the markets were down. However, as the market recovered and companies' histories came to include some very lean times, the pendulum swung to the other extreme, where very few companies were trading below my conservative calculation of fair value (in most cases driven by the Mid-2 value). I have added to my model the ability to calibrate the Fair Value calculation based on where we are within the market cycle. Below are the various options:

Option: 1 = The lower of the Mid-2 price or the NPV MMA price.
Option: 2 = The lesser of the Mid-2 price or the NPV MMA price + the lower of a 10% increase or 25% of the difference between the Mid-2 price and the NPV MMA price.

Option: 3 = Same as Option 2, except + the lower of a 20% increase or 50% of the difference.

Option: 4 = Same as Option 2, except + the lower of a 30% increase or 75% of the difference.

Option: 5 = The higher of the Mid-2 price or the NPV MMA price.

Option: 6 = Weighted: 25% Mid-2 price + 75% NPV MMA price.

The option used is disclosed on the back of the analytical PDF. For more information, see "Seven Dividend Stocks Trading Below Fair Value".

I have been reading your blog for the last 2 - 3 months. I am a pretty new investor and the information presented is awesome. It gives an insight on what other investors do and gives me a great starting point for my research. Thanks

Guppy: Thank you for reading my blog and your kind words! Best Wishes,

How do you calculate Tangible Book Value? Is it just Book Value with goodwill subtracted from total assets?

Anon: I actually pull Tangible Book Value from an S&P report. It is calculated by taking total assets less all intangibles (including goodwill). Best Wishes,

Hi, I used to use Bloomberg to get my raw data. Now that I do not have a job, no Bloomberg anymore. Where can I get the information? Thanks

Sam: My primary sources are Morningstar, Yahoo Finance and S&P. The first two are freely available on the internet. Best Wishes,

If you take the Avg. HIGH Yield Price as an indication of fair value, why don't you consider the LOW P/E price (average low P/E over 5 yrs * TTM EPS) rather than an AVERAGE P/E price which involves the "5-year average of high and low P/E" times TTM EPS? This would be more conservative...

I believe in the formula and the theory except for one thing. Should book value be adjusted for fair value?
For instance, if a company bought a building or a large tract of timber land 40 years ago, we could assume those values increased, so that would understate the book value. Have you considered this or found any good sources that discuss this?

Determining fair value is messy. When the company I work for does a large acquisition, we hire a third party to help us peg the fair value of assets. Generally, it takes about a year, and our valuation people end up "debating" the fair values with our auditor's valuation people. Leaving it at book value is not only easier, but it is also more conservative. Best Wishes,
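For reference, the Mid-2 averaging and the NPV MMA target formula described above can be written out as a short sketch of the stated rules (not the author's actual model; flooring the target at the Minimum for streaks longer than 30 years is my assumption):

```python
def mid2_price(prices):
    """Average the middle two of the four fair-value estimates:
    drop the highest and the lowest, average what remains."""
    s = sorted(prices)
    return (s[1] + s[2]) / 2

def npv_mma_target(years, base=3000, increment=100, minimum=500):
    """Target = Base - (Years x Increment) + Minimum, per the post.
    Flooring at Minimum beyond 30 years is an assumption, not stated."""
    return max(minimum, base - years * increment + minimum)

# Mid-2 with four hypothetical fair values (the $38.20 figure is made up):
print(mid2_price([40.00, 43.73, 45.60, 38.20]))  # averages 40.00 and 43.73

print(npv_mma_target(0))   # 3500, matching the post's 0-year example
print(npv_mma_target(30))  # 500, matching the 30-year example
```

Both endpoint checks reproduce the $3,500 and $500 targets quoted in the text.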
Cheese cutting

This problem comes from our sister site NRICH, which is packed with mathematical problems, games and articles for all ages. The problem is similar to the kind girls will be tackling at the European Girls' Mathematical Olympiad, which will take place in Cambridge next year.

I have a cube of cheese and cut it into pieces using straight cuts from a very sharp cheese wire. In between cuts I do not move the pieces from the original cube shape. For example, with just one cut I will obviously get two smaller pieces of cheese, with two cuts I can get up to 4 pieces of cheese and with three cuts I can get up to 8 pieces of cheese, as shown in the picture:

Suppose I now make a fourth cut. How many individual pieces of cheese can I make? Suppose now that I am allowed more generally to cut the block N times. Can you say anything about the maximum or minimum number of pieces of cheese that you will be able to create?

Although you will not be able to determine the theoretical maximum number of pieces of cheese for N cuts, you can always create a systematic cutting system which will generate a pre-determined number of pieces (for example, making N parallel cuts will always result in N+1 pieces of cheese). Investigate developing better cutting algorithms which will provide larger numbers of pieces. Using your algorithm, what is the largest number of pieces of cheese you can make for 10, 50 and 100 cuts?

Here is a hint
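One systematic strategy of the kind the problem invites (a sketch of one possible algorithm, not necessarily the best): split the N cuts as evenly as possible among the three perpendicular directions of the cube, so that a, b and c cuts in the three directions give (a+1)(b+1)(c+1) pieces:

```python
def pieces_three_directions(n):
    """Split n cuts as evenly as possible among three perpendicular
    directions; a, b, c cuts give (a+1)*(b+1)*(c+1) pieces."""
    q, r = divmod(n, 3)
    counts = [q + 1 if i < r else q for i in range(3)]
    return (counts[0] + 1) * (counts[1] + 1) * (counts[2] + 1)

for n in (10, 50, 100):
    print(n, pieces_three_directions(n))
# 10 cuts -> 80, 50 cuts -> 5508, 100 cuts -> 40460
```

For comparison, a known upper bound on the number of pieces from n planar cuts of a convex solid is the "cake number" C(n,0)+C(n,1)+C(n,2)+C(n,3), which grows cubically, like this strategy, but with a larger leading coefficient.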
Method and a System for Estimating a Symbol Time Error in a Broadband Transmission System

Patent application title: Method and a System for Estimating a Symbol Time Error in a Broadband Transmission System
Inventors: Volker Aue (Dresden, DE)
Assignees: NXP B.V.
IPC8 Class: AH04B1700FI
USPC Class: 375224
Class name: Pulse or digital communications testing
Publication date: 2008-10-30
Patent application number: 20080267273

The invention relates to a method and a system for estimating a symbol time error in a broadband transmission system, comprising: determining a time error signal of an output-signal of a discrete Fourier-transformation block (5) in a data symbol stream on the basis of intersymbol correlation using a predetermined period in each received symbol; selecting as a predetermined period the last samples of a useful data part of an actual symbol and a preceding symbol after the discrete Fourier-transformation; determining the time error value (ε) based on the intersymbol interference of the selected samples of the actual symbol and the preceding symbol.

1. Method for estimating a symbol time error in a broadband transmission system, comprising: determining a timing error signal of an output-signal of a discrete Fourier-transformation block in a data symbol stream on the basis of intersymbol correlation using a predetermined period in each received symbol; selecting as a predetermined period a number of samples of a useful data part of an actual symbol and a preceding symbol; determining the time error value based on the intersymbol interference of the selected samples of the actual symbol and the preceding symbol.

2. Method according to claim 1, comprising: using, instead of the preceding symbol, a succeeding symbol.
3. Method according to claim 1, wherein the determination of the time error value comprises: copying the selected samples as a cyclic extension preceding or succeeding an interval of the output-signal for each symbol at a transmitter after an inverse Fourier-transformation for the cyclic extension.

4. Method according to claim 1, wherein: the number of the selected samples on which the discrete Fourier-transformation is performed equals the length of the discrete Fourier-transformation.

5. Method according to claim 3, wherein after performing the discrete Fourier-transformation, the selected number of output samples of said Fourier-transformation is further subject to: shifting the output samples of the relevant symbol by a predetermined number of N samples to the left or right; providing a predetermined phase vector to the shifted output samples of one of the symbols; element-wise complex-conjugate multiplication of the phase-modified output samples of the relevant symbol with the samples of a buffered symbol; calculation of an averaged sum-signal of the element-wise complex-conjugate multiplied samples; multiplication of the sum-signal with a phase rotating constant to map the time error value (ε) to a real or imaginary axis.

6. Method according to claim 5, comprising: providing the predetermined phase over two branches.

7. Method according to claim 5, comprising: shifting the sample to one of the two branches with one of the following shifting values: -2, -1, 1, 2.

8. Method according to claim 5, wherein: the phase rotating constant depends on a predetermined period of one of the symbols, especially on a guard interval duration, on the amount of the number of sample shifts, and on a phase of an integer multiple of π/2.

9. Method according to claim 8, wherein: the phase of the integer multiple has a value of π·|N|·Tg/Tu or π·|N|·s/FFTSize, where s denotes the number of cyclic shifts in samples prior to calculating the DFT and FFTSize denotes the DFT/FFT input and output vector size.
10. Method according to claim 1, comprising: determining a number of individual time error values by using different sample shift factors.

11. Method according to claim 10, comprising: determining a combined time error value by adding the individual time error values.

12. Method according to claim 1, comprising: using the determined time error value to adjust the timing.

13. Method according to claim 1, comprising: using the determined time error value to adjust, especially to advance or retard, the fast Fourier-transformation selection window.

14. Method according to claim 1, comprising: using the determined time error value to increase or decrease a sample conversion rate in a sample rate converter.

15. Method according to claim 1, comprising: using the determined time error values to increase or decrease a sample rate in an analog-digital converter.

16. System for estimating a symbol time error in a broadband transmission system, which receives a data symbol stream from a transmitter, comprising: a symbol time error estimator for determining a timing error signal of an output-signal of a discrete Fourier-transformation block in the data symbol stream on the basis of intersymbol correlation using a predetermined period in each received symbol, wherein a number of samples of a useful data part of an actual symbol and a preceding symbol are selected as a predetermined period and the time error value is determined on the basis of the intersymbol interference of the selected samples of the actual symbol and the preceding symbol.

17. A system according to claim 16, comprising: a unit for selecting a number of samples of the output samples of the discrete Fourier-transformation block.

18. A system according to claim 17, comprising: a buffer for storing the selected samples of one of the symbols.
19. A system according to claim 16, comprising: a unit for shifting the samples of the relevant symbol by a predetermined number of N samples to the left or right; a unit for providing a predetermined phase vector to the shifted samples of one of the symbols; a unit for element-wise complex-conjugate multiplication of the phase-modified samples of the relevant symbol with the samples of a buffered symbol; a unit for calculation of an averaged sum-signal of the element-wise complex-conjugate multiplied samples; and a unit for multiplication of the sum-signal with a phase rotating constant to map the time error value to a real or imaginary axis.

20. System according to claim 16, wherein: the time error value of the time error estimator is used to adjust the timing.

21. System according to claim 16, wherein: the time error value of the time error estimator is used to adjust, especially to advance or retard, the fast Fourier-transformation selection window.

22. System according to claim 16, wherein: the time error value of the time error estimator is used to increase or decrease a sample conversion rate in a sample rate converter.

23. System according to claim 16, wherein: the time error value of the time error estimator is used to increase or decrease a sample rate in an analog-digital converter.

24. System according to claim 16, wherein the time error value is averaged by means of an FIR or IIR loop filter and an output of the loop filter is used to adjust the timing.

The invention relates to a method and a system for symbol time error estimation in a broadband transmission system. The invention is preferably used in data transmission systems employing orthogonal frequency division multiplexing (OFDM), in particular in wireless applications for digital video broadcasting (DVB, e.g. DVB-H, DVB-T), but can also be used for other transmission modes, such as ISDB-T, DAB, WiBro and WiMax. DVB-H and DVB-T are known standards for bringing digital television content, for instance, to mobile devices.
Such orthogonal frequency division multiplexing systems are very sensitive to intersymbol interference (ISI), which is caused by the loss of orthogonality of the symbols. The invention relates to the compensation of the intersymbol interference by estimating a symbol time error.

The orthogonal frequency division multiplexing mode is a mode which converts a stream of symbols in a frame into parallel data of a block unit and then multiplexes the parallel symbols onto different sub-carrier frequencies. The multi-carrier multiplex has the property that all carriers are orthogonal to one another with respect to a certain length, typically a power of 2, such that a fast Fourier-transformation can be used. The OFDM mode is implemented with the discrete Fourier-transformation (DFT) at a receiver and the inverse discrete Fourier-transformation (IDFT) at a transmitter, which follows simply from the orthogonality property and the definition of the discrete Fourier-transformation.

In broadband transmission systems, a guard interval is formed by a cyclic extension preceding the output of the inverse discrete Fourier-transformation for each OFDM symbol. FIG. 1 shows the conventional structure of an OFDM symbol that is protected by a guard interval. The guard interval is formed by a cyclic prefix, i.e. a copy of the last samples of the so-called useful part precedes the useful part. If there is no multipath, the receiver can select a window that is the size of the useful part anywhere within this symbol, as shown in FIG. 2.

The guard interval protects the useful data carrying part from multipath distortion and, if chosen sufficiently long, allows for single frequency networks (SFN). In an SFN, multiple transmitters transmit the same signal synchronously such that at a receiver those signals can be treated as multipath signals.
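The cyclic-prefix construction described above can be sketched in a few lines (an illustrative toy model, not the patent's implementation; numpy is assumed, and the sizes are arbitrary):

```python
import numpy as np

def add_cyclic_prefix(freq_symbols, guard_len):
    """Transmitter side: IDFT of the carrier symbols, then copy the last
    guard_len samples of the useful part in front of it (cyclic prefix)."""
    useful = np.fft.ifft(freq_symbols)
    return np.concatenate([useful[-guard_len:], useful])

# Toy sizes; DVB-T uses e.g. 2048 or 8192 carriers with Tg = Tu/4 ... Tu/32.
fft_size, guard = 8, 2
symbol = add_cyclic_prefix(np.arange(fft_size, dtype=complex), guard)

assert len(symbol) == fft_size + guard
assert np.allclose(symbol[:guard], symbol[-guard:])  # prefix equals the tail
```

Because the prefix duplicates the tail of the useful part, any length-fft_size window taken inside the symbol is a cyclic shift of the useful part, which is what allows the receiver window placement shown in FIG. 2.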
In multipath propagation environments, a transmitted signal reaches the receiver through multiple paths, each of which may introduce a different delay, magnitude and phase, thereby enlarging the transition time from one symbol to the next. If the transition time is smaller than the guard interval, the receiver can select a portion of the received symbol that is free from any interference introduced by adjacent symbols. Identifying the useful part, i.e. the part of an OFDM symbol that contains minimum interference from adjacent symbols (intersymbol interference), is a time synchronization task to be performed by the receiver. This task is critical to the overall receiver performance.

Time synchronization can be grouped into two main categories: acquisition and tracking. Symbol time acquisition defines the task of initially finding the correct timing. Often, the symbol time acquisition is divided into two or more steps, where in the first step, coarse time synchronization is achieved. In the following steps, the time window is refined. For those successive steps, similar or identical algorithms to those used for tracking are often applied. Tracking defines the task of constantly adjusting the time window in the course of continuous reception to keep the time window in its optimum location.

For OFDM, many efforts have been made for time tracking. The known methods can be grouped into data assisted and non-data assisted tracking, and pre-FFT or post-FFT based tracking. Data assisted tracking makes use of known symbols in OFDM, e.g. pilot symbols or preambles, whereas non-data assisted tracking makes use of the correlation properties of the signal. In DVB-T, which is aimed at continuous reception, the standard does not define any preambles. Pilot symbols are included in the multiplex, where the standard defines so-called scattered pilots at every 12th carrier, and a smaller number of continual pilots that are present at fixed carrier locations.
The conventional insertion of the scattered pilots, which are boosted in power, is described in FIG. 11 on page 27 of European Telecommunication Standards Institute ETSI EN 300 744 V1.4.1 (2001-01). Those pilot symbols are only accessible after the DFT and only after some coarse time synchronization has already been established. Therefore, most time synchronization algorithms for DVB-T/H use the auto-correlation properties of the OFDM symbols with their cyclic extension for coarse symbol time estimation, and then rely on the pilots for fine time synchronization and tracking.

In DVB-T the guard interval can be selected to be 1/4, 1/8, 1/16, or 1/32 of the FFT (or DFT) size. In large scale single frequency networks (SFNs), even a guard interval of 1/4 of the FFT size can almost be fully used by multipath. In some cases, it has been found that the delay spread even exceeds the guard interval. With pilots at every 12th carrier, a channel impulse response of a time span of only 1/12 of the FFT length can be estimated, which is clearly not sufficient for guard intervals equal to or greater than 1/8. For reliable time synchronization for guard intervals equal to 1/8 of the FFT size or longer, it is therefore necessary to collect pilots from successive symbols in the same or similar fashion as is done for estimating the channel transfer function that is needed for the frequency domain equalizer.

Two basic approaches for post-FFT based time synchronization are known, both using an estimate of the channel transfer function: The first one calculates the average phase difference from one scattered pilot to the next, thereby estimating the mean slope of the channel transfer function. This is based on the property of the FFT that a delay in time domain corresponds to a phase proportional to the carrier index and proportional to the delay in time domain. Therefore, in single path channels, the time delay, which is denoted as τ in FIG. 2, can be directly estimated from the slope.
Unfortunately, this technique does not perform satisfactorily under heavy multipath conditions. The more rigorous approach is to transfer the estimated channel transfer function back into the time domain by means of an IFFT to obtain an estimate of the channel impulse response. Afterwards, an energy search is performed on the estimated channel impulse response. Another known approach is based on the continual pilots only. A known alternative to post-FFT based time synchronization is to further improve the time domain correlation based method typically used for coarse time synchronization.

As discussed above, time tracking is crucial for the overall system performance. In DVB-T/H, the lack of preambles that could help accurately estimate the channel impulse response makes it difficult to find the optimum time window. Some pre-FFT time domain based time tracking techniques that make use of the auto-correlation properties have been found to require relatively long averaging times to yield adequate results. Another disadvantage is that after the signal has been acquired, those types of calculations are not required elsewhere in the receiver. Additionally, the performance under heavy multipath is not always satisfactory.

The post-FFT based methods introduced above also have disadvantages. As said above, the simple method using the estimate of the mean value of the slope of the channel transfer function, albeit giving satisfactory results in channels with low delay spread, has been found not to give adequate results under heavy multipath conditions as can be experienced in SFNs. Experiments have shown that this method does not withstand tests for guard interval utilization in single frequency networks. The most robust technique up to now seems to be the IFFT based method, which calculates the channel impulse response from the estimated channel transfer function. This method, however, is also the most computationally intensive method and requires additional memory.
The problem that needs to be overcome when using this type of algorithm is the cyclic wrapping of the channel impulse response after 1/3 of the FFT length, which is due to the scattered pilot spacing at every third carrier when multiple symbols are collected. The cyclic wrapping may make it difficult to identify the beginning and end of the channel impulse response. Identifying the impulse response is also difficult in noisy environments, when the energy of the impulse response is spread over a large time interval.

DVB-H, designed for mobile reception, imposes additional challenges on the symbol time synchronization algorithms: (1) In a mobile environment, the coherence time of the channel is lower, i.e. the channel is more time-varying. (2) DVB-H makes use of time slicing. In time slicing, data are transmitted in bursts, allowing the receiver to be switched off between bursts. This feature, which allows the receiver to save a great deal of power consumption, however, also means that the channel cannot be tracked between bursts. Consequently, the time tracking algorithms for DVB-H must be substantially faster than for DVB-T.

To illustrate those challenges, the following example of a two-path model as used in a test case is considered. FIG. 3 shows the magnitude of the impulse responses of the conventional two-path model at two timing instances, t1 and t2, respectively. The two paths are separated by 0.9 times the guard interval duration Tg. At time instant t1, the second path is not really visible, as it is faded. In the real world, the first path may originate from one transmitter, and the second from another transmitter. Both transmitters synchronously transmit the same signal on the same frequency (SFN). At time instant t1, the second path is not visible, as it can be blocked by an obstacle (shadow fading) or the path is actually a superposition of multiple paths that at time instance t1 add destructively (fast fading).
A receiver locking to a received signal that experienced this channel at time instance t1 only sees the first path, and may just center this path in the middle of the guard interval. If the receiver is synchronizing to the signal to receive time-sliced bursts, it essentially has no history on the channel to rely on. When after a relatively short time, e.g. a few tens of milliseconds, the second path occurs, the receiver has to quickly readjust the symbol timing and place both paths into the guard interval such that no intersymbol interference occurs in the useful part. Likewise, it is also possible that at time instance t1, the first path was subject to fading, and the receiver initially locked onto the second path. This example shows that the symbol time tracking requirements for DVB-H are much more stringent than for continuous reception, especially in stationary or quasi-stationary environments.

For DVB-T, it has often been argued that the computational load of the IFFT based method can be reduced, since the symbol time tracking can be done at a lower rate, and thus an IFFT does not have to be computed for every received symbol. In the context of mobile DVB-H, i.e. rapidly time varying channels and fast reacquisition times to reduce on-times and therefore power consumption, this assumption does not hold.

It is an object of this invention to specify a new method and a system for estimating a symbol time error for avoiding intersymbol interference. According to the invention, the problem is solved by a method for estimating a symbol time error in a broadband transmission system comprising the attributes given in claim 1 and by a system for estimating a symbol time error in a broadband transmission system comprising the attributes given in claim 16. Advantageous embodiments are given in the dependent claims.
The key aspect of the invention is the determination of a time error signal of an output-signal of a discrete Fourier-transformation block in the data symbol stream on the basis of intersymbol correlation using a predetermined period in each received symbol. A number of samples of the output of the DFT or FFT of an actual symbol and a preceding symbol are selected as a predetermined period. A time error value is determined on the basis of the intersymbol interference of the selected samples of the actual symbol and the preceding symbol. Instead of the preceding symbol, a succeeding symbol can be used.
Accordingly, the present invention provides a robust scheme to rapidly acquire and continuously track the timing of OFDM symbols. In a preferred embodiment, the determined time error value of the time error estimator is used to adjust, especially to advance or retard, the fast Fourier-transformation selection window, or to increase or decrease a sample conversion rate in case a sample rate converter is used, or to increase or decrease a sample rate in case an analog-digital converter is used. In other words: the present invention is a new non-data assisted method for time tracking the symbols. The symbol time error estimator and the method for estimating the symbol time error are based on the frequency domain. Based on a new non-data assisted criterion, the invention is concerned with symbol time synchronization for data modulated OFDM signals that use a cyclic prefix (or suffix) to protect the symbols from intersymbol interference. Since almost all OFDM systems make use of this scheme, and the criterion is non-data assisted, the invention is applicable to a wide range of OFDM based systems. The invention works for OFDM with arbitrary FFT lengths (large FFT sizes yield less noisy estimates) and most practical guard intervals (at least from 1/32 to 1/2). The invention makes use of a novel criterion that yields an absolute value proportional to the occurring intersymbol interference, and a sign indicating the direction in which to adjust the timing. This way, the receiver can adjust its timing such that the intersymbol interference of the received symbols is reduced to its minimum. The error estimate itself is unbiased. The invention delivers an error signal for the symbol timing that can be used in a conventional tracking loop to adjust the time window to select the optimum sample vectors for the demodulator.
The performance of the invention combined with the conventional tracking loop is expected to be equivalent to, if not exceed, the performance of the IFFT based channel impulse response estimation method. The criterion yields good results in single path and multipath environments including SFN, even if the delay between paths exceeds the guard interval duration. It also yields good results when the impulse response is spread out over a long duration inside the guard interval. The error signal is derived from the output of the FFT and takes into account the FFT output of either the preceding or succeeding symbol. Thus, the invention is solely post-FFT based. The computational complexity and memory requirements are comparable to the simple slope estimation method. An additional IFFT, as most commonly used today, is not needed. The invented time tracking algorithm maps well onto standard digital signal processors. Different implementation variants exist, such that the tracking loop can be adapted to the implementation and performance needs of the application. Furthermore, it is possible to combine those implementation variants to increase the performance even further. If parameters are chosen correctly, the tracking range of the invented tracking loop is half of the FFT size in samples (equivalent to a duration Tu/2) to the left or right of the guard interval. Within a range of a quarter of the FFT size in samples (equivalent to a duration of Tu/4), the mean error signal derived by the time error estimator is almost proportional to the actual time error, making the time estimator ideal for conventional tracking loop implementations. Depending on the equalizer implementation (not subject of this invention), a compensation of the mean slope of the channel transfer function may be needed. Compensation of the slope can be done by multiplication in the frequency domain with a vector that has a linearly increasing or decreasing phase, or by cyclically shifting the input vector of the FFT.
With the inclusion of a correction factor, the invention can cope with FFT outputs for which the FFT input has been cyclically shifted. Thus, the invention also fits well into receiver structures that make use of the cyclic FFT input vector shift technique. FIG. 4 shows a block diagram of a preferred embodiment of a receiver for a broadband transmission system, FIGS. 5 and 6 show block diagrams of different preferred embodiments of a symbol time error estimator of a receiver, FIG. 7 shows a block diagram of an embodiment of a suitable loop filter for a time tracking DLL, FIG. 8 shows a diagram with an example of an S-curve for a single path with a guard interval of 1/4, FFT size 2 k, SNR 10 dB, and FIG. 9 shows a diagram with an example of an S-curve for a two ray path with a guard interval of 1/4, FFT size 2 k, SNR 10 dB, where the first path has zero delay and the second a delay of 0.9 times the guard interval duration. For a detailed description of how to use the invention, at first a typical DVB-T/H receiver is considered. FIG. 4 shows the block diagram of a typical DVB-T/H receiver 1. For simplicity, the circuitry for pre-FFT based acquisition is not shown. The digital IQ input IN that is provided by the analog front-end, an analog-to-digital converter (ADC), and additional digital filter circuitry is further frequency error corrected, often by controlling a digital frequency shifter in a frequency error correction unit 2. The corrected signal is then fed through a sample-rate-converter 3 (SRC) that can correct for a sampling frequency offset between the transmitter and the receiver ADC(s). The sample-rate-converter 3 may optionally include additional decimation and low-pass filtering. After correction of frequency and sample frequency clock offsets, for each symbol, a unit 4 for window selection and removing the guard interval Tg is used. In more detail, a vector of FFT size samples is selected. On this vector, the FFT is performed in an FFT unit 5.
Depending on the receiver implementation, residual common phase error (CPE) needs to be removed. Typically, the continuous pilots are extracted from the multiplex in a unit 6 and are used for estimating the common phase error in a unit 7, from which an adequate estimate is obtained. This estimate is then used to correct the common phase error at the output of the FFT unit 5 in a CPE correction unit 8. The estimated common phase error can further be used for tracking any residual frequency offset in a frequency tracking circuit 9 to control the frequency error correction block 2. For successive processing, the impairments added by the channel must be removed from the CPE corrected symbol by means of an equalizer 10. An estimate of the channel transfer function (CTF) is obtained from a channel estimator 11 by using the scattered pilots extracted from the multiplex in a scattered pilot extraction unit 12. Typically, the channel estimate is obtained by means of interpolating the channel from the scattered-pilot based estimates in the time and frequency domains. The corrected OFDM symbol and the estimated channel transfer function are then transferred to the outer receiver 13. The outer receiver 13 then performs symbol demapping, symbol and bit deinterleaving, depuncturing, convolutional decoding (typically by means of a Viterbi processor), outer (Forney) deinterleaving, Reed-Solomon decoding, and finally derandomizing (descrambling) to deliver an MPEG transport stream (MPEG-TS). Therefore, the outer receiver 13 comprises a plurality of conventional functional blocks or units 13.1 to 13.7. The proposed time tracking algorithm as described in this invention disclosure uses the output of the FFT unit 5 (this configuration is not shown) or of the CPE correction unit 8, as shown in FIG. 4, which connects with a symbol time error estimator 14 that controls, for symbol timing, the window selection unit 4 or the sample-rate-converter 3.
This is in contrast to other known techniques that use the scattered pilots or an estimate of the channel transfer function CTF. FIG. 5 shows the block diagram of a possible implementation of the proposed time tracking algorithm, where the invented symbol time error estimator 14 is emphasized. The symbol time error estimator 14 takes the output samples of the FFT unit 5. For best performance, it is suggested that the CPE corrected output is fed into the symbol time error estimator 14. For time error estimation, only the output samples of the FFT that contain carriers are useful. For clarity, a block 14.1 that selects those carriers is shown in the block diagram. In order to reduce computational complexity, it is also possible to select only a subset of those carriers. Selecting only a subset of carriers, however, comes at the expense of a noisier error estimate ε, requiring a smaller loop filter bandwidth for similar time jitter. Whether this can be tolerated depends on the required tracking convergence time. The set of carriers or the subset should be in sequential order. In the depicted implementation, in a unit 14.2 the selected output samples are shifted by a fixed number of N samples to either the left or right. In a further functional unit 14.3, the shifted output samples are element-wise complex-conjugate multiplied with a complex phasor vector, where the elements of this phase vector are of the kind exp(jφ_k). The absolute value of the slope of this phase vector, i.e. the difference between φ_k and φ_(k-1), is 2πTg/Tu, where Tg/Tu is the ratio of the guard interval duration Tg and the length of the useful part Tu. The sign of the slope, i.e. whether the slope is positive or negative, depends on the direction in which the aforementioned sample shift is applied, and on whether a cyclic prefix or suffix is used.
Then, in a functional unit 14.4, the output vector of the multiplication with the phasor vector (the phase modified samples) is element-wise complex-conjugate multiplied with the selected FFT output samples of a preceding symbol. Therefore, the selected samples of the preceding symbol are stored in a buffer unit 14.5. In a different embodiment of the invention, the shifting of the selected FFT output samples is applied after the phasor multiplication, see FIG. 6. In a different embodiment of the invention, the phasor multiplication is applied to the output of the buffered carriers, e.g. of the preceding symbol or of a succeeding symbol, instead of the carriers of the current symbol. Another embodiment shifts the buffered symbol, if applicable after the phasor multiplication, by either N carrier samples to the left or right. Both alternative embodiments are not shown. Yet another variant is to distribute the phasor multiplication over both branches, and/or apply a shift to one of the two branches. Practical values for the sample shifts are -2, -1, 1, and 2; other values are possible, although the performance then typically degrades in both tracking range and noise level of the error. From the output of the element-wise multiplication of the FFT samples of the current and the buffered (preceding or succeeding) symbol, the averaged sum is calculated in a sum unit 14.6. This operation is often referred to as "integrate and dump". The output of this operation is multiplied with another phase rotating constant α of the type α=exp(jφ) in a unit 14.7. To map the symbol time error value ε to either the real or imaginary axis, a mapping unit 14.8 follows. If the symbol time error value ε is mapped to the real axis, the real part of this complex multiplication is given out as the symbol time error value ε, as shown in FIG. 6. If the symbol time error value ε is rotated to the imaginary axis, the imaginary part of this multiplication is given out as the symbol time error value ε.
Typically, the phase φ of α is the sum of a phase that is dependent on the guard interval Tg and the amount of sample shifts N applied to either of the two branches, and a phase of an integer multiple of π/2 to rotate the signal to either the real or imaginary axis and adjust the sign to the demands of the succeeding loop filter(s). The absolute value of the first phase is 2π|N|Tg/Tu and accommodates the length of the guard interval Tg with respect to the useful part of the OFDM symbol, as well as the sample shift difference between the two branches formed by the samples of the current symbol and the delayed (buffered) symbol. In another embodiment, α is included in the phasor vector. This way, only the real or imaginary part needs to be computed by the multiplication of the two branches that include the samples of the previous and the current symbol. Thus, half of the real multiplications can be saved, and the averaged sum only needs to be computed over one part, either the real or the imaginary part, respectively. In case the receiver implementation demands a cyclic shift of the FFT, another cyclic shift factor dependent phase can be added or subtracted (depending on the variant of the invention used) to make the tracking loop immune to any number of cyclic shifts applied prior to the FFT. This phase is 2πNs/FFTSIZE, where s denotes the number of cyclic shifts in samples, and FFTSIZE is the FFT input and output vector size. Yet another embodiment of this invention foresees a combination of different variants of the disclosed time error estimator to reduce the noise in the error signal. This combination consists of multiple parallel variants of the implementation shown in FIG. 6 that each individually estimate the symbol time error value ε, but e.g. use different shift factors. A combined error estimate is then obtained by adding the estimates provided by the individual symbol time error values ε of the time error estimator 14.
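As a small sketch, the phase φ of the constant α described above might be assembled like this (the additive sign conventions are implementation-dependent assumptions, and the function name is mine):

```python
import math

def alpha_phase(n_shift, tg_over_tu, quarter_turns=0, cyclic_shift=0, fft_size=2048):
    """Phase of the rotating constant alpha: a term of magnitude
    2*pi*|N|*Tg/Tu, an integer multiple of pi/2 to land on the real or
    imaginary axis, and an optional correction 2*pi*N*s/FFTSIZE when the
    FFT input vector has been cyclically shifted by s samples."""
    phase = 2 * math.pi * abs(n_shift) * tg_over_tu   # guard-interval/shift term
    phase += quarter_turns * math.pi / 2              # rotate onto real/imag axis
    phase += 2 * math.pi * n_shift * cyclic_shift / fft_size  # cyclic shift correction
    return phase
```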
For closing the tracking loop, the symbol time error value ε is fed into a loop filter that performs additional averaging to reduce the noise of the symbol time error value ε. The design of a tracking loop is straightforward once a suitable time error estimator 14, e.g. the one disclosed in this document, has been found. A suitable first order loop filter 15 can be the one depicted in FIG. 7. In FIG. 7, the symbol time error value ε from the time error estimator 14 is first multiplied with an integration constant K in a multiplication block 15.1. This constant determines the loop filter bandwidth. The product is accumulated in the successive integration circuit 15.2 with a sum block 15.3 and a delay block 15.4, followed by a quantizer 15.5 with a sum block 15.6. In detail, the output signal of the multiplication block 15.1 is added to the sum of all previously accumulated values, enabled by the one value delay element denoted as z^-1. The accumulated value is also given to the quantizer 15.5, whose output range contains the zero value. If the sum reaches or exceeds one or more integer numbers, the integer number is given out as a retard/advance signal to the guard interval/time window control block 4 to advance or retard by an integer number of samples on the incoming sample stream in the sample rate converter 3. At the same time, the integer value is subtracted from the accumulated value in the loop filter 15. In a similar fashion, typically using a second or higher order loop filter, the time drift can be estimated. The time drift estimate can then be used to adjust the sample rate conversion factor at the sample rate converter 3. In the remainder of this section, the performance of the time error estimator 14 is illustrated. FIG. 8 shows the almost perfect S-curve obtained from simulations for two consecutive OFDM symbols, where the channel is a single path channel. The FFT size is 2048 samples, and the guard interval is 1/4. White Gaussian noise has been added with an SNR of 10 dB.
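The first order loop filter of FIG. 7 described above might be sketched as follows (class and method names are mine; truncating the accumulator toward zero is one possible choice for the quantizer):

```python
class FirstOrderLoopFilter:
    """Scale the error by the integration constant K (block 15.1),
    accumulate it (15.2-15.4), and whenever the accumulator passes a
    whole number of samples, emit that integer as an advance/retard
    command (quantizer 15.5) and subtract it back out (sum block 15.6)."""

    def __init__(self, gain):
        self.gain = gain   # integration constant K, sets the loop bandwidth
        self.acc = 0.0     # accumulator of the integration circuit

    def update(self, error):
        self.acc += self.gain * error
        step = int(self.acc)   # integer part; zero while inside (-1, 1)
        self.acc -= step
        return step            # advance/retard in whole samples
```

Each `update` call consumes one symbol time error value ε and returns the integer number of samples by which to advance or retard the window.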
The combined estimator is used that applies a positive and a negative shift by one FFT sample to the output of the previous symbol. The S-curve has been obtained by simulating the symbol time error value ε for time offsets τ, where τ is defined in samples, and here τ=0 means the FFT is calculated on the first samples of an OFDM symbol, i.e. the cyclic prefix is fully included in the input vector to the FFT. The S-curve shows that for offsets of τ from 0 to 511, the symbol time error value ε is essentially zero. For the single path channel, no intersymbol interference occurs for this range, and therefore there is no need for adjusting the FFT window. For negative τ, the receiver 1 experiences intersymbol interference from the previous symbol. The symbol time error value ε becomes negative, telling the tracking circuitry to retard on the received sample stream. For τ exceeding the guard interval duration, the receiver 1 experiences intersymbol interference from the succeeding symbol. In this case, the symbol time error value ε becomes positive, telling the tracking circuitry to advance on the received sample stream. Another example of the performance of the symbol time error estimator 14 is shown in FIG. 9. Here, a test channel with two paths of equal strength and phase with a separation of 0.9 times the guard interval is used. The SNR again has been set to 10 dB. The S-curve in FIG. 9 differs from the one in FIG. 8 in that the range for which the symbol time error value ε is close to zero is substantially reduced. The channel causes OFDM symbols to overlap with the guard interval Tg of the adjacent symbols. The range for which no intersymbol interference occurs is now limited to the range of τ greater than 460 and less than or equal to 512 samples offset from the beginning of the OFDM symbol. The S-curve shown in FIG. 9 clearly shows that the time error estimator 14 is using the correct criterion.
Again, for a symbol time error value ε being negative, which happens for τ less than 460, the receiver 1 needs to retard on the received sample stream, and for τ larger than 511, the symbol time error value ε becomes positive, telling the receiver 1 to advance on the received sample stream. As discussed above, although proposed in the context of DVB-T/H, the invention is not limited to DVB-T/H only, but is applicable to a wide range of OFDM systems including DAB, ISDB-T, DMB-T, and possibly others, e.g. in ADSL/VDSL or the upcoming WiBro and WiMax standards.

LIST OF NUMERALS [0086]
1 receiver
2 frequency error correction unit
3 sample rate converter
4 window selection and guard interval removing unit
5 FFT unit
6 pilot extraction unit
7 common phase error estimator
8 common phase error correction unit
9 frequency tracking unit
10 equalizer
11 channel estimator
12 scattered pilot extraction unit
13 outer receiver
13.1 to 13.7 functional blocks of the outer receiver
14 symbol time error estimator
14.1 sample selection block
14.2 samples shifter block
14.3 phase vector multiplication block
14.4 sample symbols multiplication block
14.5 buffer unit
14.6 sum block
14.7 constant multiplication block
14.8 mapping block
15 loop filter
15.1 multiplication block
15.2 successive integration block
15.3 sum block
15.4 delay block
15.5 quantizer
15.6 sum block

Patent applications by Volker Aue, Dresden DE; NXP B.V.
Word problem: Kristin spent $131 on shirts. Fancy shirts cost $28 and plain shirts cost $15. If she bought a total of 7, how many of each kind did she buy?
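The question reduces to a small 2×2 linear system (fancy + plain = 7 and 28·fancy + 15·plain = 131), which a brute-force check solves (the function name is mine):

```python
def shirt_counts(total_cost=131, fancy_price=28, plain_price=15, total_shirts=7):
    """Try every split of total_shirts into fancy/plain and keep the
    splits matching the total cost."""
    return [(f, total_shirts - f) for f in range(total_shirts + 1)
            if fancy_price * f + plain_price * (total_shirts - f) == total_cost]

print(shirt_counts())  # [(2, 5)]: 2 fancy shirts and 5 plain shirts
```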
Maximum Majority Voting

Improving the Roots of Democracy through Election Reform

A. Problem

Voter participation and majority rule are often considered the heart of democracy. However, the most common form of single-winner voting in the United States — "one person, one vote" (technically known as Plurality or First Past the Post) — implicitly assumes there are only two candidates. When there are more than two candidates, not only is there a risk that no candidate will get an absolute majority (versus a plurality), but voters also face a dilemma between voting 'strategically' (for the lesser of two evils) and voting 'sincerely' (for whom they feel is the 'best' candidate). This dilemma tends to promote a two-party system, which despite its many merits is vulnerable to systemic bias (as evidenced by low voter turnout, due to a perception that neither party offers a meaningful choice). The end result is that candidates lack a true majoritarian mandate, due both to low voter turnout and to the possibility of a split vote.

B. Solution

To address these problems, we recommend an alternative election system we call Maximum Majority Voting, or MMV[1]. Maximum Majority Voting is based on the latest research into election reform, but is still designed to be as simple as possible to use and understand. Each voter simply fills out a ranked ballot, listing the candidates in order of preference. The winner is the candidate with the largest majority voting for them over other candidates. By specifying a complete set of preferences, you never have to worry about "wasting your vote"; e.g., if your most-preferred candidate doesn't win, your vote still helps your second choice to beat your third (or worse) choice. Thus, you can still vote your conscience without giving up the ability to influence the election.

C. Features

While there are several forms of ranked voting[2], Maximum Majority Voting is one of the best at reducing the need for strategic voting[3].
The reason you generally don't need to vote strategically under MMV is that each election is broken down into a series of one-on-one matchups, like a round-robin tournament. For each pairing of candidates A and B, MMV compares the number of voters who prefer A over B (often written "A > B") to those who prefer B over A ("B > A"). Each voter's list of preferences is interpreted in terms of these pairwise contests, so a vote of "A > B > C" implies A > B, A > C, and B > C. Even if the top two candidates are B and C, it still doesn't hurt for me to vote for A first, since my vote for B > C counts just as much as my vote for A > B. Thus, all of each voter's preferences are used to determine the final ranking, rather than treating higher or lower preferences separately.

D. Benefits

In addition to reducing the need for strategic voting, MMV provides a number of other benefits:

• MMV ensures that the winning candidate is preferred by a majority of the voters over any other given alternative, no matter how many candidates are running.
• The MMV system allows voters to fully express their preferences among all the available candidates.
• MMV also tends to discourage mudslinging in multi-candidate elections, since there is an incentive to have the other candidate's supporters vote for you as second or third place.
• MMV allows primary losers, third parties, and other non-traditional candidates to run without fear of becoming spoilers, increasing the range of meaningful choices available to voters.

Thus, in contrast to traditional Plurality voting, MMV actually becomes more effective — rather than more polarized — with more candidates and greater citizen involvement. In place of a splintered and disenfranchised electorate, it can actually help us find and elect candidates who reflect our underlying shared values.

E. Process

The formal definition of Maximum Majority Voting involves six phases: 1.
Voting: Each voter votes for all the candidates they like, indicating order of preference if any. Consider an example in a five-candidate election, where a voter likes A most, B next, and C even less, but doesn't care at all for D or E. In that case, their ballot would be "A > B > C > D = E"; this can be shortened to "A > B > C", since unranked candidates are considered to be at the bottom and equivalent. Similarly, if another voter liked E most, considered D and C tied for second, and ranked B over A, their ballot would be "E > D = C > B > A" (with the last "> A" being optional, since it is redundant).

2. Preferences: Each ballot defines preferences based on one cycle of pairwise matchups between each of the candidates. Thus, the ballot "A > B > C" would be interpreted as:

A > B, A > C, A > D, A > E
B > C, B > D, B > E
C > D, C > E

while "E > D = C > B > A" would be:

E > D, E > C, E > B, E > A
D > B, D > A
C > B, C > A
B > A

3. Counting: When all the ballots are counted, this gives a final score for each matchup[4]. Say we have nine voters, who voted as follows:

4: A > B > C
3: E > D = C > B
2: C > A > D

This gives:

6/3: A > B
4/5: A > C
6/3: A > D
6/3: A > E
4/5: B > C
4/5: B > D
4/3: B > E
6/0: C > D
6/3: C > E
2/3: D > E

Note how some matchups add up to less than 9, because some candidates were given equal preference by some voters. Also, the total number of matchups for N candidates is always N * (N-1)/2, or 10 for N = 5.

4. Sorting: Next, the matchups are sorted from the largest win to the smallest. If two matchups have the same number of winning votes, the one with the largest margin (weakest loser) is listed first. It is also possible for two matchups to have the exact same score; while such "same-size majorities" are extremely unlikely in public elections, they could well occur in small committees. The matchups above would thus be reordered to give:

6/0: C > D
6/3: A > B
6/3: A > D
6/3: A > E
6/3: C > E
5/4: C > A
5/4: C > B
5/4: D > B
4/3: B > E
3/2: E > D
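A sketch of these phases in code, continuing through the locking and winner-selection steps of the process (function names and the ballot representation are mine; the same-size-majority rule and exact-tie handling are omitted for brevity):

```python
from collections import Counter
from itertools import combinations

def pairwise_tally(ballots, candidates):
    """Phases 2-3: count, for each ordered pair, how many voters prefer
    the first candidate over the second. A ballot is (count, rank groups),
    e.g. (3, [['E'], ['D', 'C'], ['B']]); unranked candidates are treated
    as tied at the bottom."""
    wins = Counter()
    for count, groups in ballots:
        rank = {c: i for i, g in enumerate(groups) for c in g}
        bottom = len(groups)
        for a, b in combinations(candidates, 2):
            ra, rb = rank.get(a, bottom), rank.get(b, bottom)
            if ra < rb:
                wins[(a, b)] += count
            elif rb < ra:
                wins[(b, a)] += count
    return wins

def sorted_matchups(wins):
    """Phase 4: keep each pair's winning direction, sorted by winning
    votes, then by margin, both descending."""
    pairs = [(w, w - wins[(b, a)], a, b) for (a, b), w in wins.items()
             if w > wins[(b, a)]]
    pairs.sort(key=lambda t: (-t[0], -t[1]))
    return [(a, b) for _, _, a, b in pairs]

def lock_pairs(matchups, candidates):
    """Phase 5: lock matchups in from the top, superseding any matchup
    that contradicts the order already established."""
    beats = {c: set() for c in candidates}

    def above(a, b):  # is a already transitively ranked above b?
        stack, seen = [a], set()
        while stack:
            c = stack.pop()
            if c == b:
                return True
            if c not in seen:
                seen.add(c)
                stack.extend(beats[c])
        return False

    locked = []
    for w, l in matchups:
        if not above(l, w):   # otherwise it conflicts and is ignored
            beats[w].add(l)
            locked.append((w, l))
    return locked

def final_ranking(locked, candidates):
    """Phase 6: repeatedly take the candidate beaten by no one remaining."""
    remaining, order = set(candidates), []
    while remaining:
        top = next(c for c in sorted(remaining)
                   if not any(w in remaining and l == c for w, l in locked))
        order.append(top)
        remaining.remove(top)
    return order

ballots = [(4, [['A'], ['B'], ['C']]),       # 4: A > B > C
           (3, [['E'], ['D', 'C'], ['B']]),  # 3: E > D = C > B
           (2, [['C'], ['A'], ['D']])]       # 2: C > A > D
wins = pairwise_tally(ballots, 'ABCDE')
locked = lock_pairs(sorted_matchups(wins), 'ABCDE')
print(final_ranking(locked, 'ABCDE'))
```

On this example, the smallest matchup, E > D, ends up superseded, because the larger locked majorities D > B and B > E already imply D > E.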
5. Candidates ordered: This list of matchups is used to rank the candidates, starting from the largest win on down. The order is important, because in rare cases[5] a later matchup may conflict with an earlier one:

(i) If a matchup later in the list conflicts with the previously-determined order, the latter matchup is superseded (ignored).
(ii) In the even unlikelier case where several matchups with same-size majorities conflict with each other, all such conflicting matchups are ignored (though any non-conflicting matchups of that size are still included)[6].

Stepping through the ordered list of matchups above, we find (using [X,Y] for unordered candidates):

1: C > D (C > D)
2: A > B, C > D (A > B)
3: A > B, [A,C] > D (A > D)
4: A > [B,E], [A,C] > D (A > E)
5: A > B, [A,C] > [D,E] (C > E)
6: C > A > [B,D,E] (C > A + A > B => C > B)
7: C > A > [B,D,E] (C > B already assumed)
8: C > A > [D > B, E] (D > B)
9: C > A > D > B > E (B > E; together with D > B this implies D > E)
10: C > A > D > B > E (E > D conflicts with the established D > E, so it is superseded)

6. Winner selection: Based on the above, we see that MMV typically generates a strict ranking of candidates in the order they were preferred by the most voters (unless all the pairwise matchups for multiple candidates are precisely identical). The winner is thus the top candidate, i.e. the one with the Maximum Majority — here, C. In the extraordinary case that there really is a complete tie among all the top candidates, the winner would need to be chosen by some external mechanism[7].

F. Conclusions

Since Maximum Majority Voting uses all the information available, and weights larger majorities over lesser ones (in case of conflict), it will always reflect the Maximum Majority of the electorate to the maximum extent possible – no matter how many candidates. This should allow for greater choice among candidates, and greater involvement from voters.
While electoral reform may not solve all our political problems, by increasing competition it allows us to encourage higher-quality candidates, thus opening the door to a more representative and accountable democracy.

Ernest N. Prabhakar, Ph.D.
Founder, RadicalCentrism.org
February, 2004

RadicalCentrism.org is an anti-partisan think tank based near Sacramento, California, which is seeking to develop a new paradigm of civil society encompassing politics, economics, psychology, and philosophy. We are dedicated to developing and promoting the ideals of Reality, Character, Community & Humility as expressed in our Radical Centrist Manifesto: The Ground Rules of Civil Society.

[1] MMV can be considered a deterministic variation of Steve Eppley's Maximize Affirmed Majorities (MAM) system, which in turn is based on Tideman's well-studied Ranked Pairs algorithm for finding the pairwise winner (also known as the Condorcet winner). This particular Condorcet-compatible variant was apparently first proposed by Mike Ossipoff, and recommended to me by Eric Gorr.

[2] For example, another popular way of evaluating ranked ballots is called Instant Runoff Voting. While a slight improvement over Plurality, IRV is not as good as MMV at using all the voters' information and reducing the need for strategic voting. It also tends to choose extremist candidates with strong support, rather than balanced candidates with a Maximum Majority (that is, the Condorcet winner).

[3] To be precise, it is mathematically impossible to have a perfect voting system free of any strategic considerations. However, with Maximize Affirmed Majorities, sincere voting is usually the optimal strategy.
Even with MAM it is theoretically possible for one party to attempt to vote 'insincerely' to prevent the election of the true consensus winner; however, not only does this require significant public coordination and risk electing an even more undesirable candidate, but there are counter-strategies that can be used by other parties to defuse their impact. The deterministic variant used in MMV (sometimes called MAM-d) is not as well studied as MAM, but so far appears to possess the same desirable properties.

[4] This tabulation is usually done via what is called a 'pairwise matrix', where the rows indicate votes for a candidate, and the columns indicate votes against a candidate. This is often used in voting systems associated with the Condorcet Criterion, which states that any candidate which is unanimously preferred to each other candidate on the basis of pairwise matchups should win the election. For example, given the following results from 9 voters:

4 votes of A > B > C (over D and E)
3 votes of D > C > B (over A and E)
2 votes of B > A (over C, D and E)

The pairwise matrix (sometimes called the Condorcet matrix) would be:

  A B C D E
A - 4 6 6 6
B 5 - 6 6 9
C 3 3 - 4 7
D 3 3 3 - 3
E 0 0 0 0 -

[5] This can only happen if we have a 'rock-paper-scissors' situation (also called a circular tie), where A beats B, and B beats C, but C beats A. This is very unlikely in normal public elections — since each individual ballot requires a strict ranking among candidates — but is possible if, for example, a significant fraction of the population casts ballots that don't reflect a linear Left-Right political spectrum.

[6] For example, say that the current ordering is "A > B, C > D", and there is a same-size majority between the next two items, "B > C" and "D > A". Since the former would imply "A > D" and the latter would imply "C > B", they are inconsistent with each other, and would both be discarded. However, a third matchup of that same size with "D > E" would be included, not discarded.
Again, this is an extremely unnatural occurrence, but is included here for theoretical completeness. For example, in the U.S. Presidential Election ties are resolved via the House of Representatives. While such exact ties are extraordinarily unlikely in public elections, statistical ties (where the difference is within the margin of error) have been known to occur, and voting systems should provide some objective means of resolving them.

For MMV, the default system is known as ‘Random Dictator’: one ballot is chosen at random, and its preferences are used to break the tie. If two or more top candidates are ranked equally by that ballot, then one of those equally-ranked candidates is picked at random. While such non-determinism may not be acceptable in public elections, this can be useful, e.g. in small committees, where such exact ties are more likely and there is no other way to resolve a deadlock. The reason for a random ballot, rather than simply picking among the top candidates, is to reduce the incentive for one faction to ‘stuff’ the ballot with multiple ‘clone’ candidates that would be equally ranked.
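The Random Dictator tiebreak is simple enough to sketch directly. Here a ballot is modeled as a dict mapping candidates to ranks (lower is better), with unlisted candidates treated as equally ranked at the bottom; that modeling choice, and all names, are mine for illustration only:

```python
import random

def random_dictator(tied_candidates, ballots, rng=random):
    """Break a tie among tied_candidates: draw one ballot at random
    and keep the candidate(s) it ranks highest; if several are still
    tied on that ballot, pick among them at random."""
    ballot = rng.choice(ballots)
    worst = max(ballot.values(), default=0) + 1  # unranked -> below everyone
    best = min(ballot.get(c, worst) for c in tied_candidates)
    finalists = [c for c in tied_candidates if ballot.get(c, worst) == best]
    return rng.choice(finalists)

# With a single ballot that ranks B above A, the tiebreak is deterministic:
print(random_dictator(["A", "B"], [{"B": 1, "A": 2}]))  # B
```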
A Perturbation problem for U(n)

Let G be a finite subgroup of U(n), the unitary group acting on $\mathbb{C}^n$. If there is a unit vector $x$ in $\mathbb{C}^n$ such that g(x) is almost orthogonal to x, for all $g\in G$ except the identity, can we perturb x so that g(x) is exactly orthogonal to x, for all $g\in G$ except the identity? More precisely, can we find a very small number $\epsilon>0$, so that if there exists a unit vector $x$ with $|(g(x),x)|<\epsilon$ for all $g\in G$ \ {1}, then we can find another unit vector $y$, such that $(g(y),y)=0$ for all $g\in G$ \ {1}? Is it possible to further require that $||x-y||$ be small too?

If gx is a bounded distance away from x (which in particular occurs when gx is nearly orthogonal to x), then g is a bounded distance away from the identity. Since U(n) is compact, this and the pigeonhole principle force the group G to have bounded cardinality; in particular, the set of all such groups is compact (if one chooses closed conditions for properties such as "bounded distance away from origin") in the Hausdorff distance topology, as the limit of a sequence of finite groups with bounded cardinality in the Hausdorff metric is again a finite group with bounded cardinality. For any single group, the claim is true for some epsilon by continuity (and the compactness of the unit sphere), so the claim is true in general by compactness of the space of groups.

With a bit more effort one can extract an explicit value of epsilon by making the compactness arguments quantitative, though the bounds are likely to be somewhat poor. (More generally, for studying finite subgroups of compact linear groups, a useful fact to know here is Jordan's lemma, which says that one can always find a bounded index subgroup of such a group which is abelian (the bound can depend on the ambient dimension of the linear group).
Here, of course, much more is true, because we are able to exclude group elements from getting too close to the origin, but Jordan's lemma is useful in situations in which we do not have this luxury.)

This is a very nice answer. One comment: I was slightly confused by the way you alluded to Jordan's Lemma. For others like me, a precise statement is: for any positive integer $n$, there exists a positive integer $J(n)$ such that any finite subgroup of $\operatorname{GL}_n(\mathbb{C})$ has an abelian normal sub(sub)group of index at most $J(n)$. – Pete L. Clark Mar 26 '11 at 20:45

That's an equivalent formulation (if one has a bounded index abelian subgroup, one also has a bounded index normal abelian subgroup.) I've reworded a little bit to emphasise that the bound does depend on the dimension n. – Terry Tao Mar 26 '11 at 21:01

@Terry: sure, that's true. To put a finer point on what confused me (a little): in your previous version you said "finite subgroups of compact groups", so the word "linear" was missing (and also the dimension, as you say). Moreover you don't need to say "compact linear group", since every finite subgroup of a linear group is contained in a compact linear group. – Pete L. Clark Mar 27 '11 at 7:57

I was trying to restate the question in the following way: For any finite subgroup G of U(n), define $\lambda_G=\inf_{x\in\mathbb{C}^n}\sum_{g\neq 1}|(gx,x)|$. So your argument shows that $\inf\{\lambda_G : \lambda_G\neq 0\}$ is non-zero, but this lower bound depends on n. Am I understanding correctly? – Qingyun Mar 27 '11 at 19:43

I'm in a hurry but I think that your question can be deduced as a consequence of the fact that every compact group has the property (T) of Kazhdan. In particular finite groups have it. Therefore, your question holds in a much more general context. Does it make sense?

2nd UPDATE: Forget the old solutions.
Let's assume that the representation does not contain the trivial representation; then $\sum_{g\in G}gx=0$ for every $x$. Therefore, for a norm one vector $x$, $$ 1=|(x,x)|=|\sum_{g\ne 1}(gx,x)|\le\sum_{g\ne 1}|(gx,x)| $$ So it will never occur that $|(gx,x)|<\frac1{|G|-1}$ for every $g$.

Things may be more complicated than you thought. If the dimension is not 3, then a rotation may not have an axis (or may have more than one axis). For example, consider $\mathbb{R}^6=R^2\oplus R^2 \oplus R^2$, let $g\in G$ be a rotation of the form $g_1\oplus g_2\oplus g_3$, where $g_i$ is a rotation of $R^2$ by angle $\theta_i$, the i-th root of unity. If $x=(x_1,\dots,x_6)$, then $(gx,x)$ is a convex combination of $\cos(\theta_i)$, which can be arbitrarily small without being 0. – Qingyun Mar 26 '11 at 19:04
Work on an Einstein-Hilbert type action but with the *absolute value* of scalar curvature?

This is only my second question on mathoverflow, so my apologies if this would be more appropriate at a physics site. My question concerns a modification to the Einstein-Hilbert action. The standard action is given (in the absence of matter and with cosmological constant $\Lambda=0$) by $$ \mathcal{S_{EH}}(g_{\mu\nu}) = \int_M R \sqrt{-g}\mbox{ }d^4x$$ where $M$ is a (compact) differentiable 4-manifold, $g_{\mu\nu}$ is a Lorentzian metric on $M$, $R$ is scalar curvature and $\sqrt{-g}\mbox{ }d^4x$ is the standard volume form. Critical points of this action (with respect to variations in $g_{\mu\nu}$) give Lorentzian metrics which are solutions to Einstein's field equations for general relativity.

QUESTION: Does anyone know of work using a similar action, but where the absolute value $|R|$ appears instead of $R$? That is, I'm interested in references to previous work concerning the action $$ \mathcal{S}(g_{\mu\nu}) = \int_M |R| \sqrt{-g}\mbox{ }d^4x.$$ Given the huge amount of interest in quantum gravity, I would assume that someone has examined this. However, I was unsuccessful in my searches. I'm not a physicist, so perhaps I'm missing some bit of terminology that is standard. Any help pointing me in the right direction would be greatly appreciated!

In the vacuum case this is not greatly different from the Einstein-Hilbert action. Let $(M,g)$ be a classical solution to the variational problem as you posed. Suppose $p\in M$ is such that $R(p) \neq 0$; then by continuity, in a small neighborhood of $p$ the scalar curvature $R$ is signed, and hence locally in that neighborhood it is also a critical point of the Einstein-Hilbert action. But then it must be Ricci flat, contradicting the assumption that $R \neq 0$ at $p$.
Conversely, if $(M,g)$ is a classical solution to the Einstein-Hilbert variational problem, then it is Ricci flat and hence scalar flat. And hence you have that all Einstein-vacuum solutions are also solutions to the critical point problem you posed.

Going back forwards again, note that by definition any scalar flat 4-manifold will be a minimizer of the action. Hence you have that for the vacuum problem of your proposed action: there are no critical points which do not minimise the action; the action minimisers are precisely the scalar flat Lorentzian 4-manifolds. In any case, if you really are interested in this action, for literature searches the relevant keyword is f(R) gravity theories.

@Willie Wong: Thanks, Willie! Great argument ... and exactly what I was looking for! – Aaron Trout Jul 26 '12 at 18:10

Classically the theories are the same, but quantum mechanically, think path integral, they will differ. – Kelly Davis Jul 26 '12 at 21:54

@Kelly: as I will not pretend to know how to think quantum mechanically about gravity, I cannot comment on that. :-) – Willie Wong Jul 26 '12 at 22:12
Les Willis WILLISL at EM.AGR.CA Tue Apr 30 12:09:02 EST 1996

I believe you can use the probability equation from Sambrook: N = ln(1-P)/ln(1-f), where P is the desired probability, f is the fractional proportion of the genome in a single recombinant, and N is the necessary number of recombinants (Clarke and Carbon, 1976).

e.g. for a 99% probability of a 17 kb fragment in a mammalian genome of 3 x 10^9 bp:

N = ln(1 - 0.99) / ln(1 - [1.7 x 10^4 / 3 x 10^9]) = 8.1 x 10^5

>>> BURMAN ADLAI J <fsajb4 at aurora.alaska.edu> 04/28/96 06:54pm >>>
I've been trying to figure out something that should be simple but I seem to be stuck. Perhaps someone can help. I need to figure out what the probability of NOT finding a fragment which is r bp long in a sequence which is n bp long. I can do this for individual cases but I need a purely symbolic solution which would cover all cases. This is all assuming a completely random sequence. If anyone happens to know off the top of their heads what the solution is it would be grotesquely appreciated.

Adlai Burman
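For readers who want to check the arithmetic, the Clarke-Carbon formula above is easy to evaluate in a few lines of Python (the function name is mine, not from the original post):

```python
import math

def clones_needed(p_desired, fragment_bp, genome_bp):
    """Clarke & Carbon (1976): number of recombinants N needed so a given
    fragment appears in the library with probability p_desired, where
    f = fragment_bp / genome_bp is the fraction of the genome per clone."""
    f = fragment_bp / genome_bp
    return math.log(1 - p_desired) / math.log(1 - f)

# The worked example: 99% chance of a 17 kb fragment in a 3e9 bp genome,
# which comes out to roughly 8.1 x 10^5 recombinants.
print(round(clones_needed(0.99, 1.7e4, 3e9)))
```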
Integrating control systems defined on the frame bundles of the space forms

Biggs, J. and Holderbaum, W. (2006) Integrating control systems defined on the frame bundles of the space forms. In: Proceedings of the 45th IEEE Conference on Decision and Control, Vols 1-14. IEEE Conference on Decision and Control. IEEE, New York, pp. 3849-3854. ISBN 0191-2216 9781424401703

Full text not archived in this repository.

This paper considers left-invariant control systems defined on the orthonormal frame bundles of simply connected manifolds of constant sectional curvature, namely the space forms: Euclidean space E-3, the sphere S-3 and the hyperboloid H-3, with the corresponding frame bundles equal to the Euclidean group of motions SE(3), the rotation group SO(4) and the Lorentz group SO(1,3). Orthonormal frame bundles of space forms coincide with their isometry groups, and therefore the focus shifts to left-invariant control systems defined on Lie groups. In this paper a method for integrating these systems is given where the controls are time-independent. In the Euclidean case the elements of the Lie algebra se(3) are often referred to as twists. For constant twist motions, the corresponding curves g(t) in SE(3) are known as screw motions, given in closed form by using the well known Rodrigues' formula. However, this formula is only applicable to the Euclidean case. This paper gives a method for computing the non-Euclidean screw motions in closed form. This involves decoupling the system into two lower dimensional systems using the double cover properties of Lie groups; then the lower dimensional systems are solved explicitly in closed form.
Partitions in Measure Algebras

Let $(\mathcal{S},\mu)$ be a totally finite measure algebra, and write $X$ for the maximal element. Without loss of generality, we can assume that $\mu$ is normalized so that $\mu(X)=1$. We define a “partition” $\mathcal{P}$ of an element $E\in\mathcal{S}$ to be a finite set of “disjoint” elements of $\mathcal{S}$ whose “union” is $E$. Remember, of course, that the elements of $\mathcal{S}$ are not (necessarily) sets, so the set language is suggestive, but not necessarily literal. That is, if $\mathcal{P}=\{E_1,\dots,E_k\}$ then $E_i\cap E_j=\emptyset$ for $i\neq j$ and $\displaystyle E=\bigcup\limits_{i=1}^kE_i$

The “norm” $\lvert\mathcal{P}\rvert$ of a partition $\mathcal{P}$ is the maximum of the numbers $\{\mu(E_i)\}$. If $\mathcal{P}=\{E_1,\dots,E_k\}$ is a partition of $E$ and if $F\subseteq E$ is any element of $\mathcal{S}$ below $E$, then $\mathcal{P}\cap F=\{E_1\cap F,\dots,E_k\cap F\}$ is a partition of $F$. If $\mathcal{P}_1$ and $\mathcal{P}_2$ are partitions, then we write $\mathcal{P}_1\leq\mathcal{P}_2$ if each element in $\mathcal{P}_1$ is contained in an element of $\mathcal{P}_2$. We say that a sequence of partitions is “decreasing” if $\mathcal{P}_{n+1}\leq\mathcal{P}_n$ for each $n$. A sequence of partitions is “dense” if for every $E\in\mathcal{S}$ and every positive number $\epsilon$ there is some $n$ and an element $E_0\in\mathcal{S}$ so that $\rho(E,E_0)<\epsilon$, and $E_0$ is exactly the union of some elements in $\mathcal{P}_n$. That is, we can use the elements in a fine enough partition in the sequence to approximate any element of $\mathcal{S}$ as closely as we want.

Now, if $(\mathcal{S},\mu)$ is a totally finite, non-atomic measure algebra, and if $\{\mathcal{P}_n\}$ is a dense, decreasing sequence of partitions of $X$, then $\lim\limits_{n\to\infty}\lvert\mathcal{P}_n\rvert=0$.
Indeed, the sequence of norms $\{\lvert\mathcal{P}_n\rvert\}$ is monotonic and bounded in the interval $[0,1]$, and so it must have a limit. We will assume that this limit is some positive number $\delta>0$, and find a contradiction. So if $\mathcal{P}_1=\{E_1,\dots,E_k\}$ then at least one of the $E_i$ must be big enough that $\lvert\mathcal{P}_n\cap E_i\rvert\geq\delta$ for all $n$. Otherwise the sequence of norms would descend below $\delta$ and that couldn’t be the limit. Let $F_1$ be just such an element, and consider the sequence $\{\mathcal{P}_n\cap F_1\}$ of partitions of $F_1$. The same argument is just as true, and we find another element $F_2\subseteq F_1$ from the partition $\mathcal{P}_2$, and so on.

Now, let $F$ be the intersection of the sequence $\{F_n\}$. By assumption, each of the $F_n$ has $\mu(F_n)\geq\delta$, and so $\mu(F)\geq\delta$ as well. Since $(\mathcal{S},\mu)$ is non-atomic, $F$ can’t be an atom, and so there must be an $F_0\subseteq F$ with $0<\mu(F_0)<\mu(F)$. This element must be either contained in or disjoint from each element of each partition $\mathcal{P}_n$. We can take $\epsilon$ smaller than either $\mu(F_0)$ or $\mu(F)-\mu(F_0)$. Now no set made up of the union of any elements of any partition $\mathcal{P}_n$ can have a distance less than $\epsilon$ from $F_0$. This shows that the sequence of partitions cannot be dense, which is the contradiction we were looking for. Thus the limit of the sequence of norms is zero.
Cosmology & Gravitation

This series consists of talks in the areas of Cosmology, Gravitation and Particle Physics.

I propose late-time moduli decay as the common origin of baryons and dark matter. The baryon asymmetry is produced from the decay of new TeV scale particles, while dark matter is created from the chain decay of R-parity odd particles. The baryon and dark matter abundances are mainly controlled by the dilution factor from moduli decay, which is typically in the range 10^{-9}-10^{-7}. The exact number densities are determined by simple branching fractions from modulus decay, which are expected to be of similar order in the absence of symmetries.

If the universe is a quantum mechanical system it has a quantum state. This state supplies a probabilistic measure for alternative histories of the universe. During eternal inflation these histories typically develop large inhomogeneities that lead to a mosaic structure on superhorizon scales consisting of homogeneous patches separated by inflating regions. As observers we do not see this structure directly. Rather our observations are confined to a small, nearly homogeneous region within our past light cone.

In this talk I will discuss a new class of cosmological scalar fields. Similarly to gravity, these theories are described by actions depending linearly on second derivatives. The latter cannot be excluded without breaking the generally covariant formulation of the action principle. Despite the presence of these second derivatives, the equations of motion are of second order. Hence there are no new pathological degrees of freedom.

I will present analytic solutions to a class of cosmological models described by a canonical scalar field minimally coupled to gravity and experiencing self-interactions through a hyperbolic potential. Using models and methods of solution inspired by 2T-physics, I will show how analytic solutions can be obtained including radiation and spatial curvature. Among the analytic solutions, there are many interesting geodesically complete cyclic solutions, both singular and non-singular ones.

Reducing a higher dimensional theory to a 4-dimensional effective theory results in a number of scalar fields describing, for instance, fluctuations of higher dimensional scalar fields (dilaton) or the volume of the compact space (volume modulus). But the fields in the effective theory must be constructed with care: artifacts from the higher dimensions, such as higher dimensional diffeomorphisms and constraint equations, can affect the identification of the degrees of freedom. The effective theory including these effects resembles in many ways cosmological perturbation theory.
Among the analytic solutions, there are many interesting geodesically complete cyclic solutions, both singular and non-singular ones. Reducing a higher dimensional theory to a 4-dimensional effective theory results in a number of scalar fields describing, for instance, fluctuations of higher dimensional scalar fields (dilaton) or the volume of the compact space (volume modulus). But the fields in the effective theory must be constructed with care: artifacts from the higher dimensions, such as higher dimensional diffeomorphisms and constraint equations, can affect the identification of the degrees of freedom. The effective theory including these effects resembles in many ways cosmological perturbation The existence of concentric low variance circles in the CMB sky, generated by black-hole encounters in an aeon preceding our big bang, is a prediction of the Conformal Cyclic Cosmology. Detection of three families of such circles in WMAP data was recently reported by Gurzadyan & Penrose (2010). We reassess the statistical significance of those circles by comparing with Monte Carlo simulations of the CMB sky with realistic modeling of the anisotropic noise in WMAP data. We show that, in a model of modified gravity based on the spectral action functional, there is a nontrivial coupling between cosmic topology and inflation, in the sense that the shape of the possible slow-roll inflation potentials obtained in the model from the nonperturbative form of the spectral action are sensitive not only to the geometry (flat or positively curved) of the universe, but also to the different possible non-simply connected topologies. For nearly the past century, the nature of dark matter in the Universe has puzzled astronomers and physicists. During the next decade, experiments will determine if a substantial amount of the dark matter is in the form of non-baryonic, Weakly-Interacting Massive Particles (WIMPs). 
In this talk I will discuss and interpret modern limits on WIMP dark matter from a variety of complementary methods. I will show that we are just now obtaining sensitivity to probe the parameter space of cosmologically-predicted WIMPs created during the earliest epoch in the Universe.

The availability of high precision observational data in cosmology means that it is possible to go beyond simple descriptions of cosmic inflation in which the expansion is driven by a single scalar field. One set of models of particular interest involves the Dirac-Born-Infeld (DBI) action, arising in string cosmology, in which the dynamics of the field are affected by a speed limit in a manner akin to special relativity. In this talk, I will introduce a scalar-tensor theory in which the matter component is a field with a DBI action.
The Bible, History, and Bayes' Theorem (part 2)

Posted on: April 14, 2011 - 4:11pm

(continued from part 1)

There are several issues you've brought up, and which I've also thought of, which remain to be addressed, so I thought I'd recap what I see as the major remaining issues here:

1. How to apply Bayes' Theorem to specific problems in the Jesus Historicity debate (e.g. Paul's reference to James as 'brother of the Lord'.)
2. How can we choose between different hypotheses? How can our personal assumptions (such as that it is likely that Jesus existed) bias our evaluation of alternative hypotheses? How can we test two hypotheses without succumbing to the problem of making up ad hoc 'predictions' that simply mould the hypothesis to the new evidence?
3. The 'numerical' problem of Bayesian analysis. Is it not appropriate to place numbers where they are unwarranted? Why not just rank probabilities or even simply subjective judgments of confidence?
4. Grounding evidence and estimates of probability ratios. How do we know our probability estimates are actually good?
5. Dealing with unknown probabilities. The outcomes of our analysis may crucially depend on the probabilities we input as assumptions. But if we only have a vague idea of those probabilities in the first place, how can Bayesian theory help us narrow down the likely ranges of those vague first-estimates?
6. Assessing reliability of witnesses. How should we treat statements of fact from possibly (probably) unreliable witnesses? How do we incorporate ideas about what people 'believe' vs. what they actually 'know'?
7. Importance of prior information. How sensitive is Bayesian analysis to our initial estimates of probabilities, and what strategies and methods can help eliminate strong personal biases?
8. How the accumulation of evidence can increase the confidence in the overall-best hypothesis.
This is only a rough list, and I've probably missed stuff, etc. But it's a decent starting point. I will say right now that I'm not the one who's going to be able to answer such specific questions as your question about James. I don't have enough knowledge of history and existing historical methods to give a competent model for that specific of a situation. However, I can definitely come up with analogous problems and show how, given the assumptions of a problem, Bayes' theorem can be applied very flexibly, like an all-purpose tool, to work on whatever problem you can specify. I just don't have the right background knowledge to properly specify the problem of James' relationship to a historical Jesus. But if some historian can specify that problem well enough, then Bayes' theorem will apply to it.

Given my personal limitations on the subject of history, I've decided to focus more on the general issue of how Bayes' theorem can be applied to any subject -- history included -- and how, even more generally, learning more and understanding more about how Bayes' theorem works in a practical way, can help just about anyone to improve their skill in rational, plausible, evidence-based reasoning. Therefore, I will focus on making Bayesian tools more understandable to the average reader, and also on how we could apply Bayesian reasoning to more general problems that may have some analogous relationship to the specific question of Jesus' historicity.

The first, most crucial topic to explore now comes clearly to the fore:

The Importance of Prior Information
Furthermore, Bob tells us, Alice has a long-run history of getting people watch-related presents for their birthdays 90% of the time! Clearly, if we were to ignore this prior information about the situation, we might end up with quite bad probability estimates. It surely no longer seems reasonable to start of with an assumption of equal likelihood, 50% watch, 50% keychain. As a reminder, when we assumed the probability of a keychain, P(K), was initially 50%, we were able to correctly calculated the probability of a keychain after hearing the box rattle, P(K|R), which turned out to be 75%, because key chains tend to rattle more often than watches (60% vs. 20%). But now, we should instead apply our prior information from trusty Bob that the initial estimate of P(K) should be more like 10%, since it is 90% likely that Alice would get us a watch-related present, as she usually does. So, we adjust our initial prior probabilities: P(W) = 90% = 0.9 P(K) = 10% = 0.1 Now, let's look at how this change in prior probabilities affects the posterior probabilities of K or W, after hearing a rattle sound (R). We just apply Bayes' theorem as usual. After practicing this a few times, it will begin to become natural and obvious to us. P(K|R) = P(K) x P(R|K) / [ P(K) x P(R|K) + P(W) x P(R|W) ] = 0.1 x 0.6 / [ 0.1 x 0.6 + 0.9 x 0.2 ] = 0.06 / ( 0.06 + 0.18 ) = 0.06 / 0.24 = 6/24 = 1/4 = 0.25 = 25% So, Bayes theorem tells us that, given our assumptions, the original prior P(K) of 10% should be updated to a new P(K|R) of 25%, which incorporates the new clue of the rattle sound, which is more likely in the case of a keychain than a watch. Notice that P(K|R) is larger than 10%, but it is not very close to 75%, which was the result we got from the first example. So, even after rattling the box, we should still expect that there's only a 25% chance of a keychain, and it's still 75% likely that the present is a watch. 
Clearly, the prior probability that we assume about the likelihood of getting each of the different kinds of presents has a large, dramatic effect on the post probabilities. The clue of the rattling sound still gives us some information, updating 10% to 25%, but it is not enough to overwhelm the initial very low prior probability to boost it above 50% or anywhere close to 75%. The rattle is a good clue, but it's not that good. There's still too much of a chance that Alice got you a watch and it just happened to be one of the 20% of watches which happen to rattle when shaken. The Strength of Evidence What kind of clue could overcome such a low prior probability? The strength of evidence is linked to the conditional probabilities, or likelihoods that they predict for various outcomes. The rattling clue (R) had a likelihood of happening 60% of the time when there's a keychain, and 20% of the time when there's a watch. Presumably, if you had put 'Pebbles' on your wish list, they would have something like a 95% chance of rattling when shaken, and so rattling would have even more strongly favoured pebbles in the box than a watch. Now, imagine you had a third friend, Cindy, who was pretty trustworthy, but not quite as trusty as Bob. She tells the truth 90% of the time, and only 10% of the time is she wrong (or lying). Bob has let you know ahead of time that Cindy definitely does know what's in the box. While Bob distracts Alice by pointing out the importance of prior probabilities in Bayesian calculations, you quickly whisper to Cindy, "What's in the box?" Cindy whispers back, "It's a keychain!" When Alice turns back to you, you've already straightened your face, but perhaps are smirking a little bit. The question now is: a) If Cindy whispered her "keychain" (let's call this 'C[k]') before you shook the box, what is the actual probability of a keychain, given that Cindy has said it's a keychain, P (K|C[k])? 
b) If Cindy had whispered "keychain" after you shook the box, what is the actual probability of a keychain, given both R and C[k], P(K|R and C[k])?

c) If, after hearing Cindy's answer (from part a), you then shake the box and it rattles, is P(K|C[k] and R) the same as P(K|R and C[k])?

Extraordinary Claims Require Extraordinary Evidence

Let's explore this by working through the three-part question above. Part a) is a quite straightforward Bayesian calculation. Since Cindy tells the truth 90% of the time, the probability that she would say "keychain", in the case when it's actually a keychain, is 90%. Likewise, in the case when it's a watch, she would still say "keychain" 10% of the time, presumably because she's lying. So, P(C[k]|K) is 90% and P(C[k]|W) is 10%. Plug these in and we get our

P(K|C[k]) = P(K) x P(C[k]|K) / [ P(K) x P(C[k]|K) + P(W) x P(C[k]|W) ]
          = 0.1 x 0.9 / [ 0.1 x 0.9 + 0.9 x 0.1 ]
          = 0.09 / ( 0.09 + 0.09 )
          = 0.09 / 0.18 = 9/18 = 1/2 = 0.5 = 50%

Thus, before shaking the box, Cindy's answer of "keychain" improves our estimated probability of a keychain from a mere 10% to 50%. It's almost as if Cindy's 90% chance of telling the truth 'cancels out' the prior 90% chance that Alice had bought us a watch, rather than a keychain. In fact, the math is exactly like that. You need strong evidence in favour of something to overcome strong prior implausibility of some claim. This is reminiscent of Sagan's motto that "extraordinary claims require extraordinary evidence". Cindy's claim that the present is a keychain is an extraordinary claim, but her high degree of trustworthiness makes her report count as extraordinary evidence in favour of the keychain hypothesis. Not enough to tip the scales, but enough to bring it back as a valid contender against the watch hypothesis.

Cumulative Evidence Accumulates Cumulatively

Answering part b) requires us to take the prior information/evidence of the rattling box (R) into account.
Essentially, the posterior probability from one piece of evidence (R) becomes the prior probability for the next piece of evidence (C[k]). Again, this is actually a rather straightforward application of Bayes' theorem. You basically just apply it twice: first for the evidence of R, and then for the evidence of C[k]. Just take P(K|R) as your prior probability for C[k] (rather than the usual P(K)).

P(K|R and C[k]) = P(K|R) x P(C[k]|K and R) / [ P(K|R) x P(C[k]|K and R) + P(W|R) x P(C[k]|W and R) ]

This equation is asking us for P(C[k]|K and R) and P(C[k]|W and R), which are the probabilities of Cindy saying "keychain" given that it is a keychain (or watch) and the box rattled. But Cindy's answer doesn't depend on whether or not the box rattled. Regardless of a rattle, she will answer truthfully 90% of the time, according to our assumptions. So, P(C[k]|K and R) is actually exactly the same as P(C[k]|K), which is 90%. Likewise, P(C[k]|W and R) is P(C[k]|W), which is 10%. So, we'll simplify the equation and just plug in the numbers:

P(K|R and C[k]) = P(K|R) x P(C[k]|K) / [ P(K|R) x P(C[k]|K) + P(W|R) x P(C[k]|W) ]
= 0.25 x 0.9 / [ 0.25 x 0.9 + 0.75 x 0.1 ]
= 1/4 x 9/10 / [ 1/4 x 9/10 + 3/4 x 1/10 ]
= 9/40 / [ 9/40 + 3/40 ]
= 9 / ( 9 + 3 ) = 9/12 = 3/4 = 0.75 = 75%

So, Cindy's "keychain" statement raises our estimate from 25% (itself improved from 10% after hearing the rattling) all the way up to 75%, which is respectable. Even though we initially would expect that Alice's predilection for watch-related gifts made a keychain unlikely at 10%, the combined evidence of hearing a rattle, and getting Cindy to confess that it's a "keychain", has boosted our confidence that it really is a keychain, and not a watch. It could still be the case that the rattle came from a watch, and Cindy was just lying to us, but both of these combined are so unlikely as to outweigh the initial unlikeliness of getting a non-watch-related gift in the first place.
Just to confirm that Bayesian calculations don't introduce weirdness or inconsistencies, let's check what would have happened if we heard Cindy's "keychain" first, and then rattled the box second. Would the answer have been different? Remember that Cindy's "keychain" resulted in P(K|C[k]) = 50%, and the proper way to combine evidence is to use the posterior probabilities from one piece as the prior probabilities for the next. So, the calculation uses a prior probability of 50%, which is exactly the same as the very first example in the previous post. And since the rattling evidence is independent of anything Cindy might say, we don't have to consider any dependencies between the pieces of evidence (this is not always the case in the real world, but it can be handled; it's just more complex than I want for this example).

P(K|C[k] and R) = P(K|C[k]) x P(R|K and C[k]) / [ P(K|C[k]) x P(R|K and C[k]) + P(W|C[k]) x P(R|W and C[k]) ]
= P(K|C[k]) x P(R|K) / [ P(K|C[k]) x P(R|K) + P(W|C[k]) x P(R|W) ]
= 0.5 x 0.6 / [ 0.5 x 0.6 + 0.5 x 0.2 ]
= 0.3 / ( 0.3 + 0.1 ) = 0.3 / 0.4 = 3/4 = 0.75 = 75%

Exactly the same as P(K|R and C[k]), as we should expect. If you keep things straight in the calculations, each independent piece of evidence modifies the final posterior probability independently of the others, giving the same final answer regardless of the order you examine/process the evidence. The most important thing in Bayesian probability is to include all the relevant evidence and background information in the overall model of the situation. The more thorough your evidence, the more confident you can be in the answers that come out.

I'm working on developing more examples which may be more directly relevant to Jesus' historicity, but I think I'll just post this part first, since it's very important to understand how prior information works in Bayesian probability calculations.
If there's some issue you'd like a more-direct answer to, which I haven't addressed yet, please remind me, and I'll try to give a brief answer in the meantime.

Wonderist on Facebook — Support the idea of wonderism by 'liking' the Wonderism page — or join the open Wonderism group to take part in the discussion!

Gnu Atheism Facebook group — All gnu-friendly RRS members welcome (including Luminon!) — Try something gnu!

Posted on: April 15, 2011 - 11:16pm #1

Not historical but still pertinent

Another useful result of Bayes' theorem. Here God is presumed to be omnipotent.

Hypothesis 1: God exists.
Hypothesis 2: God does not exist.

These form a dichotomy, so we can apply normalization to demand P(H1) + P(H2) = 1 before and after we apply any evidence.

Now I'm going to pray. What are the odds that God will fail to answer my prayer? (A)

By hypothesis 2, God does not exist and thus cannot answer my prayer: P(A|H2) = 1
By hypothesis 1, God is capable of answering my prayer and thus there must be some chance, however small, that he will: P(A|H1) < 1

Oh look, it turned out God didn't answer my prayer. Now we apply Bayes' Theorem to see if this lack of evidence favors one of the hypotheses. Remember, we don't care what the final result is (and thus we don't care what the priors are). We only care about the change in the probabilities assigned to each hypothesis. Does the lack of God answering prayers favor H1 or H2? For convenience, denote the denominator (which is the same for each hypothesis) as C.

P(H1|A) = P(H1)*P(A|H1)/C = P(H1)*(something less than 1)/C < P(H1)/C
P(H2|A) = P(H2)*P(A|H2)/C = P(H2)*1/C = P(H2)/C

By normalization, 1 = P(H1|A) + P(H2|A) < P(H1)/C + P(H2)/C = 1/C. So C < 1.

Since C < 1, P(H2|A) = P(H2)/C > P(H2), so hypothesis 2 gains favor. By normalization, hypothesis 2 gaining favor implies that hypothesis 1 loses favor.

Note that everything is a variable here. It doesn't matter what your priors are.
It doesn't matter if your prior probability of God existing is 99.99999999% and your probability of him ignoring my prayer is 99.9999999999%. In such a biased scenario the shift will be slight, but still present. A lack of evidence is evidence of a lack. This is why the burden of proof is on the theist.

Questions for Theists:

I'm a bit of a lurker. Every now and then I will come out of my cave with a flurry of activity. Then the Ph.D. program calls and I must fall back to the shadows.

Posted on: April 15, 2011 - 11:50pm #2

Zaq wrote:Another useful

Zaq wrote:
Another useful result of Bayes' theorem. Here God is presumed to be omnipotent.

Hypothesis 1: God exists.
Hypothesis 2: God does not exist.

These form a dichotomy, so we can apply normalization to demand P(H1) + P(H2) = 1 before and after we apply any evidence.

Now I'm going to pray. What are the odds that God will fail to answer my prayer? (A)

By hypothesis 2, God does not exist and thus cannot answer my prayer: P(A|H2) = 1
By hypothesis 1, God is capable of answering my prayer and thus there must be some chance, however small, that he will: P(A|H1) < 1

Oh look, it turned out God didn't answer my prayer. Now we apply Bayes' Theorem to see if this lack of evidence favors one of the hypotheses. Remember, we don't care what the final result is (and thus we don't care what the priors are). We only care about the change in the probabilities assigned to each hypothesis. Does the lack of God answering prayers favor H1 or H2? For convenience, denote the denominator (which is the same for each hypothesis) as C.

P(H1|A) = P(H1)*P(A|H1)/C = P(H1)*(something less than 1)/C < P(H1)/C
P(H2|A) = P(H2)*P(A|H2)/C = P(H2)*1/C = P(H2)/C

By normalization, 1 = P(H1|A) + P(H2|A) < P(H1)/C + P(H2)/C = 1/C. So C < 1.

Since C < 1, P(H2|A) = P(H2)/C > P(H2), so hypothesis 2 gains favor. By normalization, hypothesis 2 gaining favor implies that hypothesis 1 loses favor.

Note that everything is a variable here.
It doesn't matter what your priors are. It doesn't matter if your prior probability of God existing is 99.99999999% and your probability of him ignoring my prayer is 99.9999999999%. In such a biased scenario the shift will be slight, but still present. A lack of evidence is evidence of a lack. This is why the burden of proof is on the theist.

What if what you pray for coincidentally does happen? Or what if you can rationalize or perceive said prayer as being answered? My point in part one was that the flaw in the application of the theorem rests in the lack of an objective value for each piece of evidence. I understand how it can be applied, but each side of the argument would weigh the evidence subjectively.

"Don't seek these laws to understand. Only the mad can comprehend..." -- George Cosbuc

Posted on: April 16, 2011 - 3:17pm #3

Zaq wrote:Oh look, it turned

Zaq wrote:
Oh look, it turned out God didn't answer my prayer.

Hi Zaq. Actually, you are presuming one or the other hypothesis to interpret whether or not 'God answered the prayer'. This is not a violation of Bayes' theorem. If you did know for sure whether or not God answered a prayer, then you could use your model to modify the probabilities as you suggest. However, you don't know for sure if God answered the prayer or not. All you can know in this circumstance is that the evidence conforms or does not conform to the 'prayer being answered as asked'.

Imagine you ask "for a $5 bill by tomorrow at 9:00am". Time passes, no money, 9:00am rolls around. Okay: prayer was not answered. However, what if by some coincidence: time passes, a friend slips you $5 to pay back an old debt, 9:00am rolls around. Prayer answered? Or not? Hard to say. All you can say is "The conditions of the prayer were satisfied as specified in the prayer, whether or not a God exists, and whether or not this supposed God answered the prayer."
So, you would have to assess the probability of these prayed-for conditions to be satisfied, under each specific circumstance. H2: God doesn't exist, so cannot answer the prayer. The conditions will be met according to random chance. Say, with probability p. H1: God exists and adds a positive increase to the probability of whatever is prayed for, above and beyond pure random chance. Therefore, probability is p + g, where g is the additional 'god' factor. So, whenever a coincidence occurs, you'd have to give the prayer theory at least a little tiny bit of extra boost. However, whenever the coincidence doesn't happen, the prayer theory would automatically get penalized. Over time, as events play out basically randomly, the prayer theory would get more (and heavier) penalties than bonuses, and would creep inevitably towards disconfirmation. The bigger the hoped-for 'god factor' g, the faster the disconfirmation will happen. These days theists have to basically say that their god has a completely undetectable effect in order to protect their superstitions. Essentially, they concede that g=0. Here's how some such calculations might play out if a theist foolishly claimed that prayed-for events would have 10% greater odds of occurring than non-prayed-for events. Starting with 50%/50% prior probabilities (very heavily biased in favour of prayer working). After 100 patients tested in each group (prayed for, not prayed for), with no increase for those prayed for: 47% for prayer, 53% against. After 1000 patients: 24% for prayer, 76% against. After 2000: 9% to 91% After 5000: 0.3% to 99.7% After 10,000: 0.0012% to 99.9988% And much worse after that. The more you test it, the worse the case for prayer would get, as the margin for error would shrink and shrink making it clearer and clearer that there's no observable advantage. As the saying goes: Nothing fails like prayer. 
Posted on: April 19, 2011 - 9:44am #4

natural wrote:The more you

natural wrote:
The more you test it, the worse the case for prayer would get, as the margin for error would shrink and shrink, making it clearer and clearer that there's no observable advantage. As the saying goes: Nothing fails like prayer.

This is a fine example for prayer, or anything testable. Any apologist worth his salt would take a Calvinistic approach to this, saying God just hates you, and he's not answering your prayers for that reason. Where this does fail is in evaluating evidence that cannot be ruled out statistically. That's why the majority of "evidence" provided by theists is so vague, or can be interpreted so many ways.

I find this formula to be a good mathematical representation of how our mind works in calculating odds. In your example in part one, with the watch and bracelet, it is obvious that one is more probable than the other right off the bat. The mind evaluates, and we intuitively 'know' which is more likely to occur. However, if you were taught since birth that watches rattle more than bracelets, and you were indoctrinated by the church of the Watch, you would most likely conclude that it was a watch in the box.

"Don't seek these laws to understand. Only the mad can comprehend..." -- George Cosbuc

Posted on: April 19, 2011 - 1:42pm #5

I have known about Bayes'

I have known about Bayes' theorem for quite a while, but only in recent years have I really tried to look into it, and realized just how significant it is. To me it goes a long way toward making "induction" and reasoning with probabilities as rigorous as standard logic, in other words, where we don't have clear yes/no, true/false data to work with. Our brains have very limited ability to juggle probabilities, and draw accurate conclusions from fuzzy data, in all but the simplest cases.
Favorite oxymorons: Gospel Truth, Rational Supernaturalist, Business Ethics, Christian Morality

"Theology is now little more than a branch of human ignorance. Indeed, it is ignorance with wings." - Sam Harris

The path to Truth lies via careful study of reality, not the dreams of our fallible minds - me

From the sublime to the ridiculous: Science -> Philosophy -> Theology

Posted on: May 7, 2011 - 9:42am #6

Thanks, natural, and I

Thanks, natural, and I apologize for getting back to you so late. Just today I finished up two courses in grad school. I certainly don't deny the expansive utility of Bayes' Theorem. So, to make your thought experiment sufficiently analogous to problems of history, here is what you will need to do: integrate evidence from subjective sources. By "subjective sources," I mean written and spoken language. The evidence of ancient history, in fact, is almost nothing but written language. So, a better thought experiment is one where you have nothing but subjective evidence. Let's say you have no idea about probabilities from shaking the box, you don't know how often anyone tells the truth, and you have these lines of evidence:

(1) The card on the note says, "I hope you enjoy wearing this to your graduation party."
(2) Cindy tells you, "Alice has worn keychains for necklaces before."
(3) Alice's little brother Gilbert says, "A dude wearing a keychain for a necklace would look dorky."
(4) Cindy says to Gilbert, "Obviously, Alice doesn't care what you think is dorky."
(5) Gilbert says, "It don't matter what I think. Dad wears watches, not keychains."
(6) Cindy says, "He wore a keychain once. Your mom told me that he told her that he did!"

Now, come up with a few probability input values for each of these lines of evidence, and use Bayes' Theorem to estimate the odds of whether the box contains a watch or a keychain.
This problem has a small fraction of the complexity of the simplest problems of New Testament history, such as the matter of James, the brother of Jesus. If you need to, then start even smaller and cut out a few of these lines of evidence. It is my opinion that the attempted applications of Bayes' Theorem to problems of completely-subjective evidence are DOA. Richard Carrier wrote a long article encouraging Bayes' Theorem to be applied to problems of New Testament history, without ever solving a practical example problem. Richard Carrier is a hack and a fraud, in my opinion--he is a smart guy and his proposal seems damned stupid, designed merely for mythers, and he has delayed publishing his book on the topic, but you can prove me wrong by solving a problem like this, and I wish you the best of luck.

Posted on: May 10, 2011 - 4:26pm #7

Thanks, Abe. That looks like

Thanks, Abe. That looks like an interesting challenge. I'll see what I can do with it. However, you should know that it is not possible to draw any inferences if all the evidence is subjective (well, I guess you could draw inferences about what people subjectively believe, but that's hardly going to help in the case of Jesus' historicity). We need to have some grounding in objective, publicly available facts, knowledge, and evidence. I think a good place to start with that would be archaeology, anthropology, linguistics, psychology, sociology, etc. All of this foundational knowledge is usually called 'background evidence' or 'background knowledge' in Bayesian inference. I left it out of my first posts because it can make things seem a bit more complicated (though it's not really that complicated in practice), but the links I gave earlier to Richard Carrier's papers on this topic deal with the issue of background evidence right from the start.

An example of how you would need some good background evidence is to estimate the initial (prior) probability of some average person stating something factual vs.
something imagined vs. something intentionally fabricated. Basically, you would need at least a simple background hypothesis of the reliability of witnesses, given various parameters (such as the prevailing culture, the mode of speech (prophecy vs. retelling a story), etc.). I don't claim to know how to tackle that. Probably it would require evidence/theory from psychology, archaeology (to check that statements are factual or not, based on objective evidence), anthropology, and history, among other fields.

But my main point is that while it may be rather difficult to develop a truly thorough theory of witness reliability for any/all situations, we can start simple and build up as needed. This is one of the additional advantages of Bayesian reasoning. If new evidence/hypotheses come to light, you don't have to throw out all your previous work; you just incorporate it into a more sophisticated model and you'll get better and better predictions.

So, I'll try out your challenge, but I will necessarily have to make a few simplifying background assumptions, based on my limited knowledge of the requisite science/evidence. But if you disagree with my simplifying assumptions, just note that that won't topple the model I build on them. We would simply have to find better evidence to support or refute my assumptions, and then hook them into the model to see how they modify the results. It may drastically change the results, of course, but that's how these models are supposed to work: if you dramatically change the underlying assumptions, you will probably get dramatically different output. The model remains valid; it's just the inputs which are refined.

(That paragraph may be unclear, so I'll try a different tack: A Bayesian model is not necessarily like a logical argument. Well, it IS like a logical argument, but more flexible. In a logical argument, the premises must be true for the argument to be sound and valid.
In a Bayesian model, premises can be probabilities, not strictly true or false, and the validity of the argument rests upon the correct application of the axioms/theorems of probability, not so much on whether the premises are 'true' or 'false'. I may end up proposing some weak, over-simplified premises, but it would be a mistake to focus on those as showing my overall model to be invalid.)

Can't say when I'll get back to this, but it is pretty interesting to me, so probably within a week, I imagine.

Richard Carrier is a hack and a fraud

Uhhh, what do you base that on? His preliminary work on Bayes?! Can you point to any of his published scholarly work as being exemplary of hackery and fraudulence? Seems a wee bit uncharitable, I must say, and it certainly seems to indicate a prior bias on your part.
limits and conjectures

September 27th 2009, 10:07 AM

limits and conjectures

Let P and Q be polynomials and consider the limit of P(x)/Q(x) as x approaches infinity. State and prove a conjecture for the value of this limit that depends on the degrees of P and Q and their leading coefficients.

September 27th 2009, 08:46 PM

Let $P=p_n x^n+p_{n-1}x^{n-1}+...+p_1 x+p_0$ and let $Q=q_m x^m+q_{m-1}x^{m-1}+...+q_1 x+q_0$, with $p_n \neq 0$ and $q_m \neq 0$.

If $deg(P)>deg(Q)$, then $\lim_{x\to\infty}\frac{P(x)}{Q(x)}=+\infty$ or $-\infty$, according to the sign of $\frac{p_n}{q_m}$.

If $deg(P)<deg(Q)$, then $\lim_{x\to\infty}\frac{P(x)}{Q(x)}=0$.

If $deg(P)=deg(Q)$, then $\lim_{x\to\infty}\frac{P(x)}{Q(x)}=\frac{p_n}{q_m}$.

The proofs are fairly straightforward. If $k=\min\{n,m\}$, just factor out $x^k$ from both polynomials and see what you have left.
Testing the Final SHA-3 Hashing Algorithms

The Run-Bit Test

A good hash function must also distribute its output bits uniformly. That means the first half of the hash output must have the same number of 1s and 0s as the second half. So for an ideal 256-bit hash, each of its halves should have exactly 64 1s and 64 0s. Too many 1s or 0s in one half of the hash means the hash is severely skewed. This could be a sign of poor statistical randomness, making the function open to pre-image attacks.

Listing Five: The testbed for the run-bit test.

- (void)testRunsWith:(NSString *)aMsg forType:(NSInteger)aTyp
{
    NSMutableString *tStr, *tSub;
    NSData *tTst;
    NSInteger tIdx, tRun, tLen, tALf, tARt;
    NSInteger tHln, tHdx, tBdx, tBon, tLft, tRgt;
    NSRange tPos;
    unichar tChr;
    char *tBuf, tSmp;

    // perform the test
    tLen = [aMsg length];
    tLft = 0;
    tRgt = 0;
    for (tRun = 0; tRun < tLen; tRun++)
    {
        // create a copy of the message text
        tStr = [NSMutableString stringWithString:aMsg];
        for (tIdx = 0; tIdx < 255; tIdx++)
        {
            // read a character byte
            tChr = [tStr characterAtIndex:tRun];
            tChr = tChr + tIdx + 1;
            tChr %= 255;

            // update the string
            tSub = [NSMutableString stringWithCharacters:&tChr length:1];
            tPos = NSMakeRange(tRun, 1);
            [tStr replaceCharactersInRange:tPos withString:tSub];

            // generate a test hash
            tTst = [tStr sha3Hash:aTyp];

            // analyse the hash stream
            tHln = [tTst length];
            tBuf = (char *)[tTst bytes];

            // counting ones on the left half
            for (tHdx = 0; tHdx < (tHln / 2); tHdx++)
            {
                // extract a hash byte
                tSmp = tBuf[tHdx];
                for (tBdx = 0; tBdx < 8; tBdx++)
                {
                    // check the hash bit
                    tBon = (tSmp & 0x1);
                    if (tBon == 1)
                        tLft++;
                    tSmp >>= 1;
                }
            }

            // counting ones on the right half
            for (tHdx = (tHln / 2); tHdx < tHln; tHdx++)
            {
                // extract a hash byte
                tSmp = tBuf[tHdx];
                for (tBdx = 0; tBdx < 8; tBdx++)
                {
                    // check the hash bit
                    tBon = (tSmp & 0x1);
                    if (tBon == 1)
                        tRgt++;
                    tSmp >>= 1;
                }
            }
        }
    }

    // calculate the average run-bit count
    tALf = tLft / (tLen * 255);
    tARt = tRgt / (tLen * 255);

    // store the test results
}

Listing Five shows the routine that
does a run-bit test. This routine, testRunsWith:forType:, uses the same two arguments as testTiming:forType:. And it uses the same two nested loops to copy and modify the message data (lines 13-34). After the routine has hashed the modified message, it extracts the raw bytes from the NSData object (lines 37-38). It counts the number of 1s present in the first half (lines 41-53), then the number of 1s in the second half (lines 56-68). Finally, the routine calculates the average bit count per change in message byte (lines 73-74).

In Table 3 are the results of the run-bit test. All the hashes showed a slight skew in bit distribution, but the skew never exceeds a dozen bits. MD-5, for instance, was off its ideal by just two bits. And both halves of its hash output have the same average number of 1s.

SHA-2 is a bit more interesting. Both its halves are no more than two 1s less than the ideal. Plus, its left half has one more 1 than its right. This implies a slight big-endian skew in the SHA-2 function.

The five SHA-3 hashes are even more interesting. All but one showed a slight big-endian skew. Keccak and BLAKE have the largest skew, their left halves having 9 more 1s than their right. Grøstl, on the other hand, has only two 1s more on its right half, implying a slight little-endian skew. Moreover, its skew is the smallest of the five.

All five SHA-3 hashes deviate slightly from the ideal distribution of 64 1s and 64 0s. BLAKE has the largest deviation on its left half, having six 1s more than ideal. Keccak has the largest for its right half, six 1s less than ideal. Both Grøstl and JH were spot on with their right halves, while their left halves were off by two to three 1s.
Adding a main function

01-07-2013 #1
Registered User
Join Date Jan 2013

Adding a main function

Okay, here is the code so far. It's not bad:

#include <stdio.h>

int sum(int x, int y);

int x, y;
printf(" x y result\n\n");

int sum(int x, int y)
    int result;

That main() function does call the sum() function. I'm guessing your teacher wants to see things that are more readable and clear, rather than aspiring to do multiple things in single statements.

some_variable = sum(x,y);
/* lots of other similar but distinct statements */

and then separate

printf("%4d%4d%4d\n", x, y, some_variable);
/* other similar statements to print out other variables */

Simply replace "some_variable" with variable names that are meaningful to your program. Bear in mind that, if you write multiple effects into single statements, the code is harder to read, harder to get right, and harder for someone else (such as a teacher marking your homework) to understand. Teachers are often unwilling to put effort into understanding code that is unnecessarily hard to read, such as yours. View that as preparation for real-life programming - I once sacked a programmer who insisted on writing code that was hard to read.

Last edited by grumpy; 01-07-2013 at 01:43 AM.
Right 98% of the time, and don't care about the other 3%.

This program when run is supposed to make these answers... So if I do what you're saying it would give me these answers?

x y z

Sure, if you do it right. Not if you do it wrong, obviously. My answer is generic. You need to flesh things out with some specifics - based on whatever is required by the homework question you have been asked.

It's not homework. We have this book here at uni with exercises we can do in preparation for our exams next week. And I'm training to write programs so that I won't screw up in the exam.
So I want my program to exactly match the questions in the book and actually be perfect, flawless.

If that's true, why not do

printf("%d %d %d\n", 7, 2, sum(7,2));

You can place similar statements to get the other results. And they are all called from main, fulfilling the requirements. It's not necessary to nest the function calls as in sum(sum(sum(...

Ooh, I actually get what you mean... But can I do a *x in the function to change the value, so it would change in the main program and I don't have to change it again and again over and over? So basically I can change it from there, like this... So in the end how would it look?

The &x while calling the function introduces a pointer, which can be useful sometimes, but if the function only returns a single value you probably don't need it. If you want an example, suppose you want to find both the sum and the product of two numbers a and b using a single function. You might write the following function:

void find_sum_prod(int *sum, int *prod, int a, int b);

This function should calculate a+b and place it into the integer that `sum' points to. Then it should calculate a*b and place it into the integer that `prod' points to. Functions that accept pointers generally follow this principle. But in your sum(a,b) example it makes more sense to return the sum using the return value, not to return it via a pointer.

I see, thanks, I will try that now.
Summary:

1. Uniform convergence. Suppose X is a set and (Y, ρ) is a metric space. We let B(X, Y) be the set of bounded functions from X to Y; that is, f ∈ B(X, Y) if f : X → Y and diam rng f < ∞. For each f, g ∈ B(X, Y) we set

ρ(f, g) = sup{ρ(f(x), g(x)) : x ∈ X}.

Proposition 1.1. ρ is a metric on B(X, Y).

Proof. Suppose f, g ∈ B(X, Y) and a ∈ X. Then

ρ(f(x), g(x)) ≤ ρ(f(x), f(a)) + ρ(f(a), g(a)) + ρ(g(a), g(x)) ≤ diam rng f + ρ(f(a), g(a)) + diam rng g

for any x ∈ X. Thus ρ(f, g) < ∞. It is evident that ρ(g, f) = ρ(f, g) and that if ρ(f, g) = 0 then f = g. Suppose f, g, h ∈ B(X, Y). Then

ρ(f(x), h(x)) ≤ ρ(f(x), g(x)) + ρ(g(x), h(x)) ≤ ρ(f, g) + ρ(g, h)

for any x ∈ X, from which we conclude that ρ(f, h) ≤ ρ(f, g) + ρ(g, h).

Example 1.1. Suppose Y is a vector space normed by |·| and ρ is the corresponding metric. Note that B(X, Y) is then the set of functions f : X → Y such that sup{|f(x)| : x ∈ X} < ∞.
The n-Category Café

Posted by John Baez

Just as a "group with many objects" is a groupoid, a "ring with many objects" is called a ringoid. Gregory Muller has emailed me a question about ringoids. I don't want to get into the habit of posting emailed questions on this blog, because I already burnt myself out years ago helping moderate a newsgroup. But just this once, I will. (Famous last words…)

Gregory Muller writes:

I was excited recently when I learned how to categorify the notion of rings. I've long thought of groups as categories, and up to a few days ago, it had bothered me that I lacked a parallel notion for rings, especially given the significance of category-theoretic notions in algebraic geometry. However, my current method of categorifying rings is ugly; specifically, it relies on a definition that uses underlying sets and elements, which is obviously distasteful. What is further frustrating is that this method generalizes in a way that allows Lie groups and other common mathematical objects to be categorified.

The method I am using is as follows. Let X be a category with a forgetful functor F to Set. Then define an "X-valued category" to be a category in the usual sense, except that Hom(A,B) is an object in X instead of a set, satisfying the following compatibility with composition: the composition function is o: F(Hom(A,B)) × F(Hom(B,C)) → F(Hom(A,C)), such that for each fixed element of F(Hom(A,B)), the map F(Hom(B,C)) → F(Hom(A,C)) given by 'plugging in' is the image of a morphism in X, and similarly for plugging in on the right.

This is a useful definition, since:

1) A ring (with unit) is the same as an AbGrp-valued category with one object.
2) A k-algebra (associative, with unit) is the same as a k-Vect-valued category with one object.
3) An R-algebra (associative, with unit) is the same as an R-Mod-valued category with one object.
4) A Lie group is the same as a SmoothManifold-valued groupoid with one object.

…and likely others that I haven't noticed yet.
The problem is the use of sets and elements in the definition. I have tried to clean things up and use a category-theoretic definition, but (at least in the case of AbGrp-valued categories) it seems to be related to the problem of expressing the formula "ab+cd" as a product of sums in an arbitrary ring, which I think is about as hard and convoluted a question as any you are likely to find in math.

Another question is how to philosophically reconcile this notion of a "category with arrows in category X" with the other notion of such a category coming from n-categories. Specifically, a 2-category can be thought of as a category with Hom(A,B) taking its values in Cat, instead of Set, subject to some very different notions of "compatibility with compositions".

I would be very thankful for any insight into this stuff,

Your concept of an "X-valued category" is usually called an X-enriched category, or X-category for short. The idea is to fix a category X and define an X-category to have a set of objects and, for any pair of objects a and b, not a set but an object of X called hom(a,b).

We can write down the whole definition of category this way as long as X is a monoidal category, that is, a category with tensor product. This allows us to say that for any objects a,b,c in our X-category, composition is a morphism in X:

o: hom(a,b) ⊗ hom(b,c) → hom(a,c)

By this means we can avoid referring to "elements" of hom(a,b).

Enriched categories were invented by Max Kelly, and you will enjoy reading his book, since he gives a very clean treatment:

• G. M. Kelly, Basic Concepts of Enriched Category Theory.

Kelly was the first to bring category theory to Australia, and enriched category theory has been a mainstay of Australian category theory ever since he invented it. In addition to X-enriched categories, he defined X-enriched functors and X-enriched natural transformations. He then went ahead and redid all of category theory - well, lots of it anyway! - in this X-enriched setting.
The basics are straightforward; things get more tricky when you reach the theory of limits and colimits. One of Kelly's most famous students is Ross Street, and you can read about the history of enriched category theory near the beginning of Street's Australian conspectus of higher categories.

People usually denote the category of abelian groups by Ab instead of AbGrp. With the usual tensor product of abelian groups, Ab becomes a monoidal category, and an Ab-category is sometimes called a ringoid.

As you note, a one-object ringoid is a ring, just as a one-object groupoid is a group. These are not "categorifications" of the concepts of ring and group, not in the technical sense anyway. A group is already a category; when we go to groupoids we are just letting it have more objects. Similarly for rings and ringoids. So, instead of categorification, one should call this process many-object-ification, or maybe oidization.

To categorify the concepts of group and ring, we need to go up to 2-categories. The resulting concepts are called "ring categories" and "categorical groups" (or "2-groups"). Ring categories were introduced by Kelly and Laplaza.

As you note, we can also many-object-ify the concept of algebra. The category of R-modules is called R-Mod, and it's a monoidal category with the usual tensor product of R-modules whenever R is commutative. An R-Mod-category is called an R-algebroid or simply an algebroid. As you note, a one-object algebroid is an associative algebra with unit. Note that when R = Z, R-Mod is just Ab, so a Z-algebroid is just a ringoid.

Similarly, as you note, a Cat-category is a 2-category.

Lately we've been talking about symmetric monoidal closed categories, for example cartesian closed categories. Any such category is enriched over itself! (Being symmetric is actually irrelevant here.)

There's been a lot of work on all these subjects. But don't feel bad that you're reinventing the wheel a bit here - it's a very good wheel, and you can roll quite far with it.
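As a concrete illustration (my own sketch, not from the post): the prototype ringoid has natural numbers as objects, hom(m,n) the abelian group of n-by-m integer matrices with entrywise addition, and composition given by matrix multiplication. The "distributive" compatibility that enrichment in (Ab, ⊗) encodes is exactly that composition is a group homomorphism in each slot:

```python
# A toy model of a ringoid (Ab-enriched category). Objects: natural numbers.
# hom(m, n): the abelian group of n-by-m integer matrices (entrywise addition).
# Composition: matrix multiplication, which is additive in each argument --
# the "distributive" compatibility that enrichment in (Ab, tensor) encodes.

def add(f, g):
    """Addition in hom(m, n): entrywise sum of two matrices of equal shape."""
    return [[a + b for a, b in zip(row_f, row_g)] for row_f, row_g in zip(f, g)]

def compose(g, f):
    """Composition: the matrix product g . f (g is p-by-n, f is n-by-m)."""
    n, m = len(f), len(f[0])
    p = len(g)
    return [[sum(g[i][k] * f[k][j] for k in range(n)) for j in range(m)]
            for i in range(p)]

f = [[1, 2], [3, 4]]   # f in hom(2, 2)
g = [[0, 1], [1, 0]]   # g in hom(2, 2)
h = [[2, 0], [0, 2]]   # h in hom(2, 2)

# Composition distributes over addition in each slot:
assert compose(h, add(f, g)) == add(compose(h, f), compose(h, g))
assert compose(add(f, g), h) == add(compose(f, h), compose(g, h))
```

A one-object version of this (fix m = n) is just the matrix ring, matching the slogan that a one-object ringoid is a ring.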
One thing fans of category theory enjoy is how sufficiently general concepts can bend back, bite their own tails, and swallow themselves. This happens in the case of ringoids. Whenever R is any ring, R-Mod is a ringoid. But, this is also true when R is a ringoid! We define a module of a ringoid R to be an Ab-enriched functor F: R → Ab and define a homomorphism between these to be an Ab-enriched natural transformation. These notions reduce to the standard ones when R is a ring. So, we get a category R-Mod of modules for any ringoid R… and R-Mod is again a ringoid!

For a very practical text on algebroids try this:

• P. Gabriel and A. V. Roiter, Representations of Finite-Dimensional Algebras, Enc. of Math. Sci., 73, Algebra VIII, Springer, Berlin 1992.

The terminology is a bit quirky, but there's some amazing stuff in here.

Posted at September 2, 2006 2:35 AM UTC

Gregory Muller described the concept of composition in $X$-enriched categories this way:

define an "$X$-valued category" to be a category in the usual sense, except that $\mathrm{Hom}(A,B)$ is an object in $X$ instead of a set, satisfying the following compatibility with composition:

The composition function is $\circ : F(\mathrm{Hom}(A,B)) \times F(\mathrm{Hom}(B,C)) \to F(\mathrm{Hom}(A,C))\,,$ such that the map $F(\mathrm{Hom}(B,C)) \to F(\mathrm{Hom}(A,C))$ given by 'plugging in' a fixed element on the left is the image of a morphism in $X$, and similarly for plugging in on the right.

He noticed that various familiar algebraic concepts can hence nicely be understood as $X$-enriched categories of various sorts. In this context he also remarks that

Specifically, a 2-category can be thought of as a category with $\mathrm{Hom}(A,B)$ taking its values in $\mathrm{Cat}$, instead of $\mathrm{Set}$, subject to some very different notions of "compatibility with compositions".

I am not fully sure what this last comment on a "different notion" of compatibility is addressing.
But I'll say this: Abstractly, the "compatibility of composition" is always the same, namely always given by the monoidal structure of the category $X$ that we enrich over. The only subtlety to be aware of is that there may be quite different monoidal structures on one and the same category $X$.

There is a standard monoidal structure $\times$ on $\mathrm{Cat}$. For categories enriched over $\mathrm{Cat}$ this implies that composition

(1)$\array{ \mathrm{Hom}(A,B) \times \mathrm{Hom}(B,C) &\stackrel{F}{\to}& \mathrm{Hom}(A,C) }$

is a functor from a product category, which implies that

(2)$\array{ x && x' \\ f_1\downarrow\;\; && \;\;f'_1\downarrow \\ y && y' \\ f_2\downarrow\;\; && \;\;f'_2\downarrow \\ z && z' } \; \mapsto \; \array{ F(x,x') \\ F(f_1,f'_1)\downarrow\;\;\;\; \\ F(y,y') \\ F(f_2,f'_2)\downarrow\;\;\;\; \\ F(z,z') }$

does not depend on whether we first apply $F$ on $f_1, f'_1$ and on $f_2, f'_2$ separately, and then compose the result "vertically", or if we first compose vertically and apply $F$ (the "horizontal composition") to the result.

This compatibility condition is called the exchange law in 2-categories. It is implied by the standard monoidal structure on $\mathrm{Cat}$.

I am assuming that what Gregory had in mind is that this looks different from the "distributive" compatibility condition which we have for $\mathrm{Ab}$-categories. But the reason for that is just the choice of monoidal product in $\mathrm{Ab}$.

One choice would be the cartesian product of abelian groups. Using that for enriching over $\mathrm{Ab}$ does not produce the expected distributivity of composition over addition. Instead, this can in fact be understood as a special case of the above "exchange law" in 2-categories, namely if we think of an abelian group as a special case of a category (with a single object) with addition being the composition of morphisms.
But there is another monoidal structure on $\mathrm{Ab}$, namely the tensor product obtained by regarding abelian groups as $\mathbb{Z}$-modules. Using this monoidal structure when enriching produces the expected distributive compatibility condition.

This is more or less obvious, but maybe it doesn't hurt saying it. In fact, the only reason why I am making this comment is that I was myself mixed up about this at one point.

Posted by: urs on September 2, 2006 1:04 PM | Permalink | Reply to this

Re: Ringoids

John wrote:

An $R-\mathrm{Mod}$-category is called an $R$-algebroid or simply an algebroid.

What is puzzling, though, is that nobody seems to be aware of an equally nice characterization of the concept of Lie algebroid. (Or is it maybe the same, and I just don't see it?)

A Lie group has a Lie algebra. A Lie groupoid has a Lie algebroid. But while groups and groupoids have nice arrow-theoretic definitions, the definition of a Lie algebroid ($\to$) is a mess, comparatively.

Posted by: urs on September 2, 2006 1:10 PM | Permalink | Reply to this

Re: Ringoids

Urs writes:

But while groups and groupoids have nice arrow-theoretic definitions, the definition of a Lie algebroid ($\to$) is a mess, comparatively.

Indeed! On page 43 here I show a hypothesized "periodic table" of Lie n-groupoids, and on page 44 a corresponding periodic table of Lie n-algebroids. The first row of the periodic table of Lie n-algebroids is funny, because they have a smooth space of objects instead of a vector space.

It's possible that someone good at making things elegant can polish up the existing theory of Lie algebroids… but there seems to be something we don't understand about this stuff, which may only become clear when we study the whole periodic table of Lie n-algebroids.
Posted by: John Baez on September 3, 2006 1:33 AM | Permalink | Reply to this

Re: Ringoids

It's possible that someone good at making things elegant can polish up the existing theory of Lie algebroids…

Maybe it would be helpful to understand the semi-discrete case first.

Assume our groupoid has just a set of objects (instead of a manifold of them). Just a finite set, say. But assume that the automorphism group of each object is a Lie group $G$.

Can one associate a nice algebroid $A$ to such a groupoid? $\mathrm{Hom}_A(x,x)$ should probably be the enveloping algebra of the Lie algebra of $G$. What is $\mathrm{Hom}_A(x,y)$ for $x \neq y$?

Posted by: urs on September 19, 2006 11:59 AM | Permalink | Reply to this

Re: Ringoids

John wrote:

It's possible that someone good at making things elegant can polish up the existing theory of Lie algebroids…

Urs wrote:

Maybe it would be helpful to understand the semi-discrete case first. Assume our groupoid has just a set of objects (instead of a manifold of them). Just a finite set, say. But assume that the automorphism group of each object is a Lie group $G$. Can one associate a nice algebroid $A$ to such a groupoid? $\mathrm{Hom}_A(x,x)$ should probably be the enveloping algebra of the Lie algebra of $G$. What is $\mathrm{Hom}_A(x,y)$ for $x \neq y$?

I've always been a bit confused when you say "algebroid" to mean "Lie algebroid". To me they are different things, which reduce to "algebras" and "Lie algebras", respectively, in the one-object case. (For me an algebra with no adjectives in front means an associative unital algebra. An algebroid with no adjectives in front is a $\mathrm{Vect}$-enriched category. A one-object algebroid is then an algebra.)

But now I'm even more unhappy because you seem to be trying to construct an honest algebroid, rather than a Lie algebroid, from a Lie groupoid!

Anyway, let's follow your suggestion and take a Lie groupoid with a discrete set (= 0-manifold) of objects.
This is just a disjoint union (= coproduct) of Lie groups

$\coprod_i G_i .$

And, its Lie algebroid will be just the disjoint union of Lie algebras

$\coprod_i \mathrm{Lie}(G_i).$

And you're right that in this case, we can go a further step and form a "universal enveloping algebroid" of our Lie algebroid, namely just the disjoint union of universal enveloping algebras:

$\coprod_i \mathrm{U}(\mathrm{Lie}(G_i)).$

But, does anyone ever try to form a "universal enveloping algebroid" for a general Lie algebroid???

By the way, here is a paper that we should read:

• Chenchang Zhu, Lie n-groupoids and stacky Lie groupoids.

It talks about getting Lie n-groupoids from Lie n-algebroids, and it cites our paper on 2-groups from loop groups. Alissa Crans pointed it out to me.

Posted by: John Baez on September 21, 2006 2:22 AM | Permalink | Reply to this

Re: Ringoids

I've always been a bit confused when you say "algebroid" to mean "Lie algebroid".

I'll stop doing that. Bad habit.

you seem to be trying to construct an honest algebroid, rather than a Lie algebroid, from a Lie groupoid!

Yes, I was trying to get a handle on the question of what a nice arrow-theoretic description of Lie algebroid would be by passing from Lie algebras to their enveloping algebras. Maybe it's not fruitful. It was just a suggestion for how to possibly make progress.

take a Lie groupoid with a discrete set (= 0-manifold) of objects. This is just a disjoint union (= coproduct) of Lie groups

Wait, not necessarily. That case is not interesting enough to be of value here. We can have a discrete set of objects and still have morphisms $x \to y$ for $x \neq y$.

Consider a group $G_x = \mathrm{Hom}(x,x)$ and a group $G_y = \mathrm{Hom}(y,y)$.
Then morphisms $x \stackrel{f}{\to} y$ would be labeled by group isomorphisms $G_y \to G_x$, because

(1)$x \stackrel{f}{\to} y \stackrel{g_y}{\to } y \stackrel{f^{-1}}{\to} x := x \stackrel{g_x = f(g_y)}{\to} x \,.$

So, I thought, since I know how to associate an algebra $\mathrm{Hom}_A(x,x) = U(\mathrm{Lie}(G_x))$ to $G_x$ and analogously for $G_y$, can I maybe consistently find a vector space $\mathrm{Hom}_A(x,y)$ such that $A$ becomes an algebroid?

Posted by: urs on September 21, 2006 12:58 PM | Permalink | Reply to this

Re: Ringoids

Urs wrote:

We can have a discrete set of objects and still have morphisms $x \to y$ for $x \neq y$.

Whoops - can I blame my mistake on jet lag? I think I got confused because a category is said to be discrete when there are no morphisms of the sort you mention… but you clearly were talking about the other sort of discreteness, namely the objects forming a discrete space.

So, I thought, since I know how to associate an algebra $\mathrm{Hom}_A(x,x) = U(\mathrm{Lie}(G_x))$ to $G_x$ and analogously for $G_y$, can I maybe consistently find a vector space $\mathrm{Hom}_A(x,y)$ such that $A$ becomes an algebroid?

Yes, and you can just use $U(\mathrm{Lie}(G_x))$, with composition of morphisms being multiplication. The reason is that if a Lie groupoid $C$ has a discrete space of objects, it's smoothly equivalent to any skeleton $\mathrm{Sk}(C)$, and the latter is just a coproduct of Lie groups. This means we can just do stuff in the trivial case I mentioned and transport it over to your case using the smooth equivalence. The result will be as I said, if $x$ lies in the skeleton.

A couple of words of explanation in case any lurking readers want them:

A topological category is a category internal to $\mathrm{Top}$. These are different from categories in certain ways: in particular, they aren't always equivalent to a skeletal subcategory.
Given a category $C$ we can form a skeletal subcategory $\mathrm{Sk}(C)$ by taking one representative of each isomorphism class of objects, and all the morphisms between these. This skeleton will then be equivalent to $C$. But, this doesn't hold internal to $\mathrm{Top}$, since constructing the equivalence uses the axiom of choice, and the axiom of choice fails in $\mathrm{Top}$.

In other words, a skeleton automatically comes with an inclusion $\mathrm{Sk}(C) \to C$ but finding a map going the other way: $C \to \mathrm{Sk}(C)$ requires picking for each object of $C$ an object in the skeleton, and this choice can't usually be done in a continuous way. However, this choice can be done continuously when the space of objects of $C$ is discrete - since then continuity becomes vacuous.

All this is also true for "smooth categories", i.e. categories internal to your favorite category of smooth spaces, e.g. smooth manifolds.

Posted by: John Baez on September 21, 2006 4:09 PM | Permalink | Reply to this

Re: Ringoids

This means we can just do stuff in the trivial case I mentioned and transport it over to your case using the smooth equivalence.

Oh, of course, sure. This amounts operationally to picking a fixed isomorphism $G_x \to G_y$ and then using that to map all such isomorphisms to elements of $G_x$, say.

Hm, let's see. Suppose now I have a proper Lie groupoid, with a smooth manifold of objects. I want to build an algebroid $A$, such that $\mathrm{Hom}_A(x,x) = U(\mathrm{Lie}(G_x))$ for all objects $x$.

Then I might take $\mathrm{Hom}_A(x,y)$ to be the collection of pairs

(1)$(t,k) \in U(\mathrm{Lie}(G_x)) \times \mathrm{Iso}(G_x,G_y)$

divided out by the equivalence relation

(2)$(t,k) \sim (t',k') \;\; \Leftrightarrow t' = k'^{-1}(k(t)) \,.$

On the right is the induced action of a group automorphism on the Lie algebra.
The linear structure would be

(3)$c_1(t,k) + c_2(t',k) = (c_1 t + c_2 t', k) \,.$

Composition would be defined as

(4)$x \stackrel{(t_1,k_1)}{\to} y \stackrel{(t_2,k_2)}{\to} z = x \stackrel{(t_1 k_1^{-1}(t_2),k_2 \circ k_1)}{\to} z \,.$

Do I get an algebroid this way? Or did I make a mistake?

Posted by: urs on September 21, 2006 5:17 PM | Permalink | Reply to this

from algebroids to Lie algebroids

Above, I tried to associate to any Lie groupoid $\mathbf{G}$ a smooth algebroid $\mathbf{A}$ in such a way that if $\mathbf{G}$ has a single object then $\mathbf{A}$ is the universal enveloping algebra of the Lie algebra of $\mathbf{G}$.

I haven't checked this carefully, but let me assume for a moment a construction along the above lines does work. Then it seems to be rather straightforward to pass from $\mathbf{A}$ to a Lie algebroid, in a manner that generalizes how we would pass from $U(\mathrm{Lie}(G))$ to $\mathrm{Lie}(G)$ - namely by restricting to generators and taking the Lie product to be the commutator.

It's obvious what to do in the case that $\mathbf{G}$ is the transport groupoid of a trivial $G$-bundle $P$ (for $G$ some Lie group), i.e.

(1)$\mathbf{G} = P \times P /G \,.$

So let's look at how it works in this case and then try to reduce the general case to this trivial case.

Since in the trivial case all vertex groups $\mathrm{Hom}_\mathbf{G}(x,x) = G$ are canonically identified, we may forget all the isomorphism gymnastics that I mentioned above and simply identify all $\mathrm{Hom}$-sets as $G$.

Let's then say that a section of $\mathbf{G}$ is a vector field on the manifold of objects equipped with a smooth assignment of an element of $\mathrm{Lie}(G)$ to each vector of the vector field.

On these sections, we naturally have a Lie bracket operation obtained by a slight adaptation of the Lie bracket on vector fields. To compute $[v,w]$ we flow, at each point, a little along $v$, then a little along $w$, then back along $v$ and back along $w$.
Except that we now accompany this process by the corresponding trajectory on $G$, which is at each step tangent to the corresponding Lie algebra element associated to our vector field.

I guess this does indeed reproduce the Lie algebroid structure of the Lie algebroid associated to $P \times P/G$.

For the case of a general groupoid, I guess we simply add to the definition of section given above the additional data consisting of, for each object, a small neighbourhood of that object and a choice of isomorphisms of all vertex groups in that neighbourhood. Then we compute the brackets of sections as above, by doing the computation pointwise in one of these neighbourhoods, using the locally chosen isomorphisms to get us back to the trivial case, locally.

Posted by: urs on September 21, 2006 9:01 PM | Permalink | Reply to this

Re: Ringoids and categories

An earlier, useful textbook with contents and references relevant to this question is also:

– Abelian Categories with Applications to Rings and Modules – by N. Popescu, Academic Press: New York and London, 1973.

Posted by: I.C. Baianu on September 3, 2006 2:17 AM | Permalink | Reply to this

Re: Ringoids

This page now linked under "horizontal categorification" in the $n$Lab.

Posted by: Urs Schreiber on December 1, 2008 7:33 PM | Permalink | Reply to this
Gravity (application)

We discuss problems that highlight certain aspects of the study of gravity. The questions are categorized in terms of the characterizing features of the subject matter:

• Acceleration at a Height
• Acceleration at a Depth
• Comparison of acceleration due to gravity
• Rotation of Earth
• Comparison of gravitational acceleration
• Rate of change of gravity
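As a quick illustration of the first two topics, here is a sketch using the standard textbook formulas (assumed here, not taken from the module itself) for gravity at a small height h above the surface and at a depth d below it:

```python
# Illustrative formulas (standard results, assumed rather than derived here):
#   at a height h << R:  g_h = g * (1 - 2*h/R)   (first order in h/R)
#   at a depth d:        g_d = g * (1 - d/R)     (uniform-density Earth)

g = 9.8        # acceleration at the surface, m/s^2
R = 6.4e6      # Earth's radius, m

def g_at_height(h):
    return g * (1 - 2 * h / R)

def g_at_depth(d):
    return g * (1 - d / R)

# At the surface both reduce to g itself:
assert g_at_height(0) == g == g_at_depth(0)
# Gravity decreases both above and below the surface:
assert g_at_height(10000) < g and g_at_depth(10000) < g
```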
Welcome to Mathinary.com

Mathinary is an educational math tool for students and for parents who help their children with homework. It contains simple descriptions of the different math topics. The descriptions are easy to understand and do not contain any difficult words.

Many of the topics contain calculators, which can calculate the most common math problems and at the same time show intermediate results. Each calculator explains how the calculation is done in the way a person would do it, showing every single step of the calculation.

Mathinary is based on the Danish project "Regneregler", which is used daily by more than 30,000 people. The project was nominated for a World Summit Award in 2012 and has received a lot of attention from media and politicians.

The Danish prime minister receiving a demonstration of Mathinary.

World Summit Award Mobile Nominee 2012

About Mathinary

Math made simple

Mathinary is an educational math tool for students and parents who help their children with math. Mathinary contains formulas, explanations and calculators. The calculators give both explanations, intermediate results and final results. The calculators simulate how a person would calculate the result, by showing every step of the calculation including descriptions.
Zycus Aptitude paper

Posted Date: 10-Aug-2009 Category: Placement Papers Author: vedatrayi Member Level: Bronze Points: 2

Zycus Infotech Aptitude question paper (wording not exact)

1. Two persons start from a point & go in opposite directions. After going 3 km each, they turn left & walk 4 km. How far are they now from each other?
Ans: 10 km (Pythagoras theorem: the 6 km and 8 km separations form a 6-8-10 right triangle)

2. In an objective test, a correct answer scores 4 marks, and a wrong answer scores 2 marks. A student scores 480 marks from 150 questions. How many answers are correct?
Answer: 120

3. By which smallest number should 2880 be divided to make it a perfect square?
Ans: 5 (2880 ÷ 5 = 576 = 24²)

4. Which common number should be subtracted from the numerator and denominator of 17/24 to make it 1/2?
Ans: 10 (7/14 = 1/2)

5. If each side of a rectangle is increased by 100%, by what % does the area increase?
Ans: 100%

6. Father is 30 years older than the son. He will be only thrice as old as his son after 5 years. What is the father's present age?
Ans: 40 years

7. If on an item a company gives a 25% discount, they earn 25% profit. If they now give a 10% discount, then what % profit do they make? (3 marks)
Ans: 50% (0.75 × list = 1.25 × cost, so list = 5/3 × cost; 0.9 × 5/3 = 1.5)

8. Successive discounts of 20% and 15% are equal to a single discount of?
Ans: 32% (0.80 × 0.85 = 0.68)

9. The sum of the digits of a 2-digit number is 8. When 18 is added to the number, the digits get reversed. What is the number?
Ans: 35

10. If a & b are 2 positive integers and (a-b)/3.5 = 4/7, then
1) b < a 2) b > a 3) b = a 4) b >= a
Ans: Option 1) b < a

11. A's population is 68000 and it decreases by 80 per year. B's population is 42000 and it increases by 120 per year. After how many years will both cities have the same population?
Ans: 130

12. An exam consists of 200 questions to be solved in 3 hours, out of which 50 are maths questions. It is suggested that twice as much time be spent on each maths question as on each other question. How many minutes should be spent on maths problems?
Ans: 72 mins (1 hr 12 mins: 150 + 50×2 = 250 time units in 180 minutes, so each unit is 0.72 min, and maths gets 100 units)

Responses to "Zycus Aptitude paper"

Guest Author: Pratik Roy 28 Jan 2012
Rectification of the question 4.
Which common no should be added or subtracted to 17/24 to make it 1/2.
17/24 - n = 1/2
(17 - 12)/24 = n
5/24 = n
Ans: 5/24

Guest Author: Vinay 22 May 2012
The answer for question No. 2 is wrong....
No. of correct answers = x
No. of incorrect answers = 150 - x
Solving it, we get:
Answer should be 90!

Guest Author: Maddy 01 Aug 2013
Ans to 5th question (i.e. the rectangle question) will be 300%: as per the condition l becomes 2l and b becomes 2b, hence A = 2l × 2b = 4lb. So the area increases by 300%.
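A few of the uncontested answers above can be double-checked with a short script (my own illustration, not part of the original paper):

```python
import math

# Q1: two people walk 3 km in opposite directions (6 km apart), then each
# turns left and walks 4 km -- their perpendicular offsets add up to 8 km.
assert math.hypot(3 + 3, 4 + 4) == 10.0

# Q6: father is 30 years older; in 5 years he is thrice the son's age.
# F = S + 30 and F + 5 = 3*(S + 5)  =>  S = 10, F = 40.
son = (30 + 5 - 3 * 5) / 2   # from S + 35 = 3S + 15
assert son == 10 and son + 30 == 40

# Q11: 68000 - 80t = 42000 + 120t  =>  200t = 26000  =>  t = 130.
assert (68000 - 42000) / (80 + 120) == 130
```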
Control.Category.Cartesian

class (Symmetric k (Product k), Monoidal k (Product k)) => Cartesian k where
  (&&&) :: (a `k` b) -> (a `k` c) -> a `k` Product k b c

bimapProduct :: Cartesian k => k a c -> k b d -> Product k a b `k` Product k c d
  Free construction of Bifunctor for the product Bifunctor Product k if (&&&) is known.

braidProduct :: Cartesian k => k (Product k a b) (Product k b a)
  Free construction of Braided for the product Bifunctor Product k.

associateProduct :: Cartesian k => Product k (Product k a b) c `k` Product k a (Product k b c)
  Free construction of Associative for the product Bifunctor Product k.

disassociateProduct :: Cartesian k => Product k a (Product k b c) `k` Product k (Product k a b) c
  Free construction of Disassociative for the product Bifunctor Product k.

class (Monoidal k (Sum k), Symmetric k (Sum k)) => CoCartesian k where
  (|||) :: k a c -> k b c -> Sum k a b `k` c

bimapSum :: CoCartesian k => k a c -> k b d -> Sum k a b `k` Sum k c d
  Free construction of Bifunctor for the coproduct Bifunctor Sum k if (|||) is known.

braidSum :: CoCartesian k => Sum k a b `k` Sum k b a
  Free construction of Braided for the coproduct Bifunctor Sum k.

associateSum :: CoCartesian k => Sum k (Sum k a b) c `k` Sum k a (Sum k b c)
  Free construction of Associative for the coproduct Bifunctor Sum k.

disassociateSum :: CoCartesian k => Sum k a (Sum k b c) `k` Sum k (Sum k a b) c
  Free construction of Disassociative for the coproduct Bifunctor Sum k.
Open 24/7_Window Film - September-October 2010

Open 24/7

Using Microsoft Excel as a Sales Tool

By Manny Hondroulis

In my last column I discussed the impact of Microsoft PowerPoint. Now we turn to Microsoft Excel, another application included in Microsoft Office software.

Excel is a spreadsheet application made up of cells that are displayed in rows and columns. At first glance, an Excel spreadsheet can be quite intimidating, with 256 columns and 65,536 rows for a grand total of 16,777,216 cells. Each cell can contain data, such as a user-inputted number or string of text, or a formula that produces a number or string of text.

What does that mean for us? If you're calculating an installation's square footage using pencil, paper and a calculator, then you're going to love what Excel can do for you. Gone are the days when you have to multiply a window's height by its width, divide by 144, and then multiply by the quantity of windows and scribble the results on note paper. A simple spreadsheet, however, can do this work in a fraction of the time.

Know the Details

Before we get into the specifics, let me explain one part of Excel terminology. We've already established that a spreadsheet is made of rows and columns of cells. The cell located at the intersection of Column A and Row 1 is referred to as Cell A1.

In this simple exercise we're going to input a window's dimensions (height and width in inches) and the quantity of windows. For each window type, we're going to use a new row in the spreadsheet. Type the word Quantity in Cell A1. Then type the words Width, Height, Square Foot and Total Project in Cells B1, C1, D1, and E1 respectively. Columns A, B, and C will require inputted data from you while Columns D and E will automatically populate with the window's (Column D) or project's (Column E) square footage (see table 1). We're going to ask Excel to calculate the square footage of each window set.
So in Cell D2, type the following formula: =A2*B2*C2/144, and in Cell D3 type =A3*B3*C3/144. In typing this formula, we're asking Excel to multiply the window's height by its width, divide by 144 to convert from square inches to square feet, and multiply by the number of windows in this set.

Next we need to create a formula that will tell us the total square footage of the project. So in Cell E2 enter the following formula: =SUM(D:D). This formula will create a running total of the square footage of each window set for each row. Save your spreadsheet on your desktop as "Takeoff Template."

Putting it All Together

In Row 2, enter 10 in Cell A2 as the quantity of windows, and 40 and 60 in Cells B2 and C2 as the window's width and height, respectively. In Row 3, enter 35 in Cell A3 as the quantity of windows, and 57 and 72 in Cells B3 and C3 as the window's width and height, respectively. Cell D2 will automatically calculate the total square footage of ten windows that have a width of 40 inches and a height of 60 inches, while Cell D3 will automatically calculate the total square footage of 35 windows that have a width of 57 inches and a height of 72 inches. The total square footage is 1164.17 as shown in Cell E2 (see table 2).
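The takeoff arithmetic can be sanity-checked outside Excel as well; here is a short Python sketch of the same calculation (my own, using the example numbers above):

```python
# Sanity check of the takeoff arithmetic from the example above:
# each row is (quantity, width_in, height_in); square footage per row is
# qty * width * height / 144, and the project total is the sum over rows.

rows = [
    (10, 40, 60),   # ten 40" x 60" windows
    (35, 57, 72),   # thirty-five 57" x 72" windows
]

def row_sqft(qty, width, height):
    return qty * width * height / 144.0

total = sum(row_sqft(*row) for row in rows)
print(round(total, 2))  # 1164.17, matching Cell E2
```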
Using Excel, you’ll spend less time calculating square footage and more time Manny Hondroulis is marketing manager for Energy Performance Distribution in Baltimore. Mr. Hondroulis’ opinions are solely his own and not necessarily those of this magazine. © Copyright 2010 Key Communications Inc. All rights reserved. No reproduction of any type without expressed written permission.
Cyl(E) = Borel(E) for E non-reflexive Grothendieck Banach space

This is sort of a follow-up to Borel(X) = \sigma(X') for X non-separable.

PROBLEM: Given a Banach space $E$ over $\mathbb{K} \in \{\mathbb{C}, \mathbb{R}\}$ that has the Grothendieck property. Does $\hat{C}(E) = \mathcal{B}(E)$ imply $E$ is reflexive? (This would in turn imply that $E$ is separable.)

Some definitions:

• A Banach space is a Grothendieck space if a sequence in $E'$ which is $\sigma(E', E)$-convergent is automatically $\sigma(E', E'')$-convergent. Equivalently: every $\sigma(E', E)$ zero sequence has a subsequence which is $\sigma(E', E'')$-convergent; equivalently: every linear, bounded operator from $E$ to $c_0$ (or any separable Banach space) is automatically weakly compact.
• The $\sigma$-algebra $\hat{C}(E)$ is the $\sigma$-algebra generated by sets of the form $\mathcal{C}_{u_1, \cdots, u_n; C} := \{x \in E : (u_1(x), \cdots, u_n(x)) \in C\}$ where $u_1, \cdots, u_n \in E'$, $C \in \mathcal{B}(\mathbb{K}^n)$ and $n \in \mathbb{N}$.
• The $\sigma$-algebra $\hat{C}(E)$ equals the $\sigma$-algebra of weak Baire sets $\mathcal{B}_0(E, \sigma(E, E'))$ for every locally convex space $E$ (see [2], Theorem 2.3).
• The inclusion $\hat{C}(E) \subset \mathcal{B}(E)$ is trivially true. If $E$ is separable then $\hat{C}(E) = \mathcal{B}(E)$. [To see this, use the Hahn-Banach theorem to show that $\mathcal{B}_E \in \hat{C}(E)$. As translations and scalar multiplications are measurable with regard to the cylindrical $\sigma$-algebra, the other inclusion follows.]
• A reflexive space is automatically Grothendieck.
• For a separable Grothendieck space $E$ the identity is weakly compact, so $E$ is reflexive.
• A reflexive space $E$ with $\hat{C}(E) = \mathcal{B}(E)$ is automatically separable ([1], Prop. 2.6, p. 19). Without reflexivity, the equality $\hat{C}(E) = \mathcal{B}(E)$ does not imply $E$ is separable in general.
• The example $E = \ell^2(\mathbb{R})$ shows that there is a reflexive and non-separable space with $\mathcal{B}(E) \not= \hat{C}(E)$.
• Edgar's example below or $E = \ell^{\infty} = C(\beta \mathbb{N})$ gives a non-reflexive Grothendieck space with $\mathcal{B}(E) \not= \hat{C}(E)$. The question therefore: does every non-reflexive Grothendieck space have that property?
• There are non-reflexive Grothendieck spaces which do not contain $\ell^{\infty}$ (cf. [3]). So we can't simply reduce to this case.

I don't know much more about Grothendieck spaces, though, or characterizations of them that might be helpful.

[1] N. N. VAKHANIA, V. I. TARIELADZE, S. A. CHOBANYAN, Probability Distributions on Banach Spaces, Mathematics and its Applications (D. Reidel Publishing Company), 1987
[2] http://www.iumj.indiana.edu/IUMJ/FULLTEXT/1977/26/26053
[3] R. HAYDON, A non-reflexive Grothendieck space that does not contain $\ell^{\infty}$, Israel Journal of Mathematics, Vol. 40, No. 1, 1981

EDIT: I rephrased the question and added some information.

Clarification needed. According to en.wikipedia.org/wiki/Grothendieck_space , every reflexive space is Grothendieck. – Gerald Edgar Jul 21 '10 at 11:56
Yes. I better edit the question title to "non-reflexive" Grothendieck space if that makes it clearer. – santker heboln Jul 21 '10 at 13:40

1 Answer

Space $C(K)$ of continuous functions on a Stone space $K$ is Grothendieck, right? So take $K$ so large that countably many continuous functions do not separate points in $K$. Then (as in the $l^2(I)$ answer to the cited Question 24432) the weak Baire sets (= the cylindrical sigma-algebra) are not equal to the weak Borel sets, and certainly not equal to the norm Borel sets, since the closed unit ball is not a weak Baire set.

I'm not sure I understand you correctly, but I wanted an example where the cylindrical algebra is equal to the Borel sigma algebra.
In the $\ell^2(I)$ case and the $C(K)$ case for e.g. $K = \beta \mathbb{N}$ it is not, as we know from the other question. Is there some characterisation of Grothendieck spaces that I'm not aware of you are using here? As in: are all non-reflexive Grothendieck spaces of $C(K)$ type (where $K$ is Stonean)? – santker heboln Jul 22 '10 at 6:08
An expression or an equation that contains the variable squared, but not raised to any higher power. For instance, a quadratic equation in x contains x^2 but not x^3. Similarly, a quadratic expression, or a quadratic form, contains its variable(s) squared but not raised to any higher power. If there is more than one variable (say, x and y), quadratic can mean that they are multiplied together in pairs (xy) but not in threes (such as x^2y). The graph of a quadratic equation is known as a quadratic curve; the curve of the general quadratic equation y = ax^2 + bx + c is a parabola.
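As a small worked illustration (not part of the original entry): the x-intercepts of the parabola y = ax^2 + bx + c, i.e. the real roots of ax^2 + bx + c = 0, follow from the quadratic formula x = (-b ± √(b² − 4ac)) / 2a.

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots: the parabola does not cross the x-axis
    r = math.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

print(quadratic_roots(1, -3, 2))  # x^2 - 3x + 2 = 0  ->  (1.0, 2.0)
```

A negative discriminant corresponds to a parabola lying entirely above or below the x-axis.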
Statistical learning of peptide retention behavior in chromatographic separations: a new kernel-based approach for computational proteomics

BMC Bioinformatics. 2007; 8: 468.

High-throughput peptide and protein identification technologies have benefited tremendously from strategies based on tandem mass spectrometry (MS/MS) in combination with database searching algorithms. A major problem with existing methods lies within the significant number of false positive and false negative annotations. So far, standard algorithms for protein identification do not use the information gained from separation processes usually involved in peptide analysis, such as retention time information, which is readily available from chromatographic separation of the sample. Identification can thus be improved by comparing measured retention times to predicted retention times. Current prediction models are derived from a set of measured test analytes, but they usually require large amounts of training data. We introduce a new kernel function which can be applied in combination with support vector machines to a wide range of computational proteomics problems. We show the performance of this new approach by applying it to the prediction of peptide adsorption/elution behavior in strong anion-exchange solid-phase extraction (SAX-SPE) and ion-pair reversed-phase high-performance liquid chromatography (IP-RP-HPLC). Furthermore, the predicted retention times are used to improve spectrum identifications by a p-value-based filtering approach. The approach was tested on a number of different datasets and shows excellent performance while requiring only very small training sets (about 40 peptides instead of thousands).
Using the retention time predictor in our retention time filter improves the fraction of correctly identified peptide mass spectra significantly. The proposed kernel function is well-suited for the prediction of chromatographic separation in computational proteomics and requires only a limited amount of training data. The performance of this new method is demonstrated by applying it to peptide retention time prediction in IP-RP-HPLC and prediction of peptide sample fractionation in SAX-SPE. Finally, we incorporate the predicted chromatographic behavior in a p-value based filter to improve peptide identifications based on liquid chromatography-tandem mass spectrometry.

Experimental techniques for determining the composition of highly complex proteomes have been improving rapidly over the past decade. The application of tandem mass spectrometry-based identification routines has resulted in the generation of enormous amounts of data, requiring efficient computational methods for their evaluation. There are numerous database search algorithms for protein identification such as Mascot [1], Sequest [2], OMSSA [3] and X!Tandem [4], as well as de-novo methods like Lutefisk [5] and PepNovo [6]. Furthermore, there are a few methods like InsPecT [7] which use sequence tags for pruning the possible search space, applying more computationally expensive and more accurate scoring functions afterwards. Database search algorithms generally construct theoretical spectra for a set of possible peptides and try to match these theoretical spectra to the measured ones to find the candidate(s) which match(es) best. In order to distinguish between true and random hits, it is necessary to define a scoring threshold, which eliminates all peptide identifications with scores below it. This threshold value is chosen quite conservatively to get very few false positives.
Consequently, there is a significant number of correct identifications below the threshold that are not taken into account, although these spectra often correspond to interesting (e.g. low abundance) proteins. One of the goals of this work was to increase the number of reliable identifications by filtering out false positives in this 'twilight zone' below the typical threshold. There are various studies addressing this issue [8-10] by calculating the probability that an identification is a false positive. Standard identification algorithms are based on MS/MS data and do not use the information inherent to the separation processes typically used prior to mass spectrometric investigation. Since this additional experimental information can be compared to predicted properties of the peptide hits suggested by MS/MS identification, false positive identifications can be recognized. In SAX-SPE, it is important to know whether a peptide binds to the column or flows through. This information can also be incorporated into the identification process to filter out false positive identifications. Oh et al. [11] derived several chemical features such as molecular mass, charge, length and a so-called sequence index of the peptides. These features were subsequently used in an artificial neural network approach to predict whether a peptide binds to the SAX column or not. The sequence index is a feature reflecting the correlation of pI values of consecutive residues. Strittmatter et al. [12] included the experimental retention time from an ion-pair reversed-phase liquid chromatographic separation process in a peptide scoring function. They used a retention time predictor based on an artificial neural network [13], but a number of other retention time predictors exist [14,15]. If the deviation between observed and predicted retention time is large, then the score of the scoring function becomes small.
Since they only considered the top scoring identifications (rank = 1), they missed correct identifications of spectra where a false positive identification had a larger score than the correct one. We also address these cases in our work, demonstrating that filtering out identifications with a large deviation between observed and predicted retention time significantly improves the classification rate of identifications with small maximal scores. Only recently, Klammer et al. [16] used support vector machines (SVMs) [17] to predict peptide retention times. Nevertheless, they used standard kernel functions and stated that they needed at least 200 identified spectra with high scores to train the learning machine. When applying machine learning techniques to the prediction of chromatographic retention, a concise and meaningful encoding of the peptide properties is crucial. The features used for this encoding must capture the essential properties of the interaction of the peptide with the stationary and the mobile phases. These properties are mostly determined by the overall amino acid composition, by the sequence of the N- and C-terminal ends, and by the sequence in general. One of the most widely applied machine learning techniques is the SVM. SVMs use a kernel function to encode distances between individual data points (in our case, the peptides).
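To make the SVM-plus-kernel idea concrete, the sketch below predicts retention times from a simple amino-acid-composition encoding with an RBF-kernel support vector regressor. The peptides, their "normalized retention times" and the hydrophobic-elutes-later intuition are toy assumptions for illustration, not the paper's model or data:

```python
import numpy as np
from sklearn.svm import SVR

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(peptide):
    """20-dim amino-acid composition vector (counts / length)."""
    return np.array([peptide.count(a) for a in AMINO_ACIDS], float) / len(peptide)

# Toy peptides with made-up normalized retention times
# (hydrophobic sequences are assumed to elute later):
train = [("KKDEGG", 0.10), ("LLLFAV", 0.90), ("KDEAGS", 0.20),
         ("FILVWA", 0.95), ("GGSSTN", 0.30), ("ALVFIM", 0.85),
         ("DEKRHG", 0.05), ("MLVIFA", 0.88)]
X = np.array([composition(p) for p, _ in train])
y = np.array([rt for _, rt in train])

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
# A hydrophobic query should be predicted later than a charged/polar one:
late = model.predict([composition("LLFVIA")])[0]
early = model.predict([composition("KKDDEG")])[0]
print(late > early)
```

A composition encoding discards positional information; the kernel discussed in the following paragraphs is precisely an attempt to keep (smoothed) positional signals as well.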
All of these kernels (except the spectrum kernel) were introduced for sequences of the same length. However, the length of peptides typically encountered in computational proteomics experiments varies significantly, ranging roughly from 4–40 amino acids. Because it can be assumed that the local-alignment kernel [23], which can also handle sequences of different lengths, does not suit this kind of problem perfectly, we elaborated a new kernel function, which can be applied to sequences of different lengths. Consequently, this new kernel function is applicable to a wide range of computational proteomics experiments. In 2006 Petritis et al. [14] evaluated different features like peptide length, sequence, hydrophobicity, hydrophobic moment and predicted structural arrangements like helix, sheet or coil for the prediction of peptide retention times in reversed-phase liquid chromatography-MS. They used an artificial neural network and showed that the sequence information, together with sequence length and hydrophobic moment yield the best prediction results. In their study, they used only the border residues of the peptide sequences; their evaluation showed that a border length of 25 worked best for their dataset. Since they used one input node for every position of the borders of the peptide, they needed a very large training set, which means that they trained their learning machine on 344,611 peptide sequences. Since one cannot routinely measure such an amount of training sequences before starting the actual measurements, it is reasonable to apply a sort of gaussian smoothing effect to the sequence positions. This means that in our representation, not every amino acid at every position is considered but rather regions (consecutive sequence positions) where the amino acid occurs. The distance of the amino acids of two sequences is scored with a gaussian function. 
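A minimal sketch of such a position-smoothed sequence kernel is given below. It mirrors the oligo kernel for single residues (every pair of identical amino acids contributes a Gaussian of their position difference); it is an illustration of the smoothing idea, not the authors' exact POBK:

```python
import math

def smoothed_kernel(seq_a, seq_b, sigma=2.0):
    """Toy position-smoothed sequence kernel: each pair of identical amino
    acids at positions p (in seq_a) and q (in seq_b) contributes
    exp(-(p - q)^2 / (4 * sigma^2)). Larger sigma weights overall
    composition over exact position; works for sequences of different
    lengths. NOT the exact POBK of the paper."""
    k = 0.0
    for p, a in enumerate(seq_a):
        for q, b in enumerate(seq_b):
            if a == b:
                k += math.exp(-((p - q) ** 2) / (4.0 * sigma ** 2))
    return k

# A peptide is more similar to itself than to a mostly unrelated sequence:
print(smoothed_kernel("PEPTIDE", "PEPTIDE") > smoothed_kernel("PEPTIDE", "GAVLIMC"))
```

Note that the function is symmetric in its two sequence arguments, as any kernel must be.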
The size of the region modeled by our kernel function can be controlled by the kernel parameter σ, which is learned by cross-validation. Because of this, and because we use support vector machines in combination with our kernel function, the number of necessary training sequences can be decreased dramatically. By just using the amino acid sequence, we do not rely on features which are important only for certain separation processes. This means that we learn the features (i.e. composition (using a large sigma in the kernel function), sequence length, hydrophobic regions ...) which are important for the prediction process within the data, because they are reflected in the amino acid sequence. This is why our kernel function can be used for retention time prediction in IP-RP-HPLC as well as for fractionation prediction in SAX-SPE. When applied to the same dataset as Oh et al. [11] used, our kernel function in conjunction with support vector classification predicts 87% of the peptides correctly. This is better than all reported methods. Furthermore, our retention time prediction model is based on a new kernel function in conjunction with support vector regression [24], which allows us to predict peptide retention times very accurately, requiring only a very small amount of training data. This method has a better performance on a comparative test set than the artificial neural network method used by Strittmatter et al. [12], even with a much smaller training set. Additionally, our method outperforms the methods introduced by Klammer et al. [16]. In the first part of the paper, we demonstrate that our new kernel function, in combination with support vector classification, achieves better results in SAX-SPE fractionation prediction than any published method. Next, we show that our kernel function also performs very well in peptide retention time prediction in IP-RP-HPLC with very little training data required.
This allows us to train our predictor on a dataset acquired in one run to predict retention times for two further runs, and to filter the data by deviation in observed and predicted retention time. This leads to a huge improvement in the classification rate of the identifications of spectra for which only identifications with small scores can be found, and also improves the classification rate of high scoring identifications. The "Methods" section briefly gives an introduction to support vector classification and support vector regression. Then our new kernel function is introduced and we explain our p-value based filtering approach. Finally, there is an explanation of the datasets used in this study.

Results and Discussion

In this section, we present the results for two different application areas of our new kernel function. The first one is peptide sample fractionation prediction in SAX-SPE, and the second one is peptide retention time prediction in IP-RP-HPLC experiments. For peptide sample fractionation prediction, we demonstrate that our method performs better than the established method. In retention time prediction, we show that we perform very well with just a fractional amount of training data required. This allows us to train our predictor with a dataset measured in one run to predict retention times of the next runs very accurately. The peptide identifications are improved afterwards by filtering out all peptides which have a large deviation between observed and predicted retention time.

Performance of Peptide Sample Fractionation Prediction

To be able to compare our results with existing methods, we used the same dataset and the same setup as Oh et al. [11]. This means that we randomly partitioned our data into a training set and a test set, having 120 peptides for training and 30 peptides for testing. The performance was measured by classification success rate (SR), which is the number of successful predictions divided by the number of predictions.
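A compact sketch of this evaluation protocol (random 120/30 split, model selection by cross-validation, SR on the held-out peptides) is shown below. The synthetic features and the scikit-learn RBF-SVM stand in for the actual peptide data and the kernel used in the paper; the number of repetitions is reduced for brevity:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Synthetic stand-in for 150 peptides with 4 features (the real study uses
# molecular weight, sequence index, length and charge):
X = rng.randn(150, 4)
y = (X[:, 3] + 0.3 * X[:, 0] > 0).astype(int)  # binds vs. flows through

rates = []
for seed in range(10):  # the paper repeats 100 random partitionings
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=30,
                                              random_state=seed)
    # model selection by five-fold CV over an exponential parameter grid:
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [2.0 ** i for i in range(-2, 6)]}, cv=5)
    grid.fit(X_tr, y_tr)
    rates.append(float((grid.predict(X_te) == y_te).mean()))  # success rate

print(f"mean SR = {np.mean(rates):.2f}")
```

Averaging the success rate over many random partitionings reduces the variance introduced by any single split.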
The whole procedure was repeated 100 times to minimize random effects. The training was conducted by a five-fold cross-validation (CV) and the model was trained using the best parameters from the CV and the whole training set. To compare our new kernel function with established kernels, we used the best four feature combinations of Oh et al. [11] and trained an SVM with the polynomial and the RBF kernel for each feature combination. Feature number one is molecular weight, the second is sequence index, the third is length and the fourth feature is the charge of the peptide. We used the same evaluation setting as described above; in the five-fold CV, the SVM parameter C (grid of the form 2^-4·2^i), the parameter σ of the RBF kernel (grid of the form 2^-15·2^i) and the degree d of the polynomial kernel were optimized. The results are listed in Table 1. It seems as if the fourth feature (i.e. the charge of the peptide) is the most important factor, but molecular weight also seems to improve the prediction performance.

Table 1: Peptide sample fractionation prediction using standard SVMs. This table shows the classification success rates of the different feature combinations for SVMs with the polynomial and the RBF kernel on the dataset of Oh et al. [11]. The features are (1) ...

An independent approach which just uses the sequence information of the peptides was performed using the local-alignment kernel by Vert et al. [23]. Using the same setup as described above, we used the BLOSUM62 matrix [25]; the kernel function parameters β, d (gap opening) and e (gap extension) were optimized by cross-validation, but this approach did not reach the performance of the method of Oh et al. [11]. Therefore more appropriate kernel functions are needed, like our new paired oligo-border kernel (POBK), which is explained in the "Methods" section. The kernel function has a kernel parameter b which is the border length of the peptide. A small b means that only few border residues of the peptides contribute to the kernel function, and a border length equal to the sequence length would mean that all residues contribute to the kernel function value.
To determine the best border length of the POBK, we performed the evaluation for a range of values of b. The results, depicted in Fig. 1, show that for b greater than 19 the SR does not change significantly, with a slight improvement for b = 22. This is why in the following, only the POBK for b = 22 is considered.

Figure 1: Border length evaluation of the POBK. This figure shows the evaluation of SR using different border lengths b for the POBK on the dataset of Oh et al. [11].

A comparison of the SR for different methods can be found in Fig. 2. The first two bars represent the SR performance of the best SVMs using standard kernels of Table 1. The third bar demonstrates the performance of an SVM with the local-alignment kernel. The fourth bar shows the performance of the best predictor in Oh et al., which is 0.84. The last bar represents the SR of the POBK, which is introduced in this paper, for peptide sample fractionation and retention time prediction. The SR of this method is 0.87, which is significantly better than that of all other approaches.

Figure 2: Performance comparison for peptide sample fractionation prediction. Comparison of classification success rates for different methods predicting peptide adsorption on the dataset of Oh et al. [11].

Correctly Predicted Peptides in Peptide Sample Fractionation Prediction

In Oh et al. [11] the prediction process with 100 random partitionings was done for the best four predictors, and for every peptide, all predictions were stored. These authors then classified a peptide by the majority label which had been assigned to the peptide. By this method, they were able to assign 127 of the 150 peptides correctly, which corresponds to an SR of 0.8467. To be able to compare this procedure with our method, we made the assumption that for a particular peptide, the SVM would make a correct assignment more often.
Furthermore, we assumed that if we also stored the predictions for each peptide and each run, we could also get a majority predictor which yields good performance. The evaluation of this procedure shows that we are able to predict 134 peptides correctly in this setting, which is an SR of 0.8933. Fig. 3 shows a histogram of the SRs for the different peptides for the method by Oh et al. [11] and the SVM with the POBK.

Figure 3: Histogram of classification success rate. This figure shows a histogram of the SR of particular peptides using the majority classifier on the dataset of Oh et al. [11]. This is compared to the ensemble prediction of Oh et al.

Evaluation of Model Performance for Peptide Retention Time Prediction

For peptide retention time prediction, we had several goals. The first one was to elaborate a retention time predictor showing performance equivalent to established methods but requiring just a fraction of the training set size. To demonstrate that our retention time predictor fulfills the desired constraints, we performed a two-deep CV on the Petritis dataset [14] described in the "Methods" section. This means that we partitioned the data randomly into ten partitions and performed a CV with the data from nine of the ten partitions to find the best parameters. Later, we trained our model with the best hyperparameters and the data of the nine partitions to evaluate the performance of the predictor on the omitted tenth partition. This was done for every possible combination of the ten partitions and the whole procedure was repeated ten times to minimize random effects. A plot of the observed normalized retention time against the predicted normalized retention time can be seen in Fig. 4 for one of the ten two-deep CV runs. Since the standard deviation over the ten runs was 0.0007, this plot is quite representative of the model performance. Petritis et al. [14] showed that their method performs better than those of Meek [26], Mant et al.
[27], Krokhin et al. [28] and Kaliszan et al. [29], using this dataset for validation. Thus, in Table 2, we only compare the performance of our method with the work of Petritis et al. [14]. This comparison is somewhat biased since we only had a fraction of the original validation set for training, which means that our training set size was 300 times smaller than that of the other methods. Nevertheless, our method performs better than the model [13] which is used by Strittmatter et al. [12] in their filtering approach. The only model with a better performance is the artificial neural network with 1052 input nodes and 24 hidden nodes [14]. It is obvious that a model like this needs a very large amount of training data. Petritis et al. [14] trained their model with more than 344,000 training peptides. Therefore, this type of model is not suitable for retention time prediction for measurements under different conditions or with different machines, because it is very time-consuming to acquire identification and retention time data for more than 344,000 training peptides before starting the actual measurements. To demonstrate that our method is robust enough for training on verified data of one single run, we constructed a non-redundant dataset out of datasets vds1 (available as Additional file 1) and vds2 (available as Additional file 2). A detailed description of these datasets can be found in the "Methods" section. For different training set sizes s, the evaluation depicted in Fig. 5 indicates that for the POBK, 40 verified peptides are enough to train a predictor which has a squared correlation coefficient between observed and predicted normalized retention time greater than 0.9 on the test set. This number is much smaller than the number of verified peptides we get for one run, since vds1 has 144 peptides, vds2 has 133 peptides and vds3 (available as Additional file 3) has 116.
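A training-size experiment of this kind (draw s training samples at random, fit, record the squared correlation on the remainder, repeat and average) can be sketched generically. The synthetic regression data and the fixed ν-SVR hyperparameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.svm import NuSVR

# Synthetic stand-in for peptides with normalized retention times:
X, y = make_regression(n_samples=300, n_features=6, noise=0.1, random_state=0)
y = (y - y.mean()) / y.std()  # normalize the "retention times"

rng = np.random.RandomState(1)
curve = {}
for s in (10, 20, 40, 80):      # training set sizes on the learning curve
    r2s = []
    for _ in range(5):          # average over random draws of the training set
        idx = rng.permutation(len(X))
        model = NuSVR(kernel="rbf", C=8.0).fit(X[idx[:s]], y[idx[:s]])
        pred = model.predict(X[idx[s:]])
        # squared correlation between observed and predicted values:
        r2s.append(np.corrcoef(y[idx[s:]], pred)[0, 1] ** 2)
    curve[s] = float(np.mean(r2s))
print(curve)
```

The dictionary maps training size to mean squared correlation; plotting it gives a learning curve analogous to the one discussed for Fig. 5.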
This evaluation shows that with our predictor, it is possible to measure one calibration run with a well defined and easily accessible peptide mixture prepared from real biological samples to train a predictor, which can then be used to predict retention times for the peptides very accurately. Furthermore, Fig. 5 shows a comparison of the POBK to the methods introduced by Klammer et al. [16] and Petritis et al. [13,14] as described in the "Methods" section. Our method needs significantly less training data for a good prediction and also has superior performance if all training sequences of our dataset are used. One possible explanation for the low performance of the models from Petritis et al. is that their models need a larger amount of training data. This is supported by the fact that they used about 7000 [13] and about 345,000 [14] training peptides in their studies. To compare our method with the work by Krokhin [30], we used our verified datasets. This means that we e.g. trained our model on vds1 and predicted the retention times for peptides of the union of vds2 and vds3 which were not present in vds1. This means that if a peptide occurred in vds2 and in vds3, we only kept the peptide identification with the biggest score. For the POBK, we performed a five-fold CV, optimizing the SVM parameters C and ν as well as the kernel parameter σ over exponentially spaced grids of powers of two.

Table 2: Comparison of different retention time predictors. This table shows the squared correlation coefficient between observed and predicted normalized retention time of retention time prediction methods of Petritis et al. [13, 14] on the Petritis test set ...

Figure 4: Example figure for peptide retention time prediction. This plot shows the observed normalized retention time against the predicted normalized retention time for one of ten two-deep CV runs on the Petritis test set [14]. Since every peptide occurs exactly ...

Figure 5: Learning curve for peptide retention time prediction.
This plot demonstrates the squared correlation coefficient depending on the number of training samples for the union of vds1 and vds2. For every training sample size, we randomly selected the training ...

Afterwards we trained our model with the whole training set and the best parameters and measured the squared correlation between observed and predicted retention time on the test set. This procedure was repeated ten times to minimize random effects. Since there exists a web server for the method by Krokhin [30], we could also compare the observed retention times with the predicted ones on our test sets with this method. To calculate the hydrophobicity parameters a and b of this method, we used our two standard peptides introduced in the "Methods" section. Furthermore, we used the 300 Å column, since the other columns led to inferior results. As can be seen in Table 3, the model by Krokhin performs quite well even though it had been elaborated on another type of sorbent. Nevertheless, the POBK achieves a significantly higher squared correlation coefficient. It should be noted that the web server by Krokhin is restricted to three different columns. The advantage of our method is that there is no restriction to a certain type of experimental setup. One only needs a small number of training peptides and can train a model which can immediately be used for retention time prediction. It should be mentioned that the POBK has a higher squared correlation between observed and predicted retention time on our datasets than on the test set by Petritis et al. This could be due to the fact that Petritis et al. performed shotgun proteomics peptide identification [14]. It is commonly accepted that shotgun proteomics peptide identification has a significant false positive rate.

Table 3: Evaluation of prediction performance for retention time prediction using the POBK. This table shows the performances of the POBK using our verified datasets (introduced in the "Methods" section).
The other columns contain the squared correlation coefficient ...

Improving Peptide Identifications by Using Retention Time Prediction

The second goal for retention time prediction was to elaborate a retention time filter which could be used for improving peptide identifications. In this setting, we trained our learning machine on one of the vds (i.e. vds1) and predicted the retention times for the remaining ds (i.e. ds2 and ds3). The peptides of the training and test sets were made disjoint by removing all identifications of the test set which belonged to spectra having an identification which was also present in the training set. On every training set, we performed a five-fold CV, optimizing the SVM parameters C and ν as well as the kernel parameter σ over exponentially spaced grids of powers of two. Since the results of the POBK for all three datasets in Table 3 show nearly the same very good squared correlation coefficient of about 0.95 between observed and predicted normalized retention times, we restricted ourselves in the following to training our learning machine on vds3 and evaluated the filtering capability of our filtering approach on ds1 and ds2. The performance evaluation of our filter model was done by a two-step approach. In the first step, we measured the number of true positives and the number of false positives for the identifications returned by the Mascot [1] search engine. This was conducted for different significance values. Mascot provides a significance threshold score for the peptide identification at a given significance level. This significance level was 0.05 in all our studies. To be able to compare the identification performance for different levels of certainty, we chose different fractions of the significance threshold score. This means, for example, that for a fraction of 0.5, all identifications have to have a score which is equal to or greater than half of the significance threshold score. The evaluation was accomplished for varying threshold fractions t.
Fig. 6a demonstrates the good CR for identifications with high Mascot scores, since a threshold fraction equal to one means that all identifications have a score equal to or larger than the significance threshold score given by the Mascot search engine. Nevertheless, even for these identifications, filtering with the retention time filter improves the CR from 89% to 90%. An even greater improvement can be achieved for identifications with smaller scores. If all identifications are constrained to have a score equal to or larger than 60% of the significance threshold score, the CR improves from 55% to 77% by using our filter. A CR of 0.77 is still quite good and, as can be seen in Table 4, the number of true positives increases from 350 to 557. This means that many more spectra can be identified with an acceptable number of false positives by applying our retention time filtering approach. Fig. 6b shows that our model is valuable for removing false identifications, since many false positives are outside the trapezoid and are removed by our filter for a threshold fraction of 0.95. Figure 6c shows this even more drastically for a threshold fraction of 0.6. The whole evaluation shows that our retention time prediction can be used to improve the level of certainty for high-scoring identifications and also to allow smaller thresholds to find new identifications with an acceptable number of false positives. Evaluation of filter performance. This table presents the classification rates of the identified spectra for varying fractions of the significance threshold with and without retention time filtering. The model was trained using the vds3 dataset and the ... Visualization of filter performance. This plot shows the improvement in classification rate one can get by using our retention time filter for a) varying fractions of the significance threshold value, b) all predictions of spectra having a score equal ...
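The two-step filter evaluation can be expressed compactly: count true and false positives among all identifications whose Mascot score reaches a given fraction of the significance threshold score, optionally after applying the retention-time filter, and report the classification rate CR = TP/(TP + FP). This is an illustrative sketch; the record fields are our own naming, not Mascot's.

```python
def classification_rate(ids, threshold_fraction, rt_filter=None):
    """Count TP/FP among identifications whose score is at least
    threshold_fraction * significance threshold score; optionally apply a
    retention-time filter first. Returns (tp, fp, CR)."""
    tp = fp = 0
    for ident in ids:
        if ident["score"] < threshold_fraction * ident["threshold"]:
            continue  # below the chosen fraction of the significance score
        if rt_filter is not None and not rt_filter(ident):
            continue  # outside the fitted retention-time model: discarded
        if ident["correct"]:
            tp += 1
        else:
            fp += 1
    cr = tp / (tp + fp) if (tp + fp) else float("nan")
    return tp, fp, cr
```

For example, with three identifications of which one false positive has a large retention-time deviation, the filter removes it and the CR at threshold fraction 1.0 rises accordingly.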
In this paper, we introduced a new kernel function which was successfully applied to two problems in computational proteomics, namely peptide sample fractionation by SAX-SPE and high-resolution peptide separation by IP-RP-HPLC. Furthermore, we demonstrated that the predicted retention times can be used to build a p-value-based model which is capable of filtering out false identifications very accurately. Our method performs better than all previously reported peptide sample fractionation prediction methods, and for retention time prediction, our method is (to our knowledge) the only learning method which can be trained with a training set as small as 40 peptides while still achieving a high correlation between observed and predicted retention times. This small required training set allows us to imagine the following application, which would be very helpful for proteomic experiments. One could identify a well-defined protein mixture before starting the experiments and use the verified peptides for training the predictor. Next, the predictor can be used to predict retention times for all identifications of the following runs. The predicted retention times can then be applied to improve the certainty of the identifications. They can also be used to identify a much larger number of spectra with an acceptable number of false positives. This is achieved by lowering the significance threshold and filtering the identifications by our p-value-based retention time filter. Since all our methods are integrated into the OpenMS [31] library, which is open source, every researcher is able to use the presented methods free of charge. Also, we offer the prediction models as tools which are part of the OpenMS proteomics pipeline (TOPP) [32]. These tools can easily be combined with other tools from TOPP, allowing wide-range research applications in computational proteomics.
Algorithmic Methods In this work, we introduce a new kernel function which can be used to predict peptide properties using support vector classification and ν-support vector regression (ν-SVR) [24]. We apply this kernel function to predict fractionation of peptides in SAX-SPE as well as peptide retention times in IP-RP-HPLC. To show the superior performance of the new kernel function, we provide comparisons to established kernel functions and the latest approaches of other working groups [11,14,16]. Support Vector Machines In binary classification, the task is to find a function f: $X$ → $Y$, $Y$ = {-1, 1}, from n labelled training samples {(x_i, y_i) | x_i ∈ $X$, y_i ∈ $Y$, i = 1,..., n}, such that unlabelled data samples x ∈ $X$ from the same data source can be classified by this function. The idea is to learn something about the distribution of the training samples so that unseen test examples that belong to the same underlying distribution can be predicted very accurately by the function. In support vector classification [17], the task is to find a discriminating hyperplane with maximal margin. In the standard dual formulation, one therefore maximizes W(α) = Σ_i α_i - (1/2) Σ_{i,j} α_i α_j y_i y_j ⟨x_i, x_j⟩ subject to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0. The C is chosen beforehand and the optimal weights α_i are searched. With the α_i, the discriminant function is f(x) = sign(Σ_i α_i y_i ⟨x_i, x⟩ + b). To be able to learn non-linear discriminant functions, it is possible to apply a mapping function to the input variables, Φ: $X$ → $ℱ$, as stated in [24]. Since only the inner products ⟨Φ(x_i), Φ(x_j)⟩ are needed, a kernel function k: $X$ × $X$ → ℝ with k(x_i, x_j) = ⟨Φ(x_i), Φ(x_j)⟩ can be used instead of computing the mapping explicitly. The discriminant then becomes f(x) = sign(Σ_i α_i y_i k(x_i, x) + b), and the x_i with α_i > 0 are called support vectors. Support Vector Regression In regression, the task is to find a function f: $X$ → $Y$, $Y$ ⊆ ℝ, from n labelled training samples {(x_i, y_i) | x_i ∈ $X$, y_i ∈ $Y$, i = 1,..., n}, such that unlabelled data samples x ∈ $X$ from the same data source can be assigned a label y ∈ $Y$ by this function.
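The dual form above only ever touches the data through inner products, which is what makes the kernel substitution possible. As a minimal, dependency-free illustration of a dual-form discriminant f(x) = sign(Σ_i α_i y_i k(x_i, x)), the sketch below uses the kernel perceptron, a simpler learner than an SVM but with the same kernelized decision function; it is our own toy example, not the solver used in the paper (the paper uses libsvm).

```python
def kernel_perceptron(samples, labels, k, epochs=10):
    """Learn dual coefficients alpha_i so that
    f(x) = sign(sum_i alpha_i * y_i * k(x_i, x)) separates the data."""
    n = len(samples)
    alpha = [0.0] * n
    for _ in range(epochs):
        for j in range(n):
            s = sum(alpha[i] * labels[i] * k(samples[i], samples[j])
                    for i in range(n))
            if labels[j] * s <= 0:     # misclassified: strengthen its weight
                alpha[j] += 1.0
    return alpha

def discriminant(alpha, samples, labels, k, x):
    """Evaluate the dual-form decision function at a new point x."""
    s = sum(a * y * k(xi, x) for a, y, xi in zip(alpha, labels, samples))
    return 1 if s > 0 else -1
```

Swapping the kernel `k` (e.g. for a string kernel such as the POBK) changes the learned function without touching the training loop, which is exactly the property exploited in this work.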
The idea is, as in the binary case, to learn something about the distribution of the training samples so that unseen test examples which belong to the same underlying distribution can be predicted very accurately by the function. In ν-SVR [24], the regression function is learnt by maximizing a quadratic objective in the dual variables subject to linear constraints. In this formulation, the parameter ν bounds the fraction of training errors and the fraction of support vectors of the function. To be able to learn non-linear regression functions, it is again possible to apply a mapping function to the input variables, Φ: $X$ → $ℱ$, together with a kernel function which corresponds to the inner product of the mapped feature vectors; the regression function is then learnt by maximizing the same dual objective with the inner products replaced by kernel evaluations. Kernel Function The oligo kernel introduced by Meinicke et al. in [21] is a kernel function which can be used to find signals in sequences, and for which the degree of positional uncertainty can be controlled by the smoothing parameter σ of the kernel function. The standard oligo kernel was introduced for sequences of fixed length. Since there are many problems, like peptide retention time prediction, in which the length of the sequences varies significantly, this kernel function cannot be applied to them directly. Petritis et al. [14] predicted peptide retention times very accurately by encoding the border residues directly. As stated in [33], the oligo kernel can be used as a motif kernel. This motivated us to construct a kernel which only considers the border residues of a peptide, for a fixed border length b. Consequently, the kernel function is called the oligo-border kernel (OBK). Here, a motif is a certain k-mer at a position inside the b-residue border at each side, where b is fixed beforehand. Every k-mer at the leftmost b residues contributes to its oligo function, as does every k-mer at the rightmost b ones. For the peptide sequence s ∈ $A$^n, the left border L is defined as L = {1, 2,..., min(n, b)} and the right border as R = {max(0, n - b + 1),..., n}.
The set $SωL$ = {p[1], p[2],...} contains the positions where the k-mer ω ∈ $A$^k occurs inside the left border, and $SωR$ = {p[1], p[2],...} contains the k-mer positions for the right border. This means that $SωL$ ∩ L = $SωL$ and $SωR$ ∩ R = $SωR$. In [21], the feature space representation of a sequence is a vector containing all of its oligo functions. These oligo functions are sums of Gaussians, one centred at each occurrence position of the particular k-mer: μ_ω(t) = Σ_{p ∈ S_ω} exp(-(t - p)²/(2σ²)). Consequently, the oligo-border function is μ_ω^M(t) = Σ_{p ∈ S_ω^M} exp(-(t - p)²/(2σ²)), where M ∈ {L, R}. This leads directly to the feature map whose components are the oligo-border functions of all k-mers. Let U = L ∪ R and let $SωUi$ be the set $SωU$ of sequence s[i]. Let ind(p, q) = [[(p ∈ L_i ∧ q ∈ L_j) ∨ (p ∈ R_i ∧ q ∈ R_j)]] for p ∈ U_i and q ∈ U_j, in which [[condition]] is the indicator function; it equals one if the condition is true and zero otherwise. Similar to [21], the kernel function is then, up to a constant factor, k(s_i, s_j) = Σ_ω Σ_{p ∈ S_ω^{U_i}} Σ_{q ∈ S_ω^{U_j}} ind(p, q) · exp(-(p - q)²/(4σ²)). A further variant of the OBK is to consider similarities between opposite borders. This means that there is only one oligo function for a certain oligo, and the occurrence positions of signals in the right border are numbered from one to min(n, b) from right to left. In this way, a high similarity between the right border of a peptide and the left border of another peptide can also be detected. Throughout the paper, this kernel is called the paired oligo-border kernel (POBK), and the kernel function is the same sum without the indicator restriction: k(s_i, s_j) = Σ_ω Σ_{p ∈ S_ω^{U_i}} Σ_{q ∈ S_ω^{U_j}} exp(-(p - q)²/(4σ²)). This kernel function can be computed as efficiently as the oligo kernel by appropriate position encoding. The kernel matrix is positive definite, which follows directly from [33]. Since preliminary experiments showed that the POBK works better than the OBK, we used only the POBK in this study. Furthermore, the preliminary experiments showed that the best k-mer length is one, which is quite reasonable, since the peptides are very short compared to the number of different amino acids.
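For k-mer length one (the setting ultimately used in the paper), the POBK can be sketched directly from the description above: occurrence positions in the left border are counted from the left, positions in the right border are counted from the right so that opposite borders become comparable, and each pair of matching monomers contributes a Gaussian term in the position difference. The function names and the omitted normalization constant are our own choices.

```python
import math

def border_positions(seq, b):
    """Map each monomer to its occurrence positions inside the borders.
    Left-border positions are numbered 1..b from the left; right-border
    positions 1..b from the right, which is the 'paired' re-numbering of
    the POBK (opposite borders become directly comparable)."""
    n = len(seq)
    occ = {}
    for i in range(min(n, b)):            # left border, positions 1, 2, ...
        occ.setdefault(seq[i], []).append(i + 1)
    for i in range(max(0, n - b), n):     # right border, counted from right
        occ.setdefault(seq[i], []).append(n - i)
    return occ

def pobk(s, t, b=3, sigma=1.0):
    """Paired oligo-border kernel for k-mer length one (unnormalized sketch)."""
    occ_s, occ_t = border_positions(s, b), border_positions(t, b)
    total = 0.0
    for mono, ps in occ_s.items():
        for p in ps:
            for q in occ_t.get(mono, []):
                total += math.exp(-((p - q) ** 2) / (4.0 * sigma ** 2))
    return total
```

The sketch is symmetric by construction, so it yields a symmetric Gram matrix, and σ plays its stated role: larger σ tolerates larger positional shifts between matching residues.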
This is also supported by the study [34] on protein sequences, in which histograms of monomer distances performed better than distance histograms of longer k-mers. A combination of different lengths as in [33] also led to inferior results, which could be due to the normalization of the single kernel functions. Consequently, in this study, we only used k-mer length one. P-value Calculation and Filtering As stated earlier, the retention time prediction is used in this work to improve the certainty of peptide identifications found by search engines like Mascot and to filter out false identifications. This is done by fitting a linear model to the prediction data in the training set. The model reflects the fact that retention times of late-eluting peptides show a higher deviation than early ones. The poorer performance in retention time prediction for longer peptides was also observed in [14], supporting this fact. For our predictions, we therefore fit an area to the prediction data of the training set which contains ≥ 95% of the points and which becomes wider as the corresponding retention time increases. An application of the model can be found in Fig. 6b and Fig. 6c. We call the smallest distance in the model γ_0, at normalized retention time (NRT) equal to zero, and γ_max is the biggest gamma, at NRT = 1. We can consequently calculate a corresponding gamma for every normalized retention time t_nor by γ = γ_0 + t_nor · (γ_max - γ_0). Since we assume a Gaussian error distribution, γ corresponds to two standard deviations of the normal distribution, such that a p-value can be calculated for every retention time prediction by calculating the probability that a correct identification has a bigger deviation between observed and predicted normalized retention time. The null hypothesis is that the identification is correct. For filtering identifications, we use these p-values in the following way.
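Under the stated Gaussian assumption with γ equal to two standard deviations, the p-value of an identification with deviation dev between observed and predicted NRT is the two-sided tail probability of that deviation; identifications with p below the significance level are filtered out. A sketch (the parameter names are ours):

```python
import math

def p_value(dev, t_nor, gamma0, gamma_max):
    """P(|deviation| >= |dev|) for a correct identification, under the
    Gaussian model in which gamma(t_nor) corresponds to two standard
    deviations of the error distribution."""
    gamma = gamma0 + t_nor * (gamma_max - gamma0)   # linear widening with NRT
    sigma = gamma / 2.0
    z = abs(dev) / sigma
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

def keep(dev, t_nor, gamma0, gamma_max, alpha=0.05):
    """Keep an identification iff its p-value is not below the level alpha."""
    return p_value(dev, t_nor, gamma0, gamma_max) >= alpha
```

Note that the same absolute deviation yields a larger p-value late in the gradient than early, mirroring the widening trapezoid of the fitted model.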
Since we do not want to filter out correct identifications, the probability of filtering out a correct identification can be controlled by a significance level. In the experiments, we set the significance level to 0.05. This means that the probability that a correct identification has a deviation between observed and predicted retention time equal to or greater than the allowed deviation is 0.05. Consequently, the probability of filtering out correct identifications is 0.05. Concerning the p-values mentioned above, this means that p has to be bigger than 0.05 for an identification to be kept. Basically, for significance level 0.05, this means that every identification outside the fitted model is filtered out and the identifications inside are kept. Computational Resources All methods elaborated in this work were integrated by us into OpenMS, a software platform for shotgun proteomics [31], which has a wrapper for the libsvm [35]. This library was used for the support vector learning. Furthermore, we integrated the prediction models into TOPP [32]. Some additional evaluations for peptide sample fractionation prediction were performed using shogun [36]. Experimental Methods and Additional Data Sets For peptide sample fractionation prediction, we used the data from Oh et al. [11] to show the superior performance of our method. For peptide retention time prediction, we used different datasets. The first one is a validation dataset which was used by Petritis et al. in 2006 [14] to predict peptide retention times using artificial neural networks. In their experiment, they measured more than 345,000 peptides and chose 1303 high-confidence identifications for testing and the remaining peptides for training. Since they only published the 1303 test peptides, we could only use this small number of peptides. The dataset was used in our study to be able to show the performance of our methods compared to other well-established methods for peptide retention time prediction.
Further datasets for retention time prediction were measured in our labs to show that training on the data of one run suffices to predict retention times on the next runs very accurately and to improve spectrum identifications significantly. Experimental Setup The datasets for training and evaluation of the retention time predictor had to fulfill two basic requirements. First, the identity of the studied peptides had to be known with high certainty in order to avoid incorrect sequence annotations for the training dataset, and second, retention times had to be measured with high reproducibility. Altogether, we measured 19 different proteins, which were purchased from Sigma (St. Louis, MO) or Fluka (Buchs, Switzerland). To avoid excessive overlapping of peptides in the chromatographic separations, the proteins were divided into three artificial protein mixtures and subsequently digested with trypsin (Promega, Madison, WI) following published protocols [37]. The protein mixtures contained the following proteins in concentrations between 0.4 and 3.2 pmol/μl: Mixture 1: β-casein (bovine milk), conalbumin (chicken egg white), myelin basic protein (bovine), hemoglobin (human), leptin (human), creatine phosphokinase (rabbit muscle), α1-acid-glycoprotein (human plasma), albumin (bovine serum). Mixture 2: cytochrome C (bovine heart), β-lactoglobulin A (bovine), carbonic anhydrase (bovine erythrocytes), catalase (bovine liver), myoglobin (horse heart), lysozyme (chicken egg white), ribonuclease A (bovine pancreas), transferrin (bovine), α-lactalbumin (bovine), albumin (bovine serum). Mixture 3: thyroglobulin (bovine thyroid) and albumin (bovine serum). Albumin was added to each protein mixture because every run had to contain an identical set of peptides for normalizing the retention times.
The resulting peptide mixtures were then separated using capillary IP-RP-HPLC and subsequently identified by electrospray ionization mass spectrometry (ESI-MS) as described in detail in [37,38]. The separations were carried out in a capillary/nano HPLC system (Model Ultimate 3000, Dionex Benelux, Amsterdam, The Netherlands) using a 50 × 0.2 mm monolithic poly-(styrene/divinylbenzene) column (Dionex Benelux) and a gradient of 0–40% acetonitrile in 0.05% (v/v) aqueous trifluoroacetic acid in 60 min at 55°C. The injection volume was 1 μl, and each digest was analyzed in triplicate at a flow rate of 2 μl/min. On-line ESI-MS detection was carried out with a quadrupole ion-trap mass spectrometer (Model esquire HCT, Bruker Daltonics, Bremen, Germany). Identification of Spectra Peptides were identified on the basis of their tandem mass spectra (maximum allowed mass deviations: precursor ions ± 1.3 Da, fragment ions ± 0.3 Da) using Mascot [1] (version 2.1.03). The database was the Mass Spectrometry Database, MSDB (version 2005-02-27), restricted to chordata (vertebrates and relatives). We allowed one missed cleavage as well as charges 1+, 2+ and 3+. The mass values were monoisotopic. The significance level of the significance threshold score for the peptide hits was 0.05. Since the amino acid sequences of the 19 proteins of our mixtures are known, we could verify the identifications by sequence comparison with the protein sequences. To avoid random verifications, we required the peptide length to be equal to or greater than six. The whole process led to two datasets for each protein mixture – one which contained only the verified peptides and one with all Mascot identifications. In this paper, we call the datasets containing the verified peptide sequences vds and the datasets with all Mascot identifications ds. The vds are used to train the predictors and the ds are used to assess the classification performance of the identification process.
Normalization of Retention Times We chose two standard peptides which were identified in all of the runs. One of these peptides, with the amino acid sequence TCVADESHAGCEK, eluted very early, and the other one, with the amino acid sequence MPCTEDYLSLILNR, eluted very late. We scaled the retention times linearly so that the early-eluting peptide was assigned an NRT of 0.1 and the late-eluting peptide an NRT of 0.9. All peptides with an NRT below zero or above 1 were removed. The lists of identified peptides of vds1, vds2 and vds3, together with their respective retention times, are available as Additional files 1, 2 and 3 in the supplementary material. Reimplementation of Existing Methods for Comparison Purposes For retention time prediction, we compared our method with several others; therefore, we had to reimplement the methods by Klammer et al. [16] as well as the methods by Petritis et al. [14]. For the methods by Klammer et al., we implemented the same encoding as described in the literature and used the RBF kernel of the libsvm [35]. The cross validation was performed with the same parameter ranges as described in the paper (C ∈ {10^-3, 10^-2,..., 10^7} and σ ∈ {10^-6, 10^-7, 10^-8}). For comparison with the models by Petritis et al., we reimplemented the models as described in the literature using Matlab R2007a (The MathWorks, Inc., United States) and the neural networks toolbox version 5.0.2 (The MathWorks, Inc.). This means that for the first model of Petritis et al. [13] we had a feedforward neural network with 20 input nodes, two hidden nodes and one output node. The frequencies of the amino acids of the peptides served as input. For the second model of Petritis et al. [14] we had 1052 input nodes, 24 hidden nodes and one output node. The amino acids at the 25 leftmost and the 25 rightmost residues served as input, as well as the length and the hydrophobic moment of the peptide, as described in [14]. Both models were trained using a backpropagation algorithm.
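The retention-time normalization described above (a linear scaling that sends the early standard peptide to NRT 0.1 and the late one to 0.9, discarding peptides whose NRT falls outside [0, 1]) can be sketched as:

```python
def normalize_retention_times(peptides, rt_early, rt_late):
    """Linearly scale retention times so the early standard peptide maps to
    NRT 0.1 and the late one to NRT 0.9; drop peptides outside [0, 1].
    `peptides` is a list of (sequence, retention_time) pairs."""
    scale = (0.9 - 0.1) / (rt_late - rt_early)
    out = []
    for seq, rt in peptides:
        nrt = 0.1 + (rt - rt_early) * scale
        if 0.0 <= nrt <= 1.0:
            out.append((seq, nrt))
    return out
```

Because the two anchor peptides appear in every run, the same mapping makes retention times from different runs directly comparable, which is what allows training on one run and predicting on another.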
Authors' contributions OK and CH designed the experiment and the study. AL was responsible for the experimental data generation. NP developed and implemented the theoretical methods and performed the data evaluation. All authors contributed to the writing of the manuscript. Supplementary Material Additional file 1: Verified data set one (vds1). vds1.csv lists the identified peptides of vds1 with normalized retention time, observed retention time, precursor mass, charge, score and significance threshold score (at significance level p = 0.05). Additional file 2: Verified data set two (vds2). vds2.csv lists the identified peptides of vds2 with normalized retention time, observed retention time, precursor mass, charge, score and significance threshold score (at significance level p = 0.05). Additional file 3: Verified data set three (vds3). vds3.csv lists the identified peptides of vds3 with normalized retention time, observed retention time, precursor mass, charge, score and significance threshold score (at significance level p = 0.05). We thank Marc Sturm for fruitful discussions on integrating our methods into OpenMS, and Andreas Bertsch and Torsten Blum for proofreading the manuscript. • Perkins DN, Pappin DJ, Creasy DM, Cottrell JS. Probability-based protein identification by searching sequence databases using mass spectrometry data. Electrophoresis. 1999;20:3551–3567. [PubMed] • Eng JK, McCormack AL, Yates JR., 3rd An approach to correlate MS/MS data to amino acid sequences in a protein database. J Am Soc Mass Spectrom. 1994;5:976–989. [PubMed] • Geer LY, Markey SP, Kowalak JA, Wagner L, Xu M, Maynard DM, Yang X, Shi W, Bryant SH. Open mass spectrometry search algorithm. J Proteome Res. 2004;3:958–964. [PubMed] • Craig R, Beavis RC. TANDEM: matching proteins with tandem mass spectra. Bioinformatics. 2004;20:1466–1467. [PubMed] • Taylor JA, Johnson RS. Sequence database searches via de novo peptide sequencing by tandem mass spectrometry. 
Rapid Commun Mass Spectrom. 1997;11:1067–1075. [PubMed] • Frank A, Pevzner P. PepNovo: de novo peptide sequencing via probabilistic network modeling. Anal Chem. 2005;77:964–973. [PubMed] • Frank A, Tanner S, Bafna V, Pevzner P. Peptide sequence tags for fast database search in mass-spectrometry. J Proteome Res. 2005;4:1287–1295. [PubMed] • Dworzanski JP, Snyder AP, Chen R, Zhang H, Wishart D, Li L. Identification of bacteria using tandem mass spectrometry combined with a proteome database and statistical scoring. Anal Chem. 2004;76 :2355–2366. [PubMed] • MacCoss MJ, Wu CC, Yates JR. Probability-based validation of protein identifications using a modified SEQUEST algorithm. Anal Chem. 2002;74:5593–5599. [PubMed] • Moore RE, Young MK, Lee TD. Qscore: an algorithm for evaluating SEQUEST database search results. J Am Soc Mass Spectrom. 2002;13:378–386. [PubMed] • Oh C, Zak SH, Mirzaei H, Buck C, Regnier FE, Zhang X. Neural network prediction of peptide separation in strong anion exchange chromatography. Bioinformatics. 2007;23:114–118. [PubMed] • Strittmatter EF, Kangas LJ, Petritis K, Mottaz HM, Anderson GA, Shen Y, Jacobs JM, Camp DG, Smith RD. Application of peptide LC retention time information in a discriminant function for peptide identification by tandem mass spectrometry. J Proteome Res. 2004;3:760–769. [PubMed] • Petritis K, Kangas LJ, Ferguson PL, Anderson GA, Pasa-Tolic L, Lipton MS, Auberry KJ, Strittmatter EF, Shen Y, Zhao R, Smith RD. Use of artificial neural networks for the accurate prediction of peptide liquid chromatography elution times in proteome analyses. Anal Chem. 2003;75:1039–1048. [PubMed] • Petritis K, Kangas LJ, Yan B, Monroe ME, Strittmatter EF, Qian WJ, Adkins JN, Moore RJ, Xu Y, Lipton MS, Camp DG, Smith RD. Improved peptide elution time prediction for reversed-phase liquid chromatography-MS by incorporating peptide sequence information. Anal Chem. 2006;78:5026–5039. 
[PMC free article] [PubMed] • Gorshkov AV, Tarasova IA, Evreinov VV, Savitski MM, Nielsen ML, Zubarev RA, Gorshkov MV. Liquid chromatography at critical conditions: comprehensive approach to sequence-dependent retention time prediction. Anal Chem. 2006;78:7770–7777. [PubMed] • Klammer AA, Yi X, MacCoss MJ, Noble WS. Peptide Retention Time Prediction Yields Improved Tandem Mass Spectrum Identification for Diverse Chromatography Conditions. In: Speed T, Huang H, editor. Research in Computational Molecular Biology. Vol. 4453. LNBI, Springer; 2007. pp. 459–472. • Burges CJC. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min Knowl Discov. 1998;2:121–167. • Leslie C, Eskin E, Noble WS. The spectrum kernel: a string kernel for SVM protein classification. Pac Symp Biocomput. 2002:564–575. [PubMed] • Zien A, Rätsch G, Mika S, Schölkopf B, Lengauer T, Müller KR. Engineering support vector machine kernels that recognize translation initiation sites. Bioinformatics. 2000;16:799–807. [PubMed] • Rätsch G, Sonnenburg S. Accurate Splice Site Prediction for Caenorhabditis Elegans. MIT Press. Kernel Methods in Computational Biology; 2004. pp. 277–298. • Meinicke P, Tech M, Morgenstern B, Merkl R. Oligo kernels for datamining on biological sequences: a case study on prokaryotic translation initiation sites. BMC Bioinformatics. 2004;5:169. [PMC free article] [PubMed] • Rätsch G, Sonnenburg S, Schölkopf B. RASE: recognition of alternatively spliced exons in C.elegans. Bioinformatics. 2005;21 Suppl 1:i369–i377. [PubMed] • Vert JP, Saigo H, Akutsu T. Local alignment kernels for biological sequences. MIT Press. Kernel Methods in Computational Biology; 2004. pp. 131–154. • Schölkopf B, Smola AJ, Williamson RC, Bartlett PL. New Support Vector Algorithms. Neural Computation. 2000;12:1207–1245. [PubMed] • Henikoff S, Henikoff JG. Amino acid substitution matrices from protein blocks. Proc Natl Acad Sci USA. 1992;89:10915–10919. [PMC free article] [PubMed] • Meek JL. 
Prediction of Peptide Retention Times in High-Pressure Liquid Chromatography on the Basis of Amino Acid Composition. PNAS. 1980;77:1632–1636. [PMC free article] [PubMed] • Mant CT, Burke TW, Black JA, Hodges RS. Effect of peptide chain length on peptide retention behaviour in reversed-phase chromatography. J Chromatogr. 1988;458:193–205. [PubMed] • Krokhin O, Craig R, Spicer V, Ens W, Standing KG, Beavis RC, Wilkins JA. An Improved Model for Prediction of Retention Times of Tryptic Peptides in Ion Pair Reversed-phase HPLC: Its Application to Protein Peptide Mapping by Off-Line HPLC-MALDI MS. Mol Cell Proteomics. 2004;3:908–919. [PubMed] • Kaliszan R, Baczek T, Cimochowska A, Juszczyk P, Wisniewska K, Grzonka Z. Prediction of high-performance liquid chromatography retention of peptides with the use of quantitative structure-retention relationships. Proteomics. 2005;5:409–415. [PubMed] • Krokhin OV. Sequence-specific retention calculator. Algorithm for peptide retention prediction in ion-pair RP-HPLC: application to 300- and 100-A pore size C18 sorbents. Anal Chem. 2006;78:7785–7795. [PubMed] • Sturm M, Bertsch A, Gröpl C, Hildebrandt A, Hussong R, Lange E, Pfeifer N, Schulz-Trieglaff O, Zerck A, Reinert K, Kohlbacher O. OpenMS – An Open-Source Framework for Mass Spectrometry. 2007. • Kohlbacher O, Reinert K, Gröpl C, Lange E, Pfeifer N, Schulz-Trieglaff O, Sturm M. TOPP – the OpenMS proteomics pipeline. Bioinformatics. 2007;23:e191–197. [PubMed] • Igel C, Glasmachers T, Mersch B, Pfeifer N, Meinicke P. Gradient-based optimization of kernel-target alignment for sequence kernels applied to bacterial gene start detection. IEEE/ACM Trans Comput Biol Bioinform. 2007;4:216–226. [PubMed] • Lingner T, Meinicke P. Remote homology detection based on oligomer distances. Bioinformatics. 2006;22:2224–2231. [PubMed] • Chang CC, Lin CJ. LIBSVM: a library for support vector machines. 2001. http://www.csie.ntu.edu.tw/~cjlin/libsvm • Sonnenburg S, Rätsch G, Schäfer C, Schölkopf B.
Large Scale Multiple Kernel Learning. Journal of Machine Learning Research. 2006;7:1531–1565. • Schley C, Swart R, Huber CG. Capillary scale monolithic trap column for desalting and preconcentration of peptides and proteins in one- and two-dimensional separations. J Chromatogr A. 2006;1136:210–220. [PubMed] • Toll H, Wintringer R, Schweiger-Hufnagel U, Huber CG. Comparing monolithic and microparticular capillary columns for the separation and analysis of peptide mixtures by liquid chromatography-mass spectrometry. J Sep Sci. 2005;28:1666–1674. [PubMed] Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
Zoran Škoda I am a mathematical physicist/mathematician from Zagreb. My Ph.D. is from the University of Wisconsin-Madison, 2002. My thesis title was Coset spaces for quantum groups. Here is an abstract. My mathematical interests include geometric aspects of mathematical physics (TQFT, geometric quantization, coherent states, etc.), noncommutative algebraic geometry and noncommutative localization, Hopf algebras, categories, and cohomology, in particular nonabelian cohomology/descent theory. My other scientific interests include historical linguistics (general principles and Indo-European), the interface between language and computation (computational linguistics, construction of compilers and computer language design), and medicinal herbs (herbal tea), but all these past hobbies have been latent for a long time. I used to play piano accordion. My native tongue is the kajkavian dialect of Croatian. In Croatian we use diacritics for the sch-sound: Škoda. View a list of some of my mathematical/physical articles and talks. My other web page is here. I have a (low-activity) blog here. I also have a personalized part of $n$lab here, with some coursepages for students and other pages not in close accord with the purposes and collaborative nature of $n$lab proper. Revised on July 20, 2010 09:49:38 by Zoran Škoda
Stephen Wolfram Stephen Wolfram is a scientist known for his work in particle physics, cellular automata, and computer algebra, and is the author of the computer program Mathematica. Wolfram's father was a novelist and his mother a professor of philosophy. Often described as a child prodigy, he published an article on particle physics at age 15 and entered Oxford (St John's College) at age 17. He received his Ph.D. in particle physics from Caltech at age 20 and joined the faculty there. At age 21, Wolfram won the MacArthur "Genius" award. He developed a computer algebra system at Caltech, but the school's patent rules denied him ownership of the invention. He left for the physics department of Princeton University, where he studied cellular automata, mainly with computer simulations. He claimed that cellular automata processes are ubiquitous and underlie much of nature. Wolfram left for the University of Illinois at Urbana-Champaign and started to develop the computer algebra system Mathematica in 1986, to be released in 1988. He founded a company, Wolfram Research, which continues to extend the program and market it with considerable success. Wolfram Research also pays Eric Weisstein to work on his math encyclopedia MathWorld, which is hosted at the company's web site. From 1992 to 2002, Wolfram worked on his book A New Kind of Science, whose central thesis is that some simple cellular automata can exhibit very complex behavior, and that these cellular automata underlie much of nature. All Wikipedia text is available under the terms of the GNU Free Documentation License
The REU Program in Mathematics Research and Publications at Previous REU Programs This page contains a summary of the research and publications that have resulted from the GVSU REU, from 2000 to 2013. Note that there may also be papers currently submitted or forthcoming that are not listed here. 2013 ~ 2012 ~ 2011 ~ 2010 ~ 2009 ~ 2008 ~ 2007 ~ 2006 ~ 2005 ~ 2004 ~ 2003 ~ 2002 ~ 2000 Equal Circle Packing: This team (Madeline Brandt, Hanson Smith and Prof. William Dickinson) found all optimally dense packings of four equal circles on any flat torus. There turn out to be several two-parameter regions in the moduli space of flat tori where there are two or more optimal arrangements (one globally dense and the others locally dense). This is the first example in which there have been multiple optimal packings on a single torus with packing graphs that are not homeomorphic as subsets of the torus. The behavior of the optimally dense packings agrees with the work of A. Heppes from 1999. A manuscript is under preparation. Voting Theory: This team (Beth Bjorkman, Sean Gravelle, and Prof. Jonathan Hodge) investigated graph-theoretic representations of multidimensional binary preferences associated with referendum elections. We specifically studied preferences that can be represented by Hamiltonian paths in cubic graphs with the Gray Code labeling. We characterized the algebraic structure of the sets of preferences that can be generated in this manner, and we also proved results about the interdependence structures that result from such preferences. Further research is in progress, and we anticipate submitting a manuscript for publication in 2014. Mathematics and 3D Printing: At the beginning of the summer, the mathematics department at GVSU acquired a Makerbot Replicator 2 3D printer, and the research of this team (Melissa Sherman-Bennett, Sylvanna Krawczyk, and Prof. Edward Aboufadel) revolved around this technology.
The goal was to develop novel techniques using mathematics to design objects appropriate for the printer. The team developed techniques to “print” algebraically defined surfaces, manifolds defined from real data (such as elevation data from geography), and friezes based on data collected by the Kinect camera. Using ideas from linear algebra, the team then developed a method to identify depth data for an object from two photographs, and to “print” these objects, such as a human hand. At the end of the summer, the team wrote a primer, "3D Printing for Math Professors and Their Students," that was made available for free on the Internet. A second manuscript, based on the depth data/photography project, is under preparation.

Extended Outer Billiards in the Hyperbolic Plane: This team (Sanjay Kumar, Austin Tuttle and Prof. Filiz Dogru) focused on analyzing the extended outer polygonal billiard map in the hyperbolic plane. We classified polygonal tables with respect to their rotation numbers, whether rational or irrational, and we wrote programs to investigate our conjectures for periodic orbits of this special circle map generated from polygonal tables. A manuscript is under preparation.

Arrow Path Sudoku: This team (Ellen Borgeld, Elizabeth Meena, and Professor Shelly Smith) explored a variation of a Sudoku puzzle that uses an arrow in each cell, pointing to the cell containing the subsequent number, in addition to only a small number of numerical clues. We used inclusion-exclusion and computer programs that we wrote to count 2x2, 2x3, and 3x3 number blocks that admit valid arrow paths, Arrow Path blocks that are solvable, and the number of solutions for each block. We developed an equivalence relation on the set of blocks of each size, then partitioned the sets into equivalence classes to facilitate combining blocks to form 4x4, 6x6, and 9x9 Arrow Path Sudoku boards.
We described and counted all possible 4x4 boards that are solvable, and determined the maximum number of numerical clues required to create Arrow Path puzzles of each size with a unique solution. A manuscript is under preparation.

Combinatorial Sums and Identities: This team (Sean Meehan, Michael Weselcouch and Professor Akalu Tefera) explored, conjectured and formulated various challenging and interesting combinatorial sums and identities. To do this, the team spent a great deal of time studying powerful combinatorial (counting) methods, the computer-assisted proof techniques of Wilf-Zeilberger, and other symbolic computation techniques. Using various computer summation algorithms, the team was able to discover and prove both old and new combinatorial identities.

Equal Circle Packing on a Flat Square Klein Bottle: This team (Matthew Brehms, Alexander Wagner and Professor William Dickinson) explored packings of one and two equal circles on a square flat Klein bottle. To apply the same methods used for packings on flat tori, we had to explicitly calculate the identity component of the isometry group of any flat Klein bottle, and we rewrote a program to compute the possible packing graph structures on a Klein bottle. Along the way, we proved a theorem that every flat Klein bottle is isometric to a flat Klein bottle whose generating vectors are orthogonal (i.e., a rectangular flat Klein bottle). This enabled us to discover and prove the optimality of the one- and two-circle packings on a square flat Klein bottle and to conjecture the optimal packing arrangement for three equal circles on a square flat Klein bottle. A manuscript is planned.
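The density notion used throughout these packing projects — total circle area divided by the area of the surface — is simple to compute directly. As a small illustrative sketch (not the team's actual code), the following Python computes the density of the known optimal single-circle packing on a unit square flat torus, where a circle of radius 1/2 is tangent to its own lattice translates:

```python
import math

def packing_density(num_circles, radius, surface_area):
    """Density of an equal-circle packing: total circle area / surface area."""
    return num_circles * math.pi * radius ** 2 / surface_area

# On a unit square flat torus, translates of a circle's center are at
# distance at least 1 apart, so a circle of radius 1/2 does not overlap
# its own copies.  This is the optimal 1-circle packing, with density pi/4.
density = packing_density(1, 0.5, 1.0)
print(round(density, 4))  # 0.7854
```

The same ratio applies verbatim on a Klein bottle or any other flat surface; only the isometry group (and hence the set of candidate arrangements) changes.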
Single-Peaked Preferences in Multiple-Question Elections: This team (Lindsey Brown, Hoang Ha, and Professor Jonathan Hodge) applied the concept of single-peaked preferences to the multidimensional binary alternative spaces associated with a variety of multiple-criteria decision-making problems, including referendum elections. They generalized prior work on cost-conscious preferences in referendum elections, showing that single-peaked binary preferences are nonseparable except in the most trivial cases, and that electorates defined by single-peaked preferences always contain weak Condorcet winning and losing outcomes. They also developed a general method for enumerating single-peaked binary preference orders, finding exact counts for 2-, 3-, and 4-dimensional alternative spaces. A manuscript, "Single-peaked preferences over multidimensional binary alternatives," has been accepted for publication in Discrete Applied Mathematics.

Applications of Wavelets: This team (Nathan Marculis, SaraJane Parsons, and Professor Edward Aboufadel, with help from Clark Bowman) worked on the following problem: given accelerometer, GPS, and other data collected by smartphones while driving, how can this data be used to identify the location and severity of potholes? The team made use of wavelet filters, Kruskal’s algorithm, and other mathematical tools to develop an algorithm to solve the problem. In February 2012, the City of Boston announced that they would be using the wavelet-Kruskal solution, along with algorithms from other researchers, in their Street Bump app. A manuscript is under preparation.

Equal Circle Packing: This team (AnnaVictoria Ellsworth, Jennifer Kenkel and Professor William Dickinson) found all optimally dense packings of three equal circles on any flat torus. For all but a two-parameter region in the moduli space of tori, there is exactly one optimally dense arrangement.
Inside this region there are two optimally dense packings (one globally dense and the other locally dense). The behavior of the optimally dense packings agrees with the previous summer's work and the work of Heppes. A manuscript is under preparation.

Outer Billiards in the Hyperbolic Plane: This team (Neil Deboer, Daniel Hast, and Professor Filiz Dogru) analyzed the orbit structures and the geometric properties of the outer (dual) billiard map in the hyperbolic plane. We geometrically constructed the 3-periodic orbit for small triangles. This construction led us to define a new term, "triangle-small polygon," a strong criterion for classifying polygons in the hyperbolic plane. As a result, we discovered a special class of polygons that have at least one 3-periodic orbit in the hyperbolic plane.

Voting Theory: This team (Clark Bowman, Ada Yu, and Professor Jonathan Hodge) developed an iterative voting method for referendum elections. Our method allows voters to revise their votes as often as they would like during a fixed voting period, with the current results of the election displayed in real time. Through extensive computer simulation, we showed that our method yields significant improvements over standard simultaneous voting and in many cases solves the separability problem, a phenomenon that is known to yield undesirable and even paradoxical outcomes in referendum elections. A paper based on this work, "The potential of iterative voting to solve the separability problem in referendum elections," was published in Theory and Decision.

2010

Higher Dimensional Rook Polynomials: This team (Professor Alayont, Moger-Reischer, Swift) focused on generalizations of 2-dimensional rook polynomials to three and higher dimensions. The theory of 2-dimensional rook polynomials is concerned with counting the number of ways of placing non-attacking rooks (no two in a row or a column) on a 2-dimensional board.
The theory can be generalized to three and higher dimensions by letting rooks attack along hyperplanes. In 2 dimensions, the rook numbers of certain families of boards correspond to known number sequences, including Stirling numbers, the number of derangements, the number of Latin rectangles, and binomial coefficients, and provide other combinatorial interpretations of these sequences. Our focus this summer was exploring similar correspondences for three- and higher-dimensional rook numbers. Building upon research conducted in 2009 funded by a GVSU S3 Grant, we found a family of boards in higher dimensions generalizing the 2-dimensional boards with Stirling numbers as their rook numbers. The rook numbers of these higher-dimensional boards and those of their complements resulted in generalized central factorial numbers and the generalized Genocchi numbers. We also found a family of boards in higher dimensions that generalize the staircase boards in 2 dimensions. The rook numbers of these boards are binomial coefficients, as are those of the 2-dimensional staircase boards. These examples provide new combinatorial interpretations of these sequences. A manuscript based on this work has been submitted for publication.

Orthogonality in the space of compact sets: This team (Professor Schlicker, Sanchez, Jon VerWys) focused on the topic of Pythagorean orthogonality in the space H of all nonempty compact subsets of n-dimensional real space. Our ultimate goal was to develop a trigonometry on H. The space H is a metric space using the Hausdorff metric h, and previous REU groups have learned much about line segments in H. If A, B, and C are elements of H, we defined the segments AB and AC to be orthogonal if their lengths satisfy the Pythagorean identity, that is, the square of h(B,C) is the sum of the squares of h(A,B) and h(A,C). When this happens we say that A, B, and C form the vertices of a right triangle in H with segment BC as hypotenuse and segments AB and AC as legs.
This group made progress on a characterization of exactly when a segment BC can be the hypotenuse of a right triangle in H (this is not always possible) and when a segment AB can be a leg. This group also discovered many ways in which orthogonality in H differs from orthogonality in n-dimensional real space. Progress was made on defining the concept of spread in H, which may lead us to an interesting and useful notion of trigonometry in H. We continue to work on these problems to complete some of our conjectures and, if successful, will submit a paper for publication in the future.

Equal Circle Packing on Flat Tori: This team (Professor Dickinson, Tries, Watson) focused on equal circle packings of small numbers of circles in a one-parameter family of flat tori. The tori we worked on were those that are the quotient of the plane by a lattice generated by two unit vectors with an angle between 60 and 90 degrees. (A packing of circles on a torus is an arrangement of circles that do not overlap and are contained in the torus. The density of a packing is the ratio of the area covered by the circles to the area of the torus.) We focused on finding all the optimally dense (both locally and globally) arrangements of 1 to 4 circles on this one-parameter family of tori. Roughly speaking, a packing is locally optimally dense if it has equal or greater density than all nearby arrangements. A packing is globally maximally dense if it is the most dense locally optimally dense packing. For 1 and 2 circles packed on any torus in the one-parameter family of tori, we proved there is a unique optimally dense arrangement for each torus. For 3 circles packed on a torus, the number and type of arrangement depend on the angle of the torus.
For any angle strictly between 60 and 90 degrees, we proved that there are exactly two locally maximally dense arrangements. When the angle is 90 degrees, the two arrangements are identical, and at 60 degrees one of the arrangements is no longer locally maximally dense. The behavior of the optimally dense packings at the extreme angles of 60 and 90 degrees agrees with previous REU [2007 - DMS 0451254] and other research.

Rank Disequilibrium in Multiple-Criteria Evaluation Schemes: This group (Professor Hodge, Stevens, Woelk) developed a mathematical model of the concept of rank disequilibrium, which occurs when individuals are evaluated over multiple criteria and have different perceptions of the relative value of each criterion. Rank disequilibrium has been shown to be a significant source of organizational conflict, and this project was the first attempt to formally model and investigate rank disequilibrium from within a mathematical framework. The main objects of study were rank aggregation functions, which assign overall rankings to combinations of rankings on individual criteria. The group defined several desirable properties for rank aggregation functions to satisfy and proved necessary and sufficient conditions for the existence of rank aggregation functions with these properties. The group proved that while it is nearly impossible to avoid all forms of inequity, certain forms of inequity can be avoided by limiting the number of possible rankings on each criterion, restricting the set of possible ranking profiles, or by exploiting known information about the evaluees’ preferences. A manuscript based on this work is in preparation.

Wavelets and Diabetes: This group (Aboufadel, Olsen, Castellano) created a new measurement to quantify the variability or predictability of blood glucose in type 1 diabetics.
Based on data from continuous glucose monitors (CGMs), this measurement -- called a PLA index -- is a new tool to classify diabetics based on their blood glucose behavior and may become a new method in the management of diabetes. The PLA index was discovered while taking a wavelet-based approach to study the CGM data; this approach emphasizes the shape of a blood glucose graph. Their article, "Quantification of the variability of continuous glucose monitoring data algorithms," has appeared.

Greater Than Sudoku: This group (Smith, Burgers, Varga) investigated aspects of a variation of a Sudoku puzzle that uses inequalities between adjacent cells rather than numerical clues. They showed that the cells of an m by n inequality block form a partially ordered set, and that an inequality block is solvable if and only if it is acyclic. They also defined an equivalence relation on the set of solvable 2 by 2 inequality blocks and proved that there are 224 Greater Than Shidoku (4 by 4) boards with unique solutions. A manuscript is in preparation.

Hypergeometric sums and identities: This group (Tefera, Dahlberg, Ferdinands) studied computerized proof techniques, specifically Gosper's and Zeilberger's algorithms and the Wilf-Zeilberger proof method, as well as counting (combinatorial) proof techniques. Using these techniques, the team investigated several hypergeometric sums, found closed forms of various challenging and interesting hypergeometric sums, and proved identities involving binomial coefficients. Using counting techniques, the team found an elegant proof for one of the challenging problems proposed in the June 2009 issue of Mathematics Magazine (the solution is submitted for publication). The team also gave an elementary and elegant approach, using the WZ method, to sums of Choi, Zornig and Rathie. Their article, "A Wilf-Zeilberger Approach to Sums of Choi, Zornig and Rathie," appeared in Quaestiones Mathematicae.
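The WZ method proves identities like these symbolically, but before proving an identity it is common to check it numerically for small cases. As a sketch of that workflow (using a classic textbook identity, not one of the team's specific results), the following Python verifies that the sum of squared binomial coefficients equals the central binomial coefficient:

```python
from math import comb

def lhs(n):
    # Sum of squared binomial coefficients: sum_k C(n, k)^2.
    return sum(comb(n, k) ** 2 for k in range(n + 1))

def rhs(n):
    # Central binomial coefficient: C(2n, n).
    return comb(2 * n, n)

# Numerically confirm the identity sum_k C(n,k)^2 = C(2n,n) for small n.
for n in range(10):
    assert lhs(n) == rhs(n)
print("identity holds for n = 0..9")
```

A check like this catches typos before investing effort in a symbolic WZ certificate, which is what actually constitutes the proof.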
Cost-conscious voters: This group (Hodge, Golenbiewski, Moats) developed an axiomatic model of cost-conscious voters in referendum elections. They used this model to prove a variety of useful results, including: (i) that the probability of cost-consciousness approaches zero as the number of questions grows without bound; (ii) that, under certain conditions, elections in which voters are cost-conscious will always contain at least a weak Condorcet winning outcome (and a weak Condorcet losing outcome); and (iii) that the interdependence structures within the preferences of cost-conscious voters can be varied and unpredictable. Their article, "Cost-conscious voters in referendum elections," appeared in Involve.

This group (Aboufadel, Boyenger, Madsen) developed a new method to create portraits that imitate the style of artist Chuck Close. Wavelets were used for detecting and classifying edges. While Chuck Close uses a diamond tiling of the plane for his portraits, this new method can use regular tilings by triangles and other objects. Their article, "Digital creation of Chuck Close block-style portraits using wavelet filters," appeared in 2010 in the Journal of Mathematics and the Arts. In addition, the research group created a digital "Chuck Close-like" print of Ramanujan that was featured at the art exhibition at the 2011 Joint Mathematics Meetings.

Hausdorff Metric Geometry: This group (Schlicker, Montague) obtained some fascinating results on finite betweenness in the Hausdorff metric geometry. More specifically, they found necessary and sufficient conditions under which there can be a finite set between two sets A and B. One unexpected consequence of this characterization is that for some sets A and B, there can be finite sets at some locations between A and B but not at others. This group also found some interesting preliminary results on convexity in this geometry.
Their paper on this work is "Betweenness of Compact Sets."

The Geometry of Polynomials: This group (Schlicker, Nuchi, Shatzer) investigated several problems related to the Sendov conjecture: a two-circle-type theorem for polynomials with all real zeros, finding polynomials with minimal deviation from their roots to their critical points, and maximizing the deviation from the roots of a polynomial with real roots (and its derivative) to their centroids. The most interesting results were obtained in the latter problem, where the group made significant progress in proving its conjecture about the polynomials with maximal deviation.

Gerrymandering: This group (Hodge, Marshall, Patterson) explored the notion of convexity as it relates to the problem of detecting gerrymandering and producing optimal congressional redistrictings. They defined the convexity coefficient of a district or region, and used Monte Carlo simulation to approximate the convexity coefficient of each of the 435 congressional districts in the United States. They explored several theoretical questions pertaining to approximation of convexity coefficients and the effect of subdividing regions using straight-line cuts. They also explored ways to modify the convexity coefficient to account for population density and irregularities in state boundaries. Their article, "Gerrymandering and Convexity," appeared in the September 2010 issue of the College Mathematics Journal, and the convexity coefficient introduced therein has its own entry on Wolfram MathWorld.

Convexity and Gerrymandering, from the 2008 REU

Group A (Schlicker, Honigs, Martinez) defined and investigated three different types of connectedness in the geometry of the Hausdorff metric and obtained some preliminary results about sets that satisfy each type of connectedness. Each finite configuration in this geometry has an associated bipartite graph.
Several results this group obtained about edge covers of graphs gave insight into the possible number of sets at each location between the end sets in any configuration. A paper based on this research, "Missing edge coverings of bipartite graphs and the geometry of the Hausdorff metric," has been accepted for publication by the Journal of Geometry.

Group B (Hodge, Lahr, Krines) explored questions pertaining to the notion of separability in voter preferences over multiple questions. Specifically, they generalized methods for counting preseparable extensions, which are used to build larger preference orders from smaller ones, and developed a characterization of the types of preferences that can be constructed via preseparable extensions. They also characterized the algebraic structure of the sets of permutations that preserve the separability of a given separable preference order, and discovered a connection between separable preference orders and Boolean term orders, which have applications to abstract algebra and comparative probability theory. Their article, "Preseparable extensions of multidimensional preferences," appeared in 2009 in the journal Order.

Group C (Dickinson, Guillot, Castelaz) focused on finding all the locally and globally maximally dense packings of 1 to 5 circles on the standard triangular torus. Roughly, a packing of n equal circles is locally maximally dense if there exists a positive epsilon so that if each circle center is moved by less than epsilon, the density must decrease (i.e., the smallest pairwise distance between the centers must decrease). A packing of n equal circles is globally maximally dense if it is the most dense locally maximally dense packing.
Two published articles resulted from this research: "Optimal Packings of up to Five Equal Circles on a Square Flat Torus" (with Sandi Xhumari), which appeared in Beiträge zur Algebra und Geometrie; and "Optimal Packings of up to Six Equal Circles on a Triangular Flat Torus" (with Sandi Xhumari), which appeared in the Journal of Geometry.

Group D (Dogru, Komlos, Gorski) investigated the dynamics of the dual billiard map in the Euclidean plane and the hyperbolic plane. The orbit behavior of the map changes with respect to the shape and size of the table. Their work concentrated on regular polygonal tables which tessellate the Euclidean or hyperbolic plane.

Group A (Aboufadel, Armstrong, Smietana) investigated position coding and invented two new position codes -- one based on binary wavelets, the other on a base-12 system combined with binary matrices. A manuscript has been posted to the arXiv.

Group B (Wells, Lerch, Leshin) addressed the problem of finding an optimal seating strategy to maximize acquaintances made at successive events.

Group C (Schlicker, Schultheis, Morales) created a computer program to automate the computation of the number of elements at each location on a Hausdorff segment between two finite sets. They also created new integer sequences which arise from special types of Hausdorff configurations. Their paper, "Polygonal chain sequences in the space of compact sets," appeared in 2008 in the Journal of Integer Sequences.

Group D (Dickinson, Hediger, Taylor) solved San Gaku geometry problems from Japan, ones that involved parallel lines. They generalized these problems and solutions to spherical and hyperbolic geometry.

Group A (Boelkins, From and Kolins) investigated several problems in the geometry of polynomials focused on the relationship between the set of zeros and the set of critical numbers. The article, "Polynomial Root Squeezing," appeared in Mathematics Magazine in February 2008.
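The relationship between zeros and critical numbers studied in this line of work rests on a classical fact (Rolle's theorem, and more generally Gauss–Lucas): the critical points of a polynomial with all real roots lie between its extreme roots. A small illustrative sketch, restricted to a cubic so the derivative can be solved by the quadratic formula:

```python
import math

def critical_points_of_cubic(r1, r2, r3):
    """Critical points of p(x) = (x - r1)(x - r2)(x - r3).

    Expanding, p'(x) = 3x^2 - 2(r1+r2+r3)x + (r1*r2 + r1*r3 + r2*r3),
    solved here with the quadratic formula.
    """
    s = r1 + r2 + r3
    q = r1 * r2 + r1 * r3 + r2 * r3
    disc = (2 * s) ** 2 - 12 * q  # discriminant of p'; >= 0 for real roots
    root = math.sqrt(disc)
    return (2 * s - root) / 6, (2 * s + root) / 6

lo, hi = critical_points_of_cubic(1.0, 2.0, 4.0)
# By Rolle's theorem, both critical points lie strictly between the
# extreme roots 1 and 4 (one in (1, 2), one in (2, 4)).
print(1.0 < lo < hi < 4.0)  # True
```

"Root squeezing" and "root dragging" results describe how these critical points move as the roots are perturbed, which is exactly what a numeric experiment like this lets one observe.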
Group B (Fishback, DeMore, Bachman) investigated various problems involving "least squares derivatives" associated with various random variables and corresponding families of orthogonal polynomials.

Group C (Aboufadel, Lytle, and Yang) applied wavelets and statistics to match handwriting samples and determine forgeries. The article, "Detecting Forged Handwriting with Wavelets and Statistics," written by these researchers, appeared in the Spring 2006 issue of the Rose-Hulman Undergraduate Mathematics Journal.

Group D (Schlicker, Blackburn, Zupan) investigated the geometry the Hausdorff metric imposes on the collection of non-empty compact subsets of n-dimensional real space. A major finding for this group was that, although there are configurations in this space that allow k elements at each location between two sets for infinitely many different values of k (including examples for k between 1 and 18), there is no possible configuration that allows exactly 19 elements at each location. The paper "A Missing Prime Configuration in the Hausdorff Metric Geometry," with Chantel Blackburn, Kristina Lund, Patrick Sigmon, and Alex Zupan, appeared in 2009 in the Journal of Geometry. A second paper is in preparation.

Group A (Sorensen, Morris, and VanHouten) created "Bubble Bifurcations" in dynamical systems.

Group B (Dickinson, Katschke, and Simons) solved San Gaku geometry problems from Japan, and generalized these problems and solutions to spherical and hyperbolic geometry.

Group C (Aboufadel, Brink, and Colthorp) applied wavelets and other tools to the problem of finding airplanes in aerial photographs.

Group D (Schlicker, Lund, and Sigmon) investigated the geometry the Hausdorff metric imposes on the collection of non-empty compact subsets of n-dimensional real space. In particular, they found many interesting occurrences of Fibonacci and Lucas numbers in this geometry.
The paper "Fibonacci sequences in the space of compact sets," with Kris Lund and Patrick Sigmon, appears in Involve, Vol. 1 (2008), No. 2, 197-215. The paper "A Missing Prime Configuration in the Hausdorff Metric Geometry," with Chantel Blackburn, Kristina Lund, Patrick Sigmon, and Alex Zupan, has been accepted in the Journal of Geometry.

Group A (Schlicker, Bay and Lembcke) investigated the geometry of the Hausdorff space, in particular the lines of that space. The article "When Lines Go Bad in Hyperspace," written by these researchers, appeared in Demonstratio Mathematica in 2005.

Group B (Aboufadel, Olsen and Windle) applied wavelets and other tools to the problem of breaking CAPTCHAs. The article "Breaking the Holiday Inn Priority Club CAPTCHA," written by these researchers, appeared in the March 2005 College Mathematics Journal.

Group C (Boelkins, Miller and Vugteveen) investigated polynomial root dragging. The article, "From Chebyshev to Bernstein: A Tour of Polynomials Large and Small," written by these researchers, appeared in the May 2006 College Mathematics Journal.

Group D (Wells, Bromenshenkel and Hogg) conducted work in Lie Theory and its relations to rotations in 3-dimensional space.

Group A (Schlicker, Mayberry and Powers) investigated the geometry of the collection of non-empty compact subsets of n-dimensional real space. The article, "A Singular Introduction to the Hausdorff Metric Geometry," written by these researchers and D. Braun from the 2000 REU, appeared in 2005 in the Pi Mu Epsilon Journal.

Group B (Aboufadel, Driskell and Dailey) worked on problems involving wavelets and steganography. The article "Wavelet-Based Steganography," written by Lisa Driskell, appeared in Cryptologia.

Group C (Sorensen, Mikkelson and Armel) investigated the complete bifurcation diagram in dynamical systems.
This work was noted in the article, "Sprinkler Bifurcations and Stability," by Jody Sorensen and Elyn Rykken, which appeared in the November 2010 issue of the College Mathematics Journal.

Group D (Wells, Fagerstrom and DeLong) conducted work in Lie Theory and its relations to rotations in 3-dimensional space.

There was no REU program during 2001.

Group A (Schlicker and Braun) investigated the geometry of the Hausdorff space.

Group B (Aboufadel, Cox and Oostdyk) developed bivariate Daubechies scaling functions.

Group C (Sorensen, Ashley and Van Spronsen) conducted work in "The Real Bifurcation Diagram." The article "Symmetry in Bifurcation Diagrams," written by these researchers, appeared in the Fall 2002 Pi Mu Epsilon Journal.

Group D (Fishback and Horton) researched Mandelbrot sets for ternary number systems. The article "Quadratic Dynamics in Matrix Rings: Tales of Ternary Number Systems," written by these researchers, appeared in Fractals, Volume 13, 2005.

Group E (Wells, Pierce and Taylor) investigated rotations that arise from chemistry.

These publications are based upon work supported by the National Science Foundation under Grant Nos. DMS-1262342, DMS-1003993, DMS-0451254, DMS-0137264, and DMS-9820221. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).

Page last modified March 21, 2014