Math Forum Discussions
Topic: § 424 Actual Infinity: We never get it - but we get it!
Replies: 3 Last Post: Feb 5, 2014 4:36 PM
Re: § 424 Actual Infinity: We never get it - but we get it!
Posted: Feb 5, 2014 2:07 PM
WM <wolfgang.mueckenheim@hs-augsburg.de> writes:
> Am Mittwoch, 5. Februar 2014 17:48:42 UTC+1 schrieb Ben Bacarisse:
>> WM <wolfgang.mueckenheim@hs-augsburg.de> writes:
>> > Am Mittwoch, 5. Februar 2014 15:49:43 UTC+1 schrieb Ben Bacarisse:
>> >
>> >> There either is or there is not a bijection between N and the
>> >> set of paths.
>> >
>> > If we find that an uncountable set like all real numbers is a subset
>> > of a countable set like all definitions of numbers, then we have a
>> > contradiction.
>> Is there or is there not a bijection between N and the set of paths?
> Of course there is a bijection between |N and all finite
> definitions. The set of paths is a subset of the latter.
So another question you won't answer. You are not fooling anyone with
this tactic.
>> So there is not a bijection between N and the set of paths? Which is
>> it? Can you construct one or not?
> The simplest construction to show that every antidiagonal is in a
> bijection is the set of all rational numbers.
So no, you can't. You just say stuff like this and hope that everyone
else goes "oh, that must be a bijection -- Prof Mueckenheim says so -- no
need for an actual proof".
>> I never forget it. You have no proof that the set of paths is a subset
>> of anything since you have not yet even defined it!
> Definition: Every path *is* a finite definition.
<flashing lights; sound effect: woop! woop! woop!>
Almost an actual definition. So there is a bijection between N and some
set of paths that meet this rather vague criterion. I'll buy that
(despite the issues raised by William Hughes). Let's call them the WMpaths.
>> >> Here is the definition of the path set to which you have agreed:
>> >>
>> >> || [...] The infinite binary tree, B = (N, T) is a graph
>> >> || whose nodes are the natural numbers (1, 2, 3...) and whose edges are the
>> >> || pairs T = { (i, j) | j = 2i or j = 2i+1 }. The set of infinite rooted
>> >> || paths in B is the set of sequences p(n) with p(1) = 1 and p(n+1) =
>> >> || 2p(n) or p(n+1) = 2p(n)+1. Is this set of paths countable? No.
>> >
>> > You have not defined any path.
> I have defined three sets of paths, namely leading from the root node
> to each node and thereafter being completed by a tail of 000..., or
> 111..., or 010101...
Why are you answering yourself?
> You can add any of your favourite completions by using any of a
> countable set of finite definitions.
Yes, I know how to make the set of WMpaths.
>> No, I have defined the set of paths
> That has cardinal number 1.
>> Don't play silly games. You know what that "or" means. Do you want me
>> to write it out in simpler terms for you?
> Please write every path in as simple terms as you can. Perhaps that
> will remove your block.
Funny. Do you still claim not to understand the definition of the set
of paths or do you want me to write it out in some other form?
Date Subject Author
2/5/14 Re: § 424 Actual Infinity: We never get it - but we get it! Ben Bacarisse
2/5/14 Re: § 424 Actual Infinity: WM never got it - but we get it! Virgil | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2618398&messageID=9380163","timestamp":"2014-04-19T00:16:19Z","content_type":null,"content_length":"21060","record_id":"<urn:uuid:9f89b522-c28d-4efd-9e0d-5436ac7e2a4c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Relativistic Wavelength of Electron in Transmission Electron Microscope
Solving for [itex]\lambda[/itex], I got:
[tex]\lambda = \sqrt{\frac{h^2 c^2}{E^2 - m_{0}^{2} c^4}}[/tex]
This is correct, so far.
Now, this expression is similar, but not identical, to the one I stated at the beginning. My energy term E is expressed in joules, whereas the V in the first equation is expressed in volts.
E is the energy of the electron (rest-energy plus kinetic energy). V is the potential difference through which the electron is accelerated in the TEM.
When you accelerate an electron with charge q, from rest through a potential difference V, it ends up with kinetic energy equal to qV.
This should give you enough information to complete the derivation.
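To make the hint concrete, here is one way the derivation can finish (my sketch, not part of the original reply). With kinetic energy eV added to the rest energy:

[tex]E = m_0 c^2 + eV \;\;\Rightarrow\;\; E^2 - m_0^2 c^4 = 2 m_0 c^2\, eV + (eV)^2[/tex]

[tex]\lambda = \frac{hc}{\sqrt{2 m_0 c^2\, eV + (eV)^2}} = \frac{h}{\sqrt{2 m_0 e V \left(1 + \frac{eV}{2 m_0 c^2}\right)}}[/tex]

Note the factor of e multiplying V everywhere: it converts volts into joules, and is exactly the piece a dimensionally inconsistent formula would be missing.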
Also, your first equation (the one you found somewhere) is incorrect. The units are inconsistent. In the denominator, m_0 c^2 has units of energy (joules), and V has units of volts. You can't add them.
Nevertheless, that equation is close to the correct one. If you finish your derivation correctly, you'll see what's missing. There's a clue in my statement about accelerating the electron, three paragraphs above this one. | {"url":"http://www.physicsforums.com/showthread.php?t=736897","timestamp":"2014-04-18T00:26:09Z","content_type":null,"content_length":"48894","record_id":"<urn:uuid:ec01654a-b7c2-4c27-99ad-f3994b290c7e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Questioning Euclid and Understanding Escher
Story posted October 23, 2003
On the screen are two circles with repeating images inside - one in black and white, one in color. It takes most people a few moments to realize what they're seeing, but soon images of angels and
devils appear in one and fish appear in the other. One drawing is by M.C. Escher; the other is an Escher-like drawing by another artist.
Jennifer Taback, visiting assistant professor of Mathematics, said that when asked for an opinion about such a drawing, most people reply, "Oh, it's very pretty."
Taback spoke on "Questioning Euclid and Understanding Escher" at a recent faculty seminar. "Today I want to go beyond, 'oh, it's pretty,'" she said. "When I look at this, I might see something
slightly different than when you look at this."
Most people see the drawings as containing images that grow increasingly smaller as they approach the edge of the circle, Taback said, while she thinks of all of the devils in the first drawing as
being the same size. "So somehow I must be interpreting this slightly differently than you are."
Taback's area of expertise is geometry, which is also an important aspect of Escher's art. To understand the drawings, you first have to understand tessellation. Tessellation is simply a repeating
pattern; for example, the tile on a bathroom floor is a tessellation. Viewing Escher's drawings as tessellations can help to explain what he's doing.
Looking at another Escher drawing "Liberation" it's easy to see the geometric component.
In this drawing he starts with repeating triangles that eventually morph into repeating birds. Even when he's tessellating a figure, he starts with a geometric shape.
"The essential underlying geometry is easy to understand," Taback said, "because we know about triangles and we know how they repeat."
The circle drawings are a bit different though, because the same shape can't repeat indefinitely since the circle is a confined space.
"Right now, it should look to you like the devils are getting smaller," Taback said, referring to the first drawing. "At the end of the talk, it shouldn't look to you like the devils are getting
To begin the transformation of the audience, however, Taback had to start with the definition of geometry. Most geometry happens in a plane and is known as Euclidian geometry because it follows five
postulates put forth by Euclid:
1. Given any two distinct points, there exists a line segment between them.
2. Any line segment can be extended infinitely in either direction.
3. Given any line segment, one can draw a circle with the segment as a radius, and one endpoint as center.
4. All right angles are congruent.
5. Through a given point not on a given line, there exists at most one line through the point parallel to the given line.
The fifth postulate - known as the Parallel Postulate - was the key to Taback's transformation of the audience.
"This is the one that has inspired controversy," she said. "People tried for a long time to prove the parallel postulate."
Some mathematicians felt that geometry only needed four postulates and that the fifth should follow the other four and should be provable. (No one has been able to prove it, so it has remained a postulate.)
This got a few brilliant minds thinking. They wondered what would happen to geometry if they threw out the parallel postulate and invented a geometry where it wasn't true.
"Back then, it was very daring to think that the parallel postulate could be false," Taback said.
Once you assume that infinitely many parallel lines can be drawn through the point, you need to change everything else about geometry too. This required thinking of parallel lines in a different way. They chose to think of parallel lines simply as non-intersecting lines (and to forget about them being equidistant from one another). They realized that for the parallel postulate to be false it couldn't happen in a
Euclidian plane. The new system of geometry in which the parallel postulate is false is called hyperbolic geometry.
"Hyperbolic geometry doesn't happen in a plane; hyperbolic geometry happens in a disc," Taback said. A line in hyperbolic geometry is a piece of a circle perpendicular to the boundary of the disc. "
[P]arallel now means non-intersecting. This is why the parallel postulate can fail here, it's because the lines curve the way they do."
So, with the problem of the parallel postulate solved, the next question to address was the length of the lines. Lines in Euclidian geometry are infinitely long; in hyperbolic geometry, they're shown
in a disc, so they look short, but is this really the case?
"I really do want my lines to be infinitely long," Taback said. "The problem is that we're all wearing our Euclidian glasses."
In hyperbolic space, repeating objects of the same size have to be drawn smaller and smaller to get them to fit in the disc - but that's only a drawing.
"They look like they're only eight inches long, but they're really infinitely long," Taback said. "So, if you live in hyperbolic space...they would all look the same to you."
Taback turned her attention back to the Escher drawing. In hyperbolic space, triangles, pentagons and other shapes have curved lines, so they look slightly different than they do in Euclidian space.
An Escher circle drawing is happening in hyperbolic space, so Taback views the figures as all being the same size.
"These are all different tessellations of a hyperbolic plane," Taback said.
"There are infinitely many triangles here, all of which are the same size if you lived in the hyperbolic plane...But to us those fish look really, really tiny because that's how we have to draw them
to reflect the hyperbolic plane."
Hyperbolic geometry was developed independently by three different people: Johann Carl Friedrich Gauss (1777-1855); Nicolai Lobachevsky (1792-1856); and Janos Bolyai (1802-1860).
Gauss was a mathematician who had published many other important theoretical works, and though he wrote about hyperbolic geometry, he didn't feel the need to publish any of this work. Lobachevsky was a Russian mathematician who tried to find a publisher for his work on hyperbolic geometry, but wasn't able to for many years. His work didn't become well known until the 1900s. Bolyai came up with
hyperbolic geometry when he was just 18 years old, and he published it as a 20-page appendix to one of his father's textbooks. It was the only thing he ever published, but after his death 20,000
pages of other writings were found.
Escher wasn't trained in mathematics and didn't call what he was doing hyperbolic geometry, but he was able, on his own, to develop a system that works in the same way.
"So, without formal training, he really did derive this idea," Taback said.
"Now, I hope when you see a picture like this," she said, pointing the Escher-like fish tessellation, " you won't just say 'isn't that pretty.' You'll say 'There's a hyperbolic plane, and all those
fish are the same size.'"
{"url":"http://www.bowdoin.edu/news/archives/math/000015.shtml","timestamp":"2014-04-16T10:16:23Z","content_type":null,"content_length":"23678","record_id":"<urn:uuid:264f537e-2de7-43c1-9ef6-7280877f6780>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Marilyn snubs Algebra I, again
Topic: Marilyn snubs Algebra I, again
Replies: 3 Last Post: Nov 27, 2000 11:16 AM
Marilyn snubs Algebra I, again
Posted: Nov 20, 2000 9:46 PM
On 1 October 2000, Marilyn vos Savant's column contained the following
problem sent by Joe Black of Athens, TX. "Say that two motorboats on
opposite shores of a river start moving toward each other, but at
different speeds. (Neglect all other factors, like acceleration,
turn-around and current.) When they pass each other the first time, they
are 700 yards from one shoreline. They continue to the opposite shore,
then turn around and start moving toward each other again. When they
pass the second time, they are 300 yards from the other shoreline.
(Their speeds, although different, remain constant.) How wide is the river?"
On 19 November 2000, Marilyn published the following letter from Peter
Mantos of Albuquerque, NM. "I found it interesting that the recent
motorboat/river problem is _not_ solvable in a straightforward
mathematical way: The problem appears to have five unknowns and only
four equations. But you demonstrated that it can be solved with pure logic."
Marilyn says that she has "heard from 462(!) readers, including plenty
of mathematicians and other professionals, who fervently believe my
answer is wrong."
Although there is overwhelming evidence that many professionals are
completely stumped by Algebra I problems, I am surprised that any
mathematician would question her answer.
When I first read the problem, I worked on it without reading her
solution. I made the same observations 1), 2) and 3) that Marilyn made,
but I did not reach her observations 4) and 5). Based on the way I was
taught by Miss Phoebe Fitzpatrick, I identified one unknown and tried to
derive an equation. The first equation reduced to an identity. The
second yielded a quadratic equation with the same solution that Marilyn
obtained logically.
The second letter is indicative of the twisted way in which Algebra I
continues to be taught. It gives readers the fallacious impression that
this "problem is _not_ solvable in a straightforward mathematical way:
The problem appears to have five unknowns and only four equations." Why
does Marilyn continue to promote this type of nonsense?
Dom Rosa
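For readers who want the algebra spelled out, here is a sketch of the standard argument (my reconstruction, not Dom Rosa's or Marilyn's own working). Let W be the river's width. At the first passing the boats have together covered W, and the boat that started from the shore 700 yards from the meeting point has covered 700 yards. Speeds are constant, so at the second passing the combined distance is 3W, and that same boat has covered 3 × 700 = 2100 yards, which is one full crossing plus the 300 yards it has come back from the far shore:

\[
3 \times 700 = W + 300 \quad\Longrightarrow\quad W = 1800 \ \text{yards}.
\]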
Date Subject Author
11/20/00 Marilyn snubs Algebra I, again Domenico Rosa
11/21/00 Marilyn snubs Algebra I, again Sedinger, Harry
11/22/00 Re: Marilyn snubs Algebra I, again Domenico Rosa
11/27/00 Harry's dumber than Marilyn Sedinger, Harry | {"url":"http://mathforum.org/kb/thread.jspa?forumID=206&threadID=483184","timestamp":"2014-04-19T05:02:08Z","content_type":null,"content_length":"21530","record_id":"<urn:uuid:b4e6b21d-92b1-46e3-bc7f-2d6c583c5d0d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deterministic Operations Research: Models and Methods in Linear Optimization
September 2010, ©2010
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems
Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research: modeling real-world problems as linear optimization problems; designing the necessary algorithms to solve these problems; and using mathematical theory to justify algorithmic development.
Treating real-world examples as mathematical problems, the author begins with an introduction to operations research and optimization modeling that includes applications from sports scheduling and the airline industry. Subsequent chapters discuss algorithm design for continuous linear optimization problems, covering topics such as convexity, Farkas' Lemma, and the study of polyhedra, before culminating in a discussion of the Simplex Method. The book also addresses linear programming duality theory and its use in algorithm design, as well as the Dual Simplex Method, Dantzig-Wolfe decomposition, and a primal-dual interior point algorithm. The final chapters present network optimization and integer programming problems, highlighting various specialized topics including label-correcting algorithms for the shortest path problem, preprocessing and probing in integer programming, lifting of valid inequalities, and branch and cut algorithms.
Concepts and approaches are introduced by outlining examples that demonstrate and motivate theoretical concepts. The accessible presentation of advanced ideas makes core aspects easy to understand and encourages readers to understand how to think about the problem, not just what to think. Relevant historical summaries can be found throughout the book, and each chapter is designed as the continuation of the "story" of how to both model and solve optimization problems by using the specific problems, linear and integer programs, as guides. The book's various examples are accompanied by the appropriate models and calculations, and a related Web site features these models along with Maple™ and MATLAB content for the discussed calculations.
Thoroughly class-tested to ensure a straightforward, hands-on approach, Deterministic Operations Research is an excellent book for operations research and linear optimization courses at the upper-undergraduate and graduate levels. It also serves as an insightful reference for individuals working in the fields of mathematics, engineering, computer science, and operations research who use and design algorithms to solve problems in their everyday work.
1. Introduction to Operations Research.
1.1 What is Deterministic Operations Research?
1.2 Introduction to Optimization Modeling.
1.3 Common Classes of Mathematical Programs.
1.4 About the Book.
2. Linear Programming Modeling.
2.1 Resource Allocation Models.
2.2 Work Scheduling Models.
2.3 Models and Data.
2.4 Blending Models.
2.5 Production Process Models.
2.6 Multiperiod Models: Work Scheduling and Inventory.
2.7 Linearization of Special Nonlinear Models.
2.8 Various Forms of Linear Programs.
2.9 Network Models.
3. Integer and Combinatorial Models.
3.1 Fixed-Charge Models.
3.2 Set Covering Models.
3.3 Models Using Logical Constraints.
3.4 Combinatorial Models.
3.5 Sports Scheduling and an Introduction to IP Solution Techniques.
4. Real-World Operations Research Applications: An Introduction.
4.1 Vehicle Routing Problems.
4.2 Facility Location and Network Design Models.
4.3 Applications in the Airline Industry.
5. Introduction to Algorithms.
5.1 Exact and Heuristic Algorithms.
5.2 What to Ask When Designing Algorithms?
5.3 Constructive versus Local Search Algorithms.
5.4 How Good is our Heuristic Solution?
5.5 Example of a Local Search Method.
5.7 Other Heuristic Methods.
5.8 Designing Exact Methods: Optimality Conditions.
6. Improving Search Algorithms and Convexity.
6.1 Improving Search and Optimal Solutions.
6.2 Finding Better Solutions.
6.3 Convexity: When Does Improving Search Imply Global Optimality?
6.4 Farkas’ Lemma: When Can No Improving Feasible Direction be Found?
7. Geometry and Algebra of Linear Programs.
7.1 Geometry and Algebra of “Corner Points”.
7.2 Fundamental Theorem of Linear Programming.
7.3 Linear Programs in Canonical Form.
8. Solving Linear Programs: Simplex Method.
8.1 Simplex Method.
8.2 Making the Simplex Method More Efficient.
8.3 Convergence, Degeneracy, and the Simplex Method.
8.4 Finding an Initial Solution: Two-Phase Method.
8.5 Bounded Simplex Method.
8.6 Computational Issues.
9. Linear Programming Duality.
9.1 Motivation: Generating Bounds.
9.2 Dual Linear Program.
9.3 Duality Theorems.
9.4 Another Interpretation of the Simplex Method.
9.5 Farkas’ Lemma Revisited.
9.6 Economic Interpretation of the Dual.
9.7 Another Duality Approach: Lagrangian Duality.
10. Sensitivity Analysis of Linear Programs.
10.1 Graphical Sensitivity Analysis.
10.2 Sensitivity Analysis Calculations.
10.3 Use of Sensitivity Analysis.
10.4 Parametric Programming.
11. Algorithmic Applications of Duality.
11.1 Dual Simplex Method.
11.2 Transportation Problem.
11.3 Column Generation.
11.4 Dantzig-Wolfe Decomposition.
11.5 Primal-Dual Interior Point Method.
12. Network Optimization Algorithms.
12.1 Introduction to Network Optimization.
12.2 Shortest Path Problems.
12.3 Maximum Flow Problems.
12.4 Minimum Cost Network Flow Problems.
13. Introduction to Integer Programming.
13.1 Basic Definitions and Formulations.
13.2 Relaxations and Bounds.
13.3 Preprocessing and Probing.
13.4 When are Integer Programs "Easy"?
14. Solving Integer Programs: Exact Methods.
14.1 Complete Enumeration.
14.2 Branch-and-Bound Methods.
14.3 Valid Inequalities and Cutting Planes.
14.4 Gomory’s Cutting Plane Algorithm.
14.5 Valid Inequalities for 0-1 Knapsack Constraints.
14.6 Branch-and-Cut Algorithms.
14.7 Computational Issues.
15. Solving Integer Programs: Modern Heuristic Techniques.
15.1 Review of Local Search Methods: Pros and Cons.
15.2 Simulated Annealing.
15.3 Tabu Search.
15.4 Genetic Algorithms.
15.5 GRASP Algorithms.
Appendix A: Background Review.
A.1 Basic Notation.
A.2 Graph Theory.
A.3 Linear Algebra.
David J. Rader Jr., PhD, is Associate Professor of Mathematics at Rose-Hulman Institute of Technology, where he is also the editor of the Rose-Hulman Institute of Technology Undergraduate Mathematics Journal. Dr. Rader currently focuses his research in the areas of nonlinear 0-1 optimization, computational integer programming, and exam timetabling.
"Dr. Phillips has used other texts, but he is especially enthused with this book, influenced by student feedback. He says, 'Algorithmic ideas are introduced at a pace that emphasizes and encourages intuitive understanding.'" (Informs Journal on Computing, 1 June 2012)
"The book is aimed at serving upper-undergraduate and graduate students of all fields as a comprehensive textbook or as a reference for studies on the subject." (Zentralblatt MATH, 2011)
"The result is a clear-cut resource for understanding three cornerstones of deterministic operations research: modeling real-world problems as linear optimization problems; designing the necessary
algorithms to solve these problems; and using mathematical theory to justify algorithmic development." (InfoTECH Spotlight - TMCnet, 8 February 2011)
Purchase Options
Deterministic Operations Research: Models and Methods in Linear Optimization
ISBN : 978-0-470-48451-7
632 pages
September 2010, ©2010
Deterministic Operations Research: Models and Methods in Linear Optimization
ISBN : 978-1-118-30456-3
632 pages
May 2012
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP002372.html","timestamp":"2014-04-16T04:41:19Z","content_type":null,"content_length":"51038","record_id":"<urn:uuid:34c9dfe6-8298-4961-92f6-924e2d7856ef>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 58
If an area is contained by a rational straight line and the fifth binomial, then the side of the area is the irrational straight line called the side of a rational plus a medial area.
Let the area AC be contained by the rational straight line AB and the fifth binomial AD divided into its terms at E, so that AE is the greater term.
I say that the side of the area AC is the irrational straight line called the side of a rational plus a medial area.
Make the same construction shown before. It is then manifest that MO is the side of the area AC. It is then to be proved that MO is the side of a rational plus a medial area.
Since AG is incommensurable with GE, therefore AH is also incommensurable with HE, that is, the square on MN with the square on NO. Therefore MN and NO are incommensurable in square.
Since AD is a fifth binomial straight line, and ED the lesser segment, therefore ED is commensurable in length with AB.
But AE is incommensurable with ED, therefore AB is also incommensurable in length with AE. Therefore AK, that is, the sum of the squares on MN and NO, is medial.
Since DE is commensurable in length with AB, that is, with EK, while DE is commensurable with EF, therefore EF is also commensurable with EK.
And EK is rational, therefore EL, that is, MR, that is, the rectangle MN by NO, is also rational.
Therefore MN and NO are straight lines incommensurable in square which make the sum of the squares on them medial, but the rectangle contained by them rational.
Therefore MO is the side of a rational plus a medial area and is the side of the area AC.
Therefore, if an area is contained by a rational straight line and the fifth binomial, then the side of the area is the irrational straight line called the side of a rational plus a medial area. | {"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX58.html","timestamp":"2014-04-19T12:06:03Z","content_type":null,"content_length":"5793","record_id":"<urn:uuid:75be006c-8e69-4f6c-a96c-9839d15fea4e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
Comparison of Standard Containers
This is a rough comparison of the different container types that are provided by the OCaml language or by the OCaml standard library. In each case, n is the number of valid elements in the container.
Note that the big-O cost given for some operations reflects the current implementation but is not guaranteed by the official documentation. Hopefully it will not become worse. Anyway, if you want
more details, you should read the documentation about each of the modules. Often, it is also instructive to read the corresponding implementation.
See also: Standard Library Examples
Lists: immutable singly-linked lists
Adding an element always creates a new list l from an element x and list tl. tl remains unchanged, but it is not copied either.
• "adding" an element: O(1), cons operator ::
• length: O(n), function List.length
• accessing cell i: O(i)
• finding an element: O(n)
Well-suited for: IO, pattern-matching
Not very efficient for: random access, indexed elements
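A few lines showing these costs in practice (a sketch; any recent OCaml standard library should accept it):

let l = 1 :: [2; 3]       (* O(1) cons: the tail [2; 3] is shared, not copied *)
let n = List.length l     (* O(n): walks the whole list *)
let x = List.nth l 2      (* O(i): walks i cells to reach index i *)
let seen = List.mem 3 l   (* O(n): linear search *)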
Arrays and strings: mutable vectors
Arrays and strings are very similar. Strings are specialized in storing chars (bytes), have some convenient syntax and store them compactly.
• "adding" an element: O(n)
• length: O(1), function Array.length
• accessing cell i: O(1)
• finding an element: O(n)
Well-suited for sets of elements of known size, access by numeric index, in-place modification. Basic arrays and strings have a fixed length. For extensible strings, the standard Buffer type can be
used (see below).
Set and Map: immutable trees
Like lists, these are immutable and they may share some subtrees. They are a good solution for keeping older versions of sets of items.
• "adding" an element: O(log n)
• returning the number of elements: O(n)
• finding an element: O(log n)
Sets and maps are very useful in compilation and meta-programming, but in other situations hash tables are often more appropriate (see below).
Hashtbl: automatically growing hash tables
OCaml hash tables are mutable data structures, which are a good solution for storing (key, data) pairs in one single place.
• adding an element: O(1) if the initial size of the table is larger than the number of elements it contains; O(log n) on average if n elements have been added in a table which is initially much
smaller than n.
• returning the number of elements: O(1)
• finding an element: O(1)
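For example (a sketch using the standard Hashtbl module):

let h = Hashtbl.create 16          (* initial size; the table grows as needed *)
let () = Hashtbl.add h "key" 42    (* O(1) amortized insertion *)
let v = Hashtbl.find h "key"       (* O(1) average; raises Not_found if absent *)
let n = Hashtbl.length h           (* O(1) *)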
Buffer: extensible strings
Buffers provide an efficient way to accumulate a sequence of bytes in a single place. They are mutable.
• adding a char: O(1) if the buffer is big enough, or O(log n) on average if the initial size of the buffer was much smaller than the number of bytes n.
• adding a string of k chars: O(k * "adding a char")
• length: O(1)
• accessing cell i: O(1)
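A minimal Buffer example (a sketch):

let b = Buffer.create 64            (* initial capacity in bytes *)
let () = Buffer.add_char b 'a'      (* amortized O(1) *)
let () = Buffer.add_string b "bc"   (* O(k) for k chars *)
let s = Buffer.contents b           (* "abc": copies out the accumulated bytes *)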
OCaml queues are mutable first-in-first-out (FIFO) data structures.
• adding an element: O(1)
• taking an element: O(1)
• length: O(1)
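For instance (a sketch):

let q = Queue.create ()
let () = Queue.push 1 q    (* adds at the back, O(1) *)
let () = Queue.push 2 q
let x = Queue.pop q        (* takes from the front (FIFO): returns 1, O(1) *)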
OCaml stacks are mutable last-in-first-out (LIFO) data structures. They are just like lists, except that they are mutable, i.e. adding an element doesn't create a new stack but simply adds it to the
• adding an element: O(1)
• taking an element: O(1)
• length: O(n) | {"url":"http://ocaml.org/learn/tutorials/comparison_of_standard_containers.html","timestamp":"2014-04-19T00:04:05Z","content_type":null,"content_length":"13833","record_id":"<urn:uuid:1e9decd1-8a90-4ef3-8100-a969876f858c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] trigonometry
September 6th 2008, 08:43 AM #1
hi! I can't understand what a "bearing of 325 degrees" means in trigonometry.
For example, take this problem from the 10th standard of the IGCSE board:
Two boats set off from X at the same time. Boat A sets off on a bearing of 325 degrees and with a velocity of 14 km per hour. Boat B sets off on a bearing of 235 degrees with a velocity of 18 km per hour. Calculate the distance between the boats after 2.5 hours.
Kindly explain how to get the triangle when the bearing in degrees is given.
September 6th 2008, 08:52 AM #2
A bearing like that is technically an azimuth. It is turned clockwise from north or the positive y-axis right around the circle.
325-235=90, so you have a right triangle which is easy to work with.
Boat A travels 14*2.5=35 km
Boat B travels 18*2.5=45 km.
So you have two sides of length 35 and 45 and a 90 degree angle.
All you need is Pythagoras.
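Carrying that last step through (my arithmetic, not part of the original reply):

\[
d = \sqrt{35^2 + 45^2} = \sqrt{1225 + 2025} = \sqrt{3250} \approx 57.0 \ \text{km}.
\]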
Last edited by galactus; November 24th 2008 at 05:38 AM.
September 7th 2008, 02:21 AM #3
thanks a lot | {"url":"http://mathhelpforum.com/trigonometry/47900-solved-trignometry.html","timestamp":"2014-04-18T07:05:27Z","content_type":null,"content_length":"34930","record_id":"<urn:uuid:a7da5b8a-6ef6-40b0-ba69-9b6bc07fe196>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Option pricing and risk management
Chapter 2 discussed the basic principles underlying the two major option pricing formulae. It clearly showed that two totally different approaches were followed in each case, and yet both arrived at approximately the same value for the price of an option. Both these approaches made certain assumptions in their derivation of the formulae in order to simplify the final expressions, and to produce a more workable solution. They both, however, made substantial use of statistical probability in order to determine the likelihood of a certain event occurring. Chapter 3 gave a detailed derivation of both the Black and Scholes and the Binomial tree pricing formulae, as well as the associated criticism and advantages of the respective approaches.
obtained can be used to judge the riskiness of a portfolio in the given market conditions. All of these formulae are used on a daily basis by financial professionals in the daily operations of a
magnitude of different institutions in order to value financial portfolios, the risk associated with these portfolios and the probability of certain events occurring within the portfolios in order to
make better decisions and increase the profitability of these institutions, without actually knowing the underlying principles. - As- such these --formulae merely become a number crunching business,
and interpretation of these numbers, without realising the pitfalls associated with the approaches in establishing these formulae. The random walk theory for unrestricted movement assumes that at t=
0, the rates are at the origin. This can be interpreted as 0%, and instinctively any person would agree that 0% is not possible in any fixed income environment, due to the time value attached to
money. Choosing the ruling rate as the origin would be more practical in determining the origin, but care must be taken in assigning probabilities to the up and down movements. At the onset of the
problems amongst the emerging markets during 1998, the probability of rates increasing once it reached 17,00% was much higher than that of the rates decreasing. However, barely a month later when the
rates had reached its peak at more than 21,00% and were declining again, the probability of the rates increasing once it reached 17,00% again was much lower than that of it decreasing further. This
would have a significant effect on the probability generating function, and hence also an effect on the mean and variance thus derived. The probability curve of the rates during these times were also
not represented by a standard normal curve, and as such the heteroscedacity of the curve had a major influence on the pricing of options. During extreme periods both the random walk theory and the
Wiener process would be totally skewed, and unreliable answers would be derived from this approach. By 'adjusting the expression for a non-standard distribution, these problems can be eliminated and
an accurate approach once again obtained using this process. Problems that could occur when using this approach to solve inaccuracies would amongst others include the following: The incorrect
distribution function is being applied for the specific set of conditions prevailing in the market. This is due to the fact that under these abnormal conditions the distribution function can change
over a very short period of time. Incorrect skews being applied to the distribution function due to fast changing market conditions. When to revert back to the normal distribution function. It then
becomes a question not of an improper analytical approach, but incorrect timing approach. Since markets mostly perform according to the standardised normal distribution function the Wiener approach
hold true for most applications. | {"url":"https://ujdigispace.uj.ac.za/handle/10210/6728","timestamp":"2014-04-19T12:59:22Z","content_type":null,"content_length":"26394","record_id":"<urn:uuid:1f15543f-0404-49e7-817d-bd1ce2af42c7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 2010 [00653]
Re: Manually nested Tables faster than builtin nesting?
• To: mathgroup at smc.vnet.net
• Subject: [mg113449] Re: Manually nested Tables faster than builtin nesting?
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Fri, 29 Oct 2010 06:28:01 -0400 (EDT)
Bill Rowe wrote:
> On 10/27/10 at 5:13 AM, sschmitt at physi.uni-heidelberg.de (Sebastian
> Schmitt) wrote:
>> I find that manually nested Tables are faster (factor 2 in my
>> example) compared to the builtin nesting. I probably do a stupid
>> thinko. Could you please have a look?
>> In[52]:= imax = jmax = 5000;
>> In[53]:= (tabletable = Table[Table[i*j, {i, imax}], {j, jmax}];) //
>> Timing // First
>> Out[53]= 1.84
>> In[54]:= (table = Table[i*j, {i, imax}, {j, jmax}];) // Timing //
>> First
>> Out[54]= 14.55
> I get similar timing results. My speculation is as follows:
> When you use the built-in nesting, you explicitly do 25 million
> multiplies. No possibility of vector processing.
> But when you use explicit nesting, you are multiplying a packed
> array by a scalar. My guess is Mathematica employs some vector
> processing here and gains a speed up over individual multiplies.
> If my speculation is right, then there should be some operations
> which will not gain such a dramatic speed-up.
This is heading in the right direction. Actually the entire issue
involves use, or not, of packed arrays. No vectorization of
multiplication is done here. If it were, there would be an additional
speed gain.
imax = jmax = 2000;
First the basic nested version.
In[2]:= First[Timing[tnest1 = Table[Table[i*j, {i, imax}], {j, jmax}];]]
Out[2]= 0.239963
Now we vectorize and it gets noticeably faster.
In[3]:= First[Timing[tnest2 = Table[j *
Table[i, {i, imax}], {j, jmax}];]]
Out[3]= 0.099985
In[4]:= tnest1===tnest2
Out[4]= True
Now without explicit nesting.
In[5]:= First[Timing[tnonest1 = Table[j*i, {i, imax}, {j, jmax}];]]
Out[5]= 1.46478
In[6]:= tnonest1===tnest2
Out[6]= True
Why so much slower? It has to do with packing. Neither is packed, but
the first ones have packed rows.
In[10]:= And @@ Map[PackedArrayQ,tnest2]
Out[10]= True
In[11]:= And @@ Map[PackedArrayQ,tnest1]
Out[11]= True
In[12]:= And @@ Map[PackedArrayQ,tnonest1]
Out[12]= False
One might well wonder why the packing is not done in this case. It has
to do with evaluation semantics. Table sees the second iterator upper
bound as a variable (because it is Held). Indeed, it could be changed in
process of handling the first iterator. Hence Table cannot set up a
packed array.
We can force preevaluation of that iterator bound as below. This
recovers the expected speed.
In[13]:= First[Timing[tnonest2 = With[{jmax=jmax},
Table[j*i, {i, imax}, {j, jmax}]];]]
Out[13]= 0.138979
We can even pack the entire thing, simply by forcing numerical
evaluation of both inner and outer iterator bounds.
In[15]:= First[Timing[tnonest3 = With[{imax=imax,jmax=jmax},
Table[j*i, {i, imax}, {j, jmax}]];]]
Out[15]= 0.13898
In[16]:= PackedArrayQ[tnonest3]
Out[16]= True
In[17]:= tnonest3===tnonest2===tnonest1
Out[17]= True
One will notice the construction of tnest2 was slightly faster. This is presumably an indication of the speed gain from vectorization of the multiplication.
Daniel Lichtblau
Wolfram Research | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Oct/msg00653.html","timestamp":"2014-04-16T10:16:48Z","content_type":null,"content_length":"28174","record_id":"<urn:uuid:f88c6d89-0bb7-402a-a80f-3e5eee1247c9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Introduction to Mathematical Cosmology 2nd edition by Islam (ISBN 9780521499736)
An Introduction to Mathematical Cosmology: This book provides a concise introduction to the mathematical aspects of the origin, structure and evolution of the universe. The book begins with a brief
overview of observational and theoretical cosmology, along with a short introduction of general relativity. It then goes on to discuss Friedmann models, the Hubble constant and deceleration
parameter, singularities, the early universe, inflation, quantum cosmology and the distant future of the universe. This new edition contains a rigorous derivation of the Robertson-Walker metric. It
also discusses the limits to the parameter space through various theoretical and observational constraints, and presents a new inflationary solution for a sixth degree potential. This book is
suitable as a textbook for advanced undergraduates and beginning graduate students. It will also be of interest to cosmologists, astrophysicists, applied mathematicians and mathematical physicists.
Rent An Introduction to Mathematical Cosmology 2nd edition today, or search our site for Jamal N. textbooks. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Cambridge
University Press. | {"url":"http://www.chegg.com/textbooks/an-introduction-to-mathematical-cosmology-2nd-edition-9780521499736-0521499739","timestamp":"2014-04-20T22:03:56Z","content_type":null,"content_length":"20924","record_id":"<urn:uuid:2ea3e39a-fb9a-49dc-8373-6b266d44e47c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Help with Probability Problems
Replies: 2 Last Post: Mar 31, 2013 4:27 PM
Re: Help with Probability Problems
Posted: Mar 31, 2013 4:27 PM by BBPHX99 (Posts: 17, Registered: 12/6/04)
Hi:
These are simple probability questions. Please call and I will be happy to explain the solutions in a detailed manner for you to understand.
Howard Heller
(212) 874-4105
Date Subject Author
3/23/13 Help with Probability Problems ireckless
3/24/13 Re: Help with Probability Problems Peter Scales
3/31/13 Re: Help with Probability Problems BBPHX99 | {"url":"http://mathforum.org/kb/thread.jspa?messageID=8795405&tstart=0","timestamp":"2014-04-16T07:58:08Z","content_type":null,"content_length":"18657","record_id":"<urn:uuid:065d6240-bb61-477a-bcf0-0933d203d999>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
set of p-adic integers is homeomorphic to Cantor set; how?
Could somebody explain with due brevity why/how the set of p-adic integers is homeomorphic to the Cantor set less one point for any prime p?
This is a quote from Wikipedia: Cantor Set: "The Cantor set is also homeomorphic to the p-adic integers, and, if one point is removed from it, to the p-adic numbers."
Can somebody explain this simply? I don't really get p-adic #'s.
P.S. Not homework, don't want a proof, just understanding of it. | {"url":"http://www.physicsforums.com/showthread.php?t=101634","timestamp":"2014-04-17T07:25:11Z","content_type":null,"content_length":"22620","record_id":"<urn:uuid:5e4bcb05-7a2b-448a-8183-2a4cec69ffb2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics and Phenomenology. The correspondence between Oskar Becker and Hermann Weyl, 2005
"... Hilbert’s program is, in the first instance, a proposal and a research program in the philosophy and foundations of mathematics. It was formulated in the early 1920s by German mathematician
David Hilbert (1862–1943), and was pursued by him and his collaborators at the University of Göttingen and els ..."
Cited by 4 (0 self)
Hilbert’s program is, in the first instance, a proposal and a research program in the philosophy and foundations of mathematics. It was formulated in the early 1920s by German mathematician David
Hilbert (1862–1943), and was pursued by him and his collaborators at the University of Göttingen and elsewhere in the 1920s
"... We describe the pictorial representations of infinite ordinals used in teaching set theory, and discuss a possible use in naturalistic foundations of mathematics. 1 ..."
We describe the pictorial representations of infinite ordinals used in teaching set theory, and discuss a possible use in naturalistic foundations of mathematics. 1
"... Abstract. The notions of “construction principles ” is proposed as a complementary notion w.r. to the familiar “proof principles ” of Proof Theory. The aim is to develop a parallel analysis of
these principles in Mathematics and Physics: common construction principles, in spite of different proof pr ..."
Abstract. The notion of "construction principles" is proposed as a complementary notion w.r. to the familiar "proof principles" of Proof Theory. The aim is to develop a parallel analysis of these principles in Mathematics and Physics: common construction principles, in spite of different proof principles, justify the effectiveness of Mathematics in Physics. The very "objects" of these disciplines are grounded on common genealogies of concepts: there is no transcendence of concepts nor of objects without their contingent and shared constitution. A comparative analysis of Husserl's and Gödel's philosophy is hinted, with many references to H. Weyl's reflections on Mathematics and Physics. Introduction (with F. Bailly) With this text, we will first of all discuss a distinction, internal to mathematics, between "construction principles" and "proof principles" (see [Longo, 1999; 2002]). In short, it will be a question of grasping the difference between the construction of mathematical concepts and structures | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=4750065","timestamp":"2014-04-17T14:33:37Z","content_type":null,"content_length":"16331","record_id":"<urn:uuid:182d601c-35fb-45b2-8da7-1ae69ec18168>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Current status of a conjecture of Bloch
In the seminal paper $K_2$ and algebraic cycles, Bloch makes the following conjecture:
Suppose $A$ is a local Noetherian integral domain with quotient field $F$
• $K_2(A) \to K_2(F)$ is injective.
• Assume in addition $A$ is normal; then $K_2(A) = \cap_p K_2(A_p)$, where $p$ runs through all height 1 prime ideals in $A$.
What is the current status of this conjecture?
I only know that the first statement is true for discrete valuation rings by a theorem of Dennis and Stein. Can we prove it for a local algebra over a field?
Moreover, the second claim in this conjecture is a Hartogs-like statement, so we want it still to hold without the local assumption; could this be true? For example, can we prove it for a Dedekind domain or, more specifically, the coordinate ring of a smooth affine curve over a (finite) field?
algebraic-k-theory ac.commutative-algebra
1 Answer
The second statement is false (even if we modify it by replacing $K_2(A)$ by its image in $K_2(F)$). A counterexample is $A = k[x,y,z]_{(x,y,z)}/(z^2 - xy)$. See J. Reine Angew. Math. 381 (1987), 37–50.
{"url":"http://mathoverflow.net/questions/72782/current-status-of-a-conjecture-of-bloch","timestamp":"2014-04-18T21:03:32Z","content_type":null,"content_length":"50492","record_id":"<urn:uuid:ae6d2cce-a9af-423d-a922-39e7897bede9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/cahit/answered","timestamp":"2014-04-20T00:51:10Z","content_type":null,"content_length":"102177","record_id":"<urn:uuid:5f554ae6-b751-4b48-880b-aff0ebede6a6>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Starting with Scaling
This post is from LeanAgileTraining by Joe Little. Click here to see the original post in full.
I have gotten a few questions lately that go about like this:
We are starting Scrum. We have the kind of projects that require scaling. But how do we start with Scrum and have some scaling?
The first thing to say is: The basic framework of Scrum does not attempt to answer this question. It assumes you will use lean-agile-scrum principles and values, and devise your own, specific
solution to this problem.
Still, the Scrum community has dealt with this problem many times. So, here is what Jeff Sutherland and Ken Schwaber and lots of others think are some good ideas to start with.
Let’s assume you are talking about putting 3 teams together to work on one project. To release roughly every 4 months. Let’s assume each team is about 7 people, including the PO and the SM.
Let’s also assume that we continue to focus on Team success. Meaning: We realize that the core of activity is inside the Team. If each Team is not ‘working’, then no amount of scaling is going to
help. So, the Team’s are real teams. Each person is 100% dedicated to one Team.
1. Chief Product Owner. Each team has a product owner, and, in addition, there is a Chief Product Owner — who manages the Master Product Backlog for all 3 teams. So, the CPO is not dedicated to one
Team, but to all 3 teams.
2. Product Owner group. The CPO and...
read more
One Response to “Starting with Scaling”
1. Read Dean Leffingwell’s book, ‘Agile Software Requirements’. Title doesn’t sound like the answer to this problem, but it is excellent for it. | {"url":"http://www.allaboutagile.com/starting-with-scaling/","timestamp":"2014-04-19T22:05:21Z","content_type":null,"content_length":"11303","record_id":"<urn:uuid:50b293b3-cf62-4d8a-bf6f-f1d3b319e33c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00086-ip-10-147-4-33.ec2.internal.warc.gz"} |
pound force
The unit of force in the British gravitational system of units, approximately the force which a mass of 1 pound exerts on whatever it is resting on, on the Earth; that is, the weight of a mass of 1 pound. Symbol, lbf. (Some, however, have used the symbol Lb, with the L capitalized to distinguish it from the symbol lb for the pound.^1)
The weight of a mass of 1 pound varies from place to place; it weighs less at the equator than at the poles, and is lighter at high altitudes than at sea level. The pound-force, however, is
unvarying; it is defined as the weight (a force!) that a body with a mass of 1 pound would exert at a location where the acceleration due to gravity was exactly 32.1740 feet per second per second,
which is about the value at sea-level at a latitude of 45°. So 1 pound-force = 32.1740 poundals. Later a standard value of 9.80665 meters per second per second was adopted for the acceleration due to
gravity, and 1 pound-force = 4.448 221 615 260 5 newtons.
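Numerically, using the international pound of 0.453 592 37 kg (my arithmetic, consistent with the value quoted above):

\[
1\ \text{lbf} = 0.45359237\ \text{kg} \times 9.80665\ \text{m/s}^2 = 4.448\,221\,615\ldots\ \text{N}.
\]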
1. Ernst Schmidt.
International system of units. MKSA system in applied thermodynamics.
in Systems of Units. National and International Aspects.
Carl F. Kayan, editor.
Publication No. 57 of the AAAS.
Washington, D. C.: American Association for the Advancement of Science, 1959.
Page 274: “the pound-force is abbreviated Lb with capital initial letter.”
terms of use | {"url":"http://www.sizes.com/units/pound_force.htm","timestamp":"2014-04-20T18:48:34Z","content_type":null,"content_length":"6434","record_id":"<urn:uuid:9d43cae7-c465-4f17-b29f-f8bd10853c9e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
CaptDallas' Redneck Theoretical Physics Forum
I played around with the Atmospheric R values, just to show there is more than one way to skin a catfish. The best way, I am firmly convinced, is the modified Kimoto equation.
As a refresher, Kimoto is based on the simplification of the S-B relationship where dF/dT = 4F/T: the change in flux with respect to the change in temperature is equal to, or "proportional" to, four times the flux divided by the temperature. That is an approximation and it is not truly a linear relationship.
Depending on your choice of a frame of reference, the approximation could be 5F/T or just F/T; S-B has a fourth power relationship between temperature and flux.
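As a quick check of where that 4 comes from (my gloss, not the original post's), differentiating Stefan-Boltzmann exactly gives

\[
F = \sigma T^4 \;\Rightarrow\; \frac{dF}{dT} = 4\sigma T^3 = \frac{4F}{T},
\]

so at T = 255 K (F ≈ 240 W/m²) the slope is about 3.8 W/m² per K, while at 288 K (F ≈ 390 W/m²) it is about 5.4 W/m² per K. The approximation enters when this local slope is treated as a linear relationship over a finite temperature change.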
Using the 4F/T is equivalent to using 255K as a reference temperature. If I use the surface as my frame of reference, then the modification a*4F/T is required, where a is a variable dependent on the thermodynamic reference with respect to 255K. You can make it suitable for any other reference temperature just by changing the characteristics of a. Simple, right? Evidently not so much for a lot of people, but that is how it works.
Once people get beyond that silly 4, they begin to realize you can make the equation approximately a linear relationship for small changes from any consistent thermodynamic frame of reference.
The neat part is once that sinks in, you can see how flexible that simple equation can be. aFt + bFl + eFr, for thermal, latent and radiant. I mentioned in a previous post the F? should always be
considered. We are dealing with a dynamic system so there are going to be surprises. That is the beauty of the modified equation, it can learn with you.
Fr can be split into Fr(GHG) + Fr(water,liquid) + ... F sub whatever you either have good information on or wish to learn more about. Fr can be separated into a complete line by line(LBL) spectral
analysis if you wish.
The whole equation can be used for dF/dT surface, dF/dT 600mb or any reference layer you wish to study in any direction by just adjusting the coefficients for the reference temperatures of source and
sink. dF/dT tropics with dF/dT sub-tropics could be used for average energy transfer between regions. It is a powerful equation, as long as you reference back to a common frame of reference.
Since boundary layers are plentiful and hard to deal with in fluid dynamics, using a boundary as a reference frame in two directions could simplify resolution of boundary layer flux changes with time. I haven't tried that yet, but it appears very likely to work.
By skipping boundary layers, ie surface to tropopause versus surface to stratopause you can compare effective emissivities to help resolve solar impacts in the atmosphere. As long as you have
sufficient temperature and pressure resolution of the target layers, you can double check flux relationships between layers.
Most importantly, you can select layers where the best temperature data is available, like the 500mb mid-troposphere satellite data.
The more physical data you have, the more you can learn about F? | {"url":"http://redneckphysics.blogspot.com/2011/11/learning-equation-kimoto-modified.html","timestamp":"2014-04-20T00:38:50Z","content_type":null,"content_length":"77913","record_id":"<urn:uuid:5610a253-1079-4250-943b-34fb0228fa9e>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
First evidence of new physics in b ↔ s transitions
We combine all the available experimental information on $B_s$ mixing, including the very recent tagged analyses of $B_s \to J/\psi\,\phi$ by the CDF and DØ collaborations. We find that the phase of the $B_s$ mixing amplitude deviates more than 3σ from the Standard Model prediction. While no single measurement has a 3σ significance yet, all the constraints show a remarkable agreement with the combined result. This is a first evidence of physics beyond the Standard Model. This result disfavours New Physics models with Minimal Flavour Violation with the same significance.
PACS Codes: 12.15.Ff, 12.15.Hh, 14.40.Nb
1. Letter
In the Standard Model (SM), all flavour and CP violating phenomena in weak decays are described in terms of quark masses and the four independent parameters in the Cabibbo-Kobayashi-Maskawa (CKM)
matrix [1,2]. In particular, there is only one source of CP violation, which is connected to the area of the Unitarity Triangle (UT). A peculiar prediction of the SM, due to the hierarchy among CKM
matrix elements, is that CP violation in B[s ]mixing should be tiny. This property is also valid in models of Minimal Flavour Violation (MFV) [3-8], where flavour and CP violation are still governed
by the CKM matrix. Therefore, the experimental observation of sizable CP violation in B[s ]mixing is a clear (and clean) signal of New Physics (NP) and a violation of the MFV paradigm. In the past
decade, B factories have collected an impressive amount of data on B[d ]flavour- and CP-violating processes. The CKM paradigm has passed unscathed all the tests performed at the B factories down to
an accuracy just below 10% [9-11]. This has been often considered as an indication pointing to the MFV hypothesis, which has received considerable attention in recent years. The only possible hint of
non-MFV NP is found in the penguin-dominated b → s non-leptonic decays. Indeed, in the SM, the b → s penguin modes should measure, to a good approximation, the same CP violation as the tree-level b → ccs decays, while the measured values tend to lie below that expectation [12-21]; hadronic models predict a shift in the opposite direction in many cases [22-29].
From the theoretical point of view, the hierarchical structure of quark masses and mixing angles of the SM calls for an explanation in terms of flavour symmetries or of other dynamical mechanisms,
such as, for example, fermion localization in models with extra dimensions. All such explanations depart from the MFV paradigm, and generically cause deviations from the SM in flavour violating
processes. Models with localized fermions [30-32], and more generally models of Next-to-Minimal Flavour Violation [33], tend to produce too large effects in ε[K ][34,35]. On the contrary, flavour
models based on nonabelian flavour symmetries, such as U(2) or SU(3), typically suppress NP contributions to s ↔ d and possibly also to b ↔ d transitions, but easily produce large NP contributions to
b ↔ s processes. This is due to the large flavour symmetry breaking caused by the top quark Yukawa coupling. Thus, if (nonabelian) flavour symmetry models are relevant for the solution of the SM
flavour problem, one expects on general grounds NP contributions to b ↔ s transitions. On the other hand, in the context of Grand Unified Theories (GUTs), there is a connection between leptonic and
hadronic flavour violation. In particular, in a broad class of GUTs, the large mixing angle observed in neutrino oscillations corresponds to large NP contributions to b ↔ s transitions [36-39].
In this Letter, we show that present data give evidence of a B[s ]mixing phase much larger than expected in the SM, with a significance of more than 3σ. This result is obtained by combining all
available experimental information with the method used by our collaboration for UT analyses and described in Ref. [40].
We perform a model-independent analysis of NP contributions to B[s] mixing using the following parametrization [41-46]:
C[Bs] e^{2 i ϕ[Bs]} = <B[s]| H_eff^full |B̄[s]> / <B[s]| H_eff^SM |B̄[s]>,
where β[s] is defined as
β[s] = arg( -(V[ts] V[tb]*) / (V[cs] V[cb]*) ).
We use the following experimental input: the CDF measurement of Δm[s] [47], the semileptonic asymmetry in B[s] decays [48], the dimuon charge asymmetry from DØ [49] and CDF [50], the measurement of the B[s]
lifetime from flavour-specific final states [51-59], the two-dimensional likelihood ratio for ΔΓ[s] and ϕ[s] = 2(β[s] - ϕ[Bs]) from the tagged angular analysis of B[s] → J/ψϕ decays by CDF [60], and the correlated constraints on Γ[s], ΔΓ[s]
and ϕ[s] from the same analysis performed by DØ [61]. For the latter, since the complete likelihood is not available yet, we start from the results of the 7-variable fit in the free-ϕ[s] case from
Table 1 of ref. [61]. We implement the 7 × 7 correlation matrix and integrate over the strong phases and decay amplitudes to obtain the reduced 3 × 3 correlation matrix used in our analysis. In the
DØ analysis, the twofold ambiguity inherent in the measurement (ϕ[s ]→ π - ϕ[s], ΔΓ[s ]→ - ΔΓ[s], cos δ[1,2 ]→ - cos δ[1,2]) for arbitrary strong phases was removed using a value for cos δ[1,2 ]
derived from the BaBar analysis of B[d ]→ J/ΨK* using SU(3). However, the strong phases in B[d ]→ J/ΨK* and B[s ]→ J/Ψϕ cannot be exactly related in the SU(3) limit due to the singlet component of ϕ.
Although the sign of cos δ[1,2 ]obtained using SU(3) is consistent with the factorization estimate, to be conservative we reintroduce the ambiguity in the DØ measurement. To this end, we take the
errors quoted by DØ as Gaussian and duplicate the likelihood at the point obtained by applying the discrete ambiguity. Indeed, looking at Fig. 2 of ref. [61], this seems a reasonable procedure.
Hopefully DØ will present results without assumptions on the strong phases in the future, allowing for a more straightforward combination. Finally, for the CKM parameters we perform the UT analysis
in the presence of arbitrary NP as described in ref. [34], obtaining β[s ]= 0.0409 ± 0.0038. The new input parameters used in our analysis are summarized in Table 1, all the others are given in Ref.
[34]. The relevant NLO formulae for ΔΓ[s ]and for the semileptonic asymmetries in the presence of NP have been already discussed in refs. [34,62,63].
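As a purely illustrative sketch (ours, in Python; not the collaboration's code), the duplication of a Gaussian likelihood at the mirror solution ϕ[s] → π - ϕ[s] described above can be written as:

import numpy as np

def ambiguous_likelihood(phi, mu, sigma):
    # Gaussian centred on the quoted result, plus its mirror image,
    # restoring the twofold strong-phase ambiguity
    g = lambda m: np.exp(-0.5 * ((phi - m) / sigma) ** 2)
    return 0.5 * (g(mu) + g(np.pi - mu))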
Table 1. Input parameters used in the analysis.
The results of our analysis are summarized in Table 2. We see that the phase deviates from the SM expectation by more than 3σ; we comment below on the stability of this significance. In Fig. 1 we present the two-dimensional 68% and 95%
probability regions for the NP parameters. The twofold ambiguity of the B[s] → J/Ψϕ analysis is slightly broken by the presence of the CKM-subleading terms in the expression of Γ[12]/M[12] (see for example eq. (5) of ref. [63]).
Figure 1. From left to right and from top to bottom, 68% (dark) and 95% (light) probability regions in the planes of the NP parameters.
Table 2. Fit results for NP parameters, semileptonic asymmetries and width differences.
Before concluding, we comment on our treatment of the DØ result for the tagged analysis and on the stability of the NP fit. Clearly, the procedure to reintroduce the strong phase ambiguity in the DØ
result and to combine it with CDF is not unique given the available information. In particular, the Gaussian assumption can be questioned, given the likelihood profiles shown in Ref. [61]. Thus, we
have tested the significance of the NP signal against different modeling of the probability density function (p.d.f.). First, we have used the 90% C.L. range for ϕ[s ]= [-0.06, 1.20]° given by DØ to
estimate the standard deviation, obtaining ϕ[s ]= (0.57 ± 0.38)° as input for our Gaussian analysis. This is conservative since the likelihood has a visibly larger half-width on the side opposite to
the SM expectation (see Fig. 2 of Ref. [61]). Second, we have implemented the likelihood profiles for ϕ[s ]and ΔΓ[s ]given by DØ, discarding the correlations but restoring the strong phase ambiguity.
The likelihood profiles include the second minimum corresponding to ϕ[s ]→ ϕ[s]+π, ΔΓ → -ΔΓ, which is disfavoured by the oscillating terms present in the tagged analysis and is discarded in our
Gaussian analysis. Also this approach is conservative since each one-dimensional profile likelihood is minimized with respect to the other variables relevant for our analysis. It is remarkable that
both methods give a deviation of more than 3σ from the SM expectation.
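The quoted uncertainty can be reproduced under the stated Gaussian assumption (a short illustrative computation of ours, taking the 90% C.L. interval as central, so that its half-width equals 1.645 standard deviations):

μ = (-0.06 + 1.20)/2 = 0.57,    σ = (1.20 - (-0.06)) / (2 × 1.645) ≈ 0.38,

in agreement with the ϕ[s] = (0.57 ± 0.38) input used above.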
To illustrate the impact of the experimental constraints, we show in Fig. 2 the p.d.f. for the NP phase in B[s] → J/Ψϕ including only the CDF results or only the DØ results. For
DØ, we show results obtained with the Gaussian and with the likelihood-profile treatment of the errors. Finally,
it is remarkable that the different constraints in Fig. 2 are all consistent among themselves and with the combined result. In Table
2 we also quote the fit results for the semileptonic asymmetries and for ΔΓ[s]/Γ[s].
Figure 2. From left to right: P.d.f. for B[s ]→ J/Ψϕ, including only the CDF analysis, including only the DØ Gaussian analysis, including only the DØ likelihood profiles. We show 68% (dark) and 95%
(light) probability regions.
In this Letter we have presented the combination of all available constraints on the B[s ]mixing amplitude leading to a first evidence of NP contributions to the CP-violating phase. With the
procedure we followed to combine the available data, we obtain an evidence for NP at more than 3σ. To put this conclusion on firmer grounds, it would be advisable to combine the likelihoods of the
tagged B[s ]→ J/Ψϕ angular analyses obtained without theoretical assumptions. This should be feasible in the near future. We are eager to see updated measurements using larger data sets from both the
Tevatron experiments in order to strengthen the present evidence, waiting for the advent of LHCb for a high-precision measurement of the NP phase.
It is remarkable that to explain the result obtained for ϕ[s], new sources of CP violation beyond the CKM phase are required, strongly disfavouring the MFV hypothesis. These new phases will in
general produce correlated effects in ΔB = 2 processes and in b → s decays. These correlations cannot be studied in a model-independent way, but it will be interesting to analyse them in specific
extensions of the SM. In this respect, improving the results on CP violation in b → s penguins at present and future experimental facilities is of the utmost importance.
2. Note added
During the review procedure of this Letter, results based on new data were presented by the Tevatron experiments, as well as a combination of Tevatron results on the tagged angular analysis of B[s ]→
J/ψϕ. However these updates are all unpublished. Furthermore, the likelihoods required by our analysis are not publicly available except for the new DØ analysis with no assumption on strong phases [
64]. For the sake of completeness, we quote the result obtained including this new DØ tagged analysis of B[s] → J/ψϕ. Clearly, we no longer need to manipulate the DØ likelihood to remove the strong phase assumption and to account for the non-Gaussian shape as
described above. Remarkably, this updated result is well compatible with the results of this Letter, confirming a deviation from the SM at the level of ~3σ (99.6% probability). More recent
experimental results seem to confirm the effect discussed in this Letter. We will include them in future analyses as soon as they become available.
We are much indebted to M. Rescigno for triggering this analysis and for improving it with several valuable suggestions. We also thank G. Giurgiu, G. Punzi and D. Zieminska for their assistance with
the Tevatron experimental results. We acknowledge partial support from RTN European contracts MRTN-CT-2006-035482 "FLAVIAnet" and MRTN-CT-2006-035505 "Heptools". M.C. is associated to the
Dipartimento di Fisica, Università di Roma Tre. E.F. and L.S. are associated to the Dipartimento di Fisica, Università di Roma "La Sapienza".
1. Phys Rev Lett. 1963, 10:531.
2. Prog Theor Phys. 1973, 49:652.
3. Phys Rev Lett. 1990, 65:2939.
4. Nucl Phys B. 1995, 433:3. [Erratum-ibid. B 507, 549 (1997)]
5. Ciuchini M, Degrassi G, Gambino P, Giudice GF: Nucl Phys B. 1998, 534:3.
6. Buras AJ, Gambino P, Gorbahn M, Jager S, Silvestrini L: Phys Lett B. 2001, 500:161.
7. D'Ambrosio G, Giudice GF, Isidori G, Strumia A: Nucl Phys B. 2002, 645:155.
8. Bona M, Ciuchini M, Franco E, Lubicz V, Martinelli G, Parodi F, Pierini M, Roudeau P, Schiavi C, Silvestrini L, Stocchi A, [UTfit Collaboration]: JHEP. 2005, 0507:028.
9. Bona M, Ciuchini M, Franco E, Lubicz V, Martinelli G, Parodi F, Pierini M, Roudeau P, Schiavi C, Silvestrini L, Stocchi A, Vagnoni V, [UTfit Collaboration]: JHEP. 2006, 0610:081.
10. Charles J, Hocker A, Lacker H, Laplace S, Le Diberder FR, Malcles J, Ocariz J, Pivk M, Roos L, [CKMfitter Group]: Eur Phys J C. 2005, 41:1.
11. Aubert B, [BABAR Collaboration], et al.: arXiv:hep-ex/0607101.
12. Aubert B, [BABAR Collaboration], et al.: Phys Rev Lett. 2007, 98:031801.
13. Aubert B, [BABAR Collaboration], et al.: Phys Rev D. 2007, 76:071101.
14. Aubert B, [BABAR Collaboration], et al.: Phys Rev D. 2007, 76:091101.
15. Aubert B, [BABAR Collaboration], et al.: Phys Rev Lett. 2007, 99:161802.
16. Aubert B, [BABAR Collaboration], et al.: Phys Rev D. 2008, 77:012003.
17. Aubert B, [BABAR Collaboration], et al.: arXiv:0708.2097 [hep-ex].
18. Chen KF, [Belle Collaboration], et al.: Phys Rev Lett. 2007, 98:031802.
19. Abe K, [Belle Collaboration], et al.: Phys Rev D. 2007, 76:091103.
20. Abe K, [Belle Collaboration], et al.: arXiv:0708.1845 [hep-ex].
21. Phys Lett B. 2005, 620:143.
22. Phys Rev D. 2005, 72:075013.
23. Phys Rev D. 2005, 72:114005.
24. Phys Rev D. 2005, 72:114017.
25. Raz G: arXiv:hep-ph/0509125.
26. Phys Rev D. 2006, 74:014003. [Erratum-ibid. D 74, 03901 (2006)]
27. Phys Rev D. 2006, 74:094020.
28. Ann Rev Nucl Part Sci. 2007, 57:405.
29. Nucl Phys B. 2000, 586:141.
30. Phys Rev D. 2005, 71:016002.
31. Contino R, Kramer T, Son M, Sundrum R: JHEP. 2007, 0705:074.
32. Agashe K, Papucci M, Perez G, Pirjol D: arXiv:hep-ph/0509117.
33. Bona M, Ciuchini M, Franco E, Lubicz V, Martinelli G, Parodi F, Pierini M, Roudeau P, Schiavi C, Silvestrini L, Sordini V, Stocchi A, Vagnoni V, [UTfit Collaboration]: JHEP. 2008, 0803:049. [arXiv:0707.0636 [hep-ph]]
34. Davidson S, Isidori G, Uhlig S: arXiv:0711.3376 [hep-ph].
35. Baek S, Goto T, Okada Y, Okumura Ki: Phys Rev D. 2001, 63:051701.
36. Chang D, Masiero A, Murayama H: Phys Rev D. 2003, 67:075013.
37. Harnik R, Larson DT, Murayama H, Pierce A: Phys Rev D. 2004, 69:094024.
38. Phys Lett B. 2003, 565:183.
39. Ciuchini M, D'Agostini G, Franco E, Lubicz V, Martinelli G, Parodi F, Roudeau P, Stocchi A: JHEP. 2001, 0107:013.
40. Phys Rev D. 1993, 47:1021.
41. Goto T, Kitazawa N, Okada Y, Tanaka M: Phys Rev D. 1996, 53:6662.
42. Phys Rev Lett. 1996, 77:4499.
43. Phys Rev D. 1997, 55:5331.
44. Cohen AG, Kaplan DB, Lepeintre F, Nelson AE: Phys Rev Lett. 1997, 78:2300.
45. Phys Lett B. 1997, 407:307.
46. Abulencia A, [CDF Collaboration], et al.: Phys Rev Lett. 2006, 97:242003.
47. Abazov VM, [D0 Collaboration], et al.: Phys Rev Lett. 2007, 98:151801.
48. Abazov VM, [D0 Collaboration], et al.: Phys Rev D. 2006, 74:092001.
49. Buskulic D, [ALEPH Collaboration], et al.: Phys Lett B. 1996, 377:205.
50. Abe F, [CDF Collaboration], et al.: Phys Rev D. 1999, 59:032004.
51. Abreu P, [DELPHI Collaboration], et al.: Eur Phys J C. 2000, 16:555.
52. Ackerstaff K, [OPAL Collaboration], et al.: Phys Lett B. 1998, 426:161.
53. Abazov VM, [D0 Collaboration], et al.: Phys Rev Lett. 2006, 97:241801.
54. Barberio E, [HFAG], et al.: arXiv:hep-ex/0603003.
55. Aaltonen T, [CDF Collaboration], et al.: arXiv:0712.2397 [hep-ex].
56. Abazov VM, [D0 Collaboration], et al.: arXiv:0802.2255 [hep-ex].
57. Bona M, Ciuchini M, Franco E, Lubicz V, Martinelli G, Parodi F, Pierini M, Roudeau P, Schiavi C, Silvestrini L, Stocchi A, Vagnoni V, [UTfit Collaboration]: JHEP. 2006, 0603:080.
58. Bona M, Ciuchini M, Franco E, Lubicz V, Martinelli G, Parodi F, Pierini M, Roudeau P, Schiavi C, Silvestrini L, Stocchi A, Vagnoni V, [UTfit Collaboration]: Phys Rev Lett. 2006, 97:151803.
59. [http://www-d0.fnal.gov/Run2Physics/WWW/results/final/B/B08A/likelihoods/] | {"url":"http://www.physmathcentral.com/1754-0410/3/6","timestamp":"2014-04-21T07:59:19Z","content_type":null,"content_length":"112045","record_id":"<urn:uuid:a77ca2ec-c445-4a1c-8392-6b7a105099a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Combination Problem
February 18th 2006, 06:21 AM
Combination Problem
I thought this would be a simple problem, but I can still not come up with the correct answer.
The problem is:
12 Students are in a class. Five can go to room A, Four to room B, and Three to room C. How many ways can this happen?
Since order is not important, it is a combination problem and not a permutation problem.
So by using nCr for {5,4} = 5 and {5,3} = 10 gives me a total of fifty combinations (using the fundamental counting principle).
Would I not just figure the combination of how many students can go into room B? nCr{4,1} = 4 and then multiply 5 x 10 x 4 ?
I get 200 for the answer but according to the quiz this is not the answer.
Could someone tell me what I am doing wrong?
Thanks in advance
February 18th 2006, 08:07 AM
Originally Posted by jamesinsc
I thought this would be a simple problem, but I can still not come up with the correct answer.
The problem is:
12 Students are in a class. Five can go to room A, Four to room B, and Three to room C. How many ways can this happen?
Since order is not important, it is a combination problem and not a permutation problem.
So by using nCr for {5,4} = 5 and {5,3} = 10 gives me a total of fifty combinations (using the fundamental counting principle).
Would I not just figure the combination of how many students can go into room B? nCr{4,1} = 4 and then multiply 5 x 10 x 4 ?
I get 200 for the answer but according to the quiz this is not the answer.
Could someone tell me what I am doing wrong?
Thanks in advance
The first room can be filled in $12\times 11\times 10\times 9 \times 8=12!/7!$ ways (counting each
permutation as a distinct way of filling the room). Thus for each combination
we have counted 5! permutations, so the number of ways of filling the first
room is $12!/(7! 5!)$.
Similarly for the next room, except that we have only the $7$ leftovers to use,
so there are now $7!/(4!3!)$ ways.
There are no options for who is allocated to room three once the first
two rooms are filled.
So our answer is:
$\frac{12!}{7!5!}\times\frac{7!}{3!4!}=\frac{12!}{5!4!3!}=27720$
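(A quick computational check of this count, added here for reference and not part of the original thread, using Python:)

from math import comb, factorial

print(comb(12, 5) * comb(7, 4) * comb(3, 3))                          # 792 * 35 * 1 = 27720
print(factorial(12) // (factorial(5) * factorial(4) * factorial(3)))  # 27720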
February 18th 2006, 08:23 AM
Thats it!!
Thanks Ron!!!
I was going about that one all wrong. It's hard for an old man to go back to school.
Thanks again
October 19th 2008, 12:43 AM
nameless virus
'Tis maybe an indistinguishable permutation
I thought this would be a simple problem, but I can still not come up with the correct answer.
The problem is:
12 Students are in a class. Five can go to room A, Four to room B, and Three to room C. How many ways can this happen?
Since order is not important, it is a combination problem and not a permutation problem.
So by using nCr for {5,4} = 5 and {5,3} = 10 gives me a total of fifty combinations (using the fundamental counting principle).
Would I not just figure the combination of how many students can go into room B? nCr{4,1} = 4 and then multiply 5 x 10 x 4 ?
I get 200 for the answer but according to the quiz this is not the answer.
Could someone tell me what I am doing wrong?
Thanks in advance
This could be an indistinguishable permutation:
use the formula n!/(p!q!r!...)
Maybe you could try this one.
October 19th 2008, 08:31 AM
Hello, James!
Another approach . . .
12 Students are in a class.
Five can go to room A, Four to room B, and Three to room C.
How many ways can this happen?
Assign 5 students to room A.
. . There are: . $_{12}C_5 \:=\:\frac{12!}{5!7!} \:=\:792$ ways.
From the remaining 7 students, assign 4 students to room B.
. . There are: . $_7C_4 \:=\:\frac{7!}{4!3!} \:=\:35$ ways.
From the remaining 3 students, assign 3 students to room C.
. . Of course, there is: . $_3C_3 \:=\:1$ way.
Therefore, there are: . $792 \times 35 \times 1 \:=\:27,\!720$ ways. | {"url":"http://mathhelpforum.com/statistics/1920-combination-problem-print.html","timestamp":"2014-04-16T07:00:01Z","content_type":null,"content_length":"10947","record_id":"<urn:uuid:d12f84f8-10f4-40fc-8df3-504b40da96a8>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
December 3rd 2006, 07:06 PM #1
Dec 2006
The surface area of a cube is changing at a rate of 8 cm^2/s. How fast is the volume changing when the surface area is 60cm^2?
How would I find the length and the rate of change of the length?
Volume of a cube of side length $l$ is:
$V = l^3$
Surface area of the same cube is:
$A=6 \, l^2$
$\frac{dV}{dt}=3\,l^2\, \frac{dl}{dt}$
$\frac{dA}{dt}=12\,l\, \frac{dl}{dt}$.
Now when the area is $60\ \rm{cm^2}$, $l^2=10\ \rm{cm^2}$, so:
$\frac{dV}{dt}=30\, \frac{dl}{dt}$.
$\frac{dA}{dt}=12 \sqrt{10}\, \frac{dl}{dt}=8\,\rm{cm^2/s}$,
$\frac{dl}{dt}=\frac{2}{3\, \sqrt{10}}$,
$\frac{dV}{dt}=\frac{30\times 2}{3\,\sqrt{10}}=2\,\sqrt{10}\ \rm{cm^3/s}$
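(A quick numerical check in Python, added editorially and not part of the original post:)

from math import sqrt

l = sqrt(10)                # side length when A = 6*l^2 = 60 cm^2
dA_dt = 8.0                 # cm^2/s, given
dl_dt = dA_dt / (12 * l)    # from dA/dt = 12*l*dl/dt
dV_dt = 3 * l**2 * dl_dt    # from dV/dt = 3*l^2*dl/dt
print(dV_dt, 2 * sqrt(10))  # both ~6.3246 cm^3/s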
December 3rd 2006, 08:46 PM #2
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/calculus/8375-help.html","timestamp":"2014-04-17T16:18:12Z","content_type":null,"content_length":"30535","record_id":"<urn:uuid:64c15e1a-bcf9-42ac-b766-e83ebafdc33c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prime Series 2: The Intuition Behind The Infinitude of Prime Numbers
Note: This is the third part of the Prime Number Series
1. Part I: Introduction to Prime Numbers
In Introduction to Prime Numbers, we conjectured that prime numbers become scarcer as the counting numbers increase. In this post, we discuss intuitively that there is no greatest prime,
or that there are infinitely many prime numbers.
Before proceeding with our discussion, it is noteworthy to remember that a number can either be prime or composite. We also know that composite numbers are product of primes.
Many people know Euclid for his work in geometry, but only a few know about his work in number theory. In Book IX of The Elements, Euclid proved that there are infinitely many prime numbers. The
discussion below is an intuitive explanation of his proof.
Let us have the following analogy:
1. Assume that the only primes that exist are those in S = {2,3,5}. We multiply all the primes in S and then add 1: (2x3x5) + 1 = 31.
2. If 31 is prime (in fact, it is), then our assumption in (1) is false, because we found another prime not in S.
3. We can probably be skeptical and say, hmmm… maybe {2,3,5,31} are the only primes. Again, we multiply all the numbers in the set and add 1: 2 x 3 x 5 x 31 + 1 = 931. Is 931 prime? No. 931 = (19)
(7)(7). Note that 19 and 7 are primes and not in S. Again, we found primes not in set S.
Let us summarize the steps that we have done to look for primes.
1. We multiplied all the primes in set S then added 1.
2. If the result is prime, we found another prime not in S.
3. If the result is composite, then it must be a product of primes. Since dividing it with any of the factors in S will give a remainder 1 (Why?), this implies that at least one of its factors is a
prime not in S. Still, we found another prime not in S.
Now, steps 1-3 can go on forever because once we found a prime not in S, we can always include it in S and repeat the process. For example, we can say maybe {2, 3, 5, 7, 19, 31} are the only primes
and we proceed to 2 and 3.
Since it is possible to repeat the process infinitely,
Given a finite list of primes, we can always find primes not on that list.
Hence, there are infinitely many primes.
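To make steps 1-3 concrete, here is a small sketch (mine, not from the original post) in Python: given any finite list of primes, it produces a prime outside the list.

def new_prime_outside(primes):
    """Given a finite list of primes, return a prime not in the list."""
    n = 1
    for p in primes:
        n *= p
    n += 1                  # product of all primes in S, plus 1
    d = 2
    while d * d <= n:       # look for the smallest prime factor of n
        if n % d == 0:
            return d        # d is prime; it cannot be in `primes` (remainder 1)
        d += 1
    return n                # n itself is prime

S = [2, 3, 5]
print(new_prime_outside(S))   # 31, as in the example above
S = S + [31]
print(new_prime_outside(S))   # 7, a factor of 931 = (19)(7)(7)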
In the next post in the series, we prove formally the discussion above. | {"url":"http://mathandmultimedia.com/2010/06/21/prime-series-2/","timestamp":"2014-04-16T15:59:03Z","content_type":null,"content_length":"344644","record_id":"<urn:uuid:b5260ce1-b52c-4979-b0df-b281737c6e83>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
D - Consider Generic/Template Modules
"Matthias Spycher" <matthias coware.com>
For example:
module A (T, alias M : module) {
private import M;
class Boxed{
T val;
Boxed f() {
T x = M.g(); // we expect module M to provide g() for type T
return Boxed(x);
module B;
private import A!(int, this);
static count = 0;
function int g() {
return ++count;
Boxed!(int) y = f!();
One could basically inject the symbols of a module into another during
import and then resolve template members in the context of both scopes. In
other words, one can configure a module with the symbols from another.
I'm not sure how complicated this feature is to implement -- is it as bad as
general support for ADL? One could view it as a mechanism for selective ADL.
I believe they call it higher-order functors in some languages like SML.
Jan 25 2004
"Walter" <walter digitalmars.com>
You can do this now:
module A (T, alias M) {
class Boxed{
T val;
Boxed f() {
T x = M.g(); // we expect module M to provide g() for type T
return Boxed(x);
module B;
private import A!(int, B);
static count = 0;
function int g() {
return ++count;
Boxed!(int) y = f!();
"Matthias Spycher" <matthias coware.com> wrote in message
news:bv0po5$2ro4$1 digitaldaemon.com...
For example:
module A (T, alias M : module) {
private import M;
class Boxed{
T val;
Boxed f() {
T x = M.g(); // we expect module M to provide g() for type T
return Boxed(x);
module B;
private import A!(int, this);
static count = 0;
function int g() {
return ++count;
Boxed!(int) y = f!();
One could basically inject the symbols of a module into another during
import and then resolve template members in the context of both scopes. In
other words, one can configure a module with the symbols from another.
I'm not sure how complicated this feature is to implement -- is it as bad
as general support for ADL? One could view it as a mechanism for selective ADL.
I believe they call it higher-order functors in some languages like SML.
Jan 26 2004
Very cool indeed!
I was thinking about a module configuration pattern that would allow us to
do AspectD using generic modules, but I have to give it some more thought...
"Walter" <walter digitalmars.com> wrote in message
news:bv4cev$2khv$1 digitaldaemon.com...
You can do this now:
module A (T, alias M) {
class Boxed{
T val;
Boxed f() {
T x = M.g(); // we expect module M to provide g() for type T
return Boxed(x);
module B;
private import A!(int, B);
static count = 0;
function int g() {
return ++count;
Boxed!(int) y = f!();
"Matthias Spycher" <matthias coware.com> wrote in message
news:bv0po5$2ro4$1 digitaldaemon.com...
For example:
module A (T, alias M : module) {
private import M;
class Boxed{
T val;
Boxed f() {
T x = M.g(); // we expect module M to provide g() for type T
return Boxed(x);
module B;
private import A!(int, this);
static count = 0;
function int g() {
return ++count;
Boxed!(int) y = f!();
One could basically inject the symbols of a module into another during
import and then resolve template members in the context of both scopes.
other words, one can configure a module with the symbols from another.
I'm not sure how complicated this feature is to implement -- is it as
bad as general support for ADL? One could view it as a mechanism for selective ADL.
I believe they call it higher-order functors in some languages like SML.
Jan 26 2004 | {"url":"http://www.digitalmars.com/d/archives/22518.html","timestamp":"2014-04-18T13:11:14Z","content_type":null,"content_length":"14045","record_id":"<urn:uuid:af59ca0f-56a9-49d9-9e68-0ee4890d26a3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Challenge (calculus) | {"url":"http://openstudy.com/updates/50df6efee4b0f2b98c87486c","timestamp":"2014-04-21T10:18:05Z","content_type":null,"content_length":"66992","record_id":"<urn:uuid:a1601648-c75d-4293-930a-63078482f6c2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Harmonic Motion Position Velocity And Acceleration Graphs
Simple Harmonic Motion Position Velocity And Acceleration Graphs PDF
... velocity and acceleration graphs for simple harmonic motion. You will also observe the force graph for simple harmonic motion. Part A ... It must return to the same position, and the velocity and
acceleration must also return to the same
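As a minimal illustration of these relationships (added editorially in Python; not taken from any of the listed manuals):

import numpy as np

A, omega = 0.10, 2 * np.pi                 # amplitude (m), angular frequency (rad/s)
t = np.linspace(0.0, 2.0, 201)             # two seconds of motion

x = A * np.cos(omega * t)                  # position
v = -A * omega * np.sin(omega * t)         # velocity = dx/dt
a = -A * omega**2 * np.cos(omega * t)      # acceleration = dv/dt = -omega^2 * x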
Simple Harmonic Motion Kinematics and Dynamics of Simple ... Compare the position-time graphs you obtained with the one you sketched in ... With all three graphs selected, use the Examine tool to
note how the position, velocity and acceleration of the hanger change at various times in a ...
Simple Harmonic Motion I. Introduction: Simple Harmonic Motion (SHM) is a common and very important type of ... a position, velocity, and acceleration graphs for one period (each period has one
maxima and one minima) all with the same time axis.
You can see the repetition in the position, velocity, or acceleration- time graphs. ... “Simple Harmonic Motion” is a form of periodic motion where the ... velocity and acceleration graphs for simple
harmonic motion. You will also look at the force graph for simple harmonic motion. Part ...
In this lab you will observe simple harmonic motion qualitatively in the laboratory and use a ... what will the graphs of position, velocity and acceleration versus ... velocity and position during
simple harmonic motion. Procedure . Part I: Hooke's Law. 1.
1 Simple Harmonic Motion Simple ... motion. You will first examine qualitatively the period of a pendulum, as well as the position, velocity, and acceleration of the pendulum as a ... what is the
sign of the acceleration? Why? 3. Obtain motion graphs when the pendulum is undergoing a large ...
... Simple Harmonic Motion Theory: • What is simple harmonic motion (SHM)? Explain and draw an example. • Define Hooke’s Law. • Equations for position, velocity and acceleration vs. time for SHM ...
As a result you’ll get 3 graphs; position, velocity and acceleration vs. time. Fit ...
When you plot position, velocity and acceleration as a function of time you get the following graphs. a. Find the amplitude. b. ... A particle is executing simple harmonic motion. The displacement x
as a function of time t is shown in the
Part ObservationsC: of Simple Harmonic Motion First, hang 1.000 kg from the spring. Set Logger Pro to plot position vs. time, ... Compare your position, velocity and acceleration graphs with your
predictions on page 1. Resolve any discrepancies. F re Body Di ag m for Mass
Simple Harmonic Motion ... Now that we have determined the spring constant, we are ready to obtain plots of position, velocity, and acceleration vs. time for the oscillating mass. 10. ... Using the
existing t-axis of the velocity and acceleration graphs and your y-axis,
Energy in Simple Harmonic Motion We can describe an oscillating mass in terms of its position, velocity, and acceleration as a ... Click to record position and velocity data. Print your graphs and
compare to your predictions. Comment on any differences.
Simple Harmonic Motion 1 Object ... to better understand the relationships between the physical quantities position, velocity and acceleration foran object undergoingsimpleharmonic motion. ... motion
detector graphs in step 4 of the procedure.
Physics 326 Lab 5 9/15/13 1 Damped Simple Harmonic Motion Purpose To understand the relationships between force, acceleration, velocity, position, and
Simple Harmonic Motion EQUIPMENT INTRODUCTION The purpose of this experiment is to find the minimum and maximum values for the velocity, acceleration and force of a mass on a spring. The position,
velocity and acceleration of the
Simple Harmonic Motion: a ... plotted against displacement from the balanced position. Simple harmonic motion is characterised by: a = - k s where k is a constant connected to the mass on the ...
velocity and acceleration against time graphs.
acceleration vs. time (a vs. t) graphs for oscillation over 1 period. Motion Detector Laptop ... Observations of Simple Harmonic Motion First, ... Compare your position, velocity and acceleration
graphs with your predictions on
Simple Harmonic Motion I Objectives ... • measure the position and velocity using the Vernier Motion Detector. ... one can derive the instantaneous velocity v(t) and acceleration a(t) functions
(using elementary calculus):
SIMPLE PENDULUM AND PROPERTY OF SIMPLE HARMONIC MOTION Alexander Sapozhnikov, ... Calculate the phase differences between velocity and acceleration and position ... Consider the graphs of position,
velocity, ...
Lab 1: Simple Harmonic Oscillations ... Motion sensor Force sensor 750 interface, DataStudio ... Use a force sensor and a motion sensor to graph the force on the spring, the position, velocity, and
acceleration of the mass versus time.
SIMPLE HARMONIC MOTION PURPOSE: To study the relationships of displacement, velocity, acceleration, kinetic energy, potential energy, ... acceleration. Under Display select Two Graphs
and set up separate displacement and velocity windows.
Simple harmonic motion: Characteristic features of simple harmonic motion, Condition for shm: a ... velocity and acceleration against time. ... Q4.2 Look at the following three graphs. Acceleration
time . page 5 of 16 Use graph (a) to find
Simple Harmonic Motion Many mechanical systems in nature move repeatedly back and forth around a central position. In this course we will call this behavior - simple harmonic m ... it can be seen
that the displacement and velocity graphs are out-of-phase (not in synch). From our equations ...
SIMPLE HARMONIC MOTION INTRODUCTION ... Figure 3: Position, velocity and acceleration vs. time For this particular initial condition (starting position at A in Fig. 2), the position curve is a ...
Figure 9: Graphs of sin versus With this approximation Eq.
resulting oscillation “simple harmonic motion”. As this derivation shows, ... observe the similarity between the graphs of position and velocity in an oscillating ... acceleration. When the position
is at its extrema, what is the velocity and acceleration? b.
... Hooke’s Law and Simple Harmonic Motion PH305 4/28/03 ... creating graphs with Velocity, Acceleration and Force only. On the position vs. time graph, ... position and velocity graphs on one page;
the force and acceleration graphs on the next.
4.1 KINEMATICS OF SIMPLE HARMONIC MOTION ... A. DISPLACEMENT AND VELOCITY B. ACCELERATION - THE DEFINING EQUATION OF SHM III. ENERGY IN SHM IV. ... Graph 2 shows the variation with position d of the
displacement x of particles in the medium at a
ENERGY IN SIMPLE HARMONIC MOTION LAB MECH 17. COMP ... We can describe an oscillating mass in terms of its position, velocity, and acceleration as a function of time. We can also describe the system
from an energy perspective. ... the velocity vs. time graphs on the same sheet
... simple harmonic motion. The most common example of this is a pendulum ... both graphically and by calculation, for acceleration, velocity and displacement during SHM. Kinematics of simple
harmonic motion 4.1 ... position graphs for transverse and for longitudinal waves.
Simple Harmonic Motion More tools: Position, velocity, acceleration Energy Damping Simple Harmonic Motion "# $ least friction" = 0.066 sec most friction" = 0.011 sec Damped oscillations of same
system : resonance curves for different amount of friction
EXPERIMENT 4: SIMPLE HARMONIC MOTION ... Suspend the spring from a stand, and position the Motion Detector under the oscillating mass. ... Study the resulting Position, Velocity, Acceleration graphs
and make qualitative comments about them.
Simple Harmonic Motion Experiment . ... period for each of the position, velocity and acceleration fits determined above. 4. Print out a few representative graphs to be included with your laboratory
report. 5. Increase the hanging mass to 60 grams ...
Simple Harmonic Motion Objectives ... • measure the position and velocity using the Vernier Motion Detector. ... one can derive the instantaneous velocity v(t) and acceleration a(t) functions (using
elementary calculus):
Simple Harmonic Motion { Concepts INTRODUCTION ... Figure 3: Position, velocity and acceleration vs. time For this particular initial condition (starting position at A in Figure 2), the position
curve is a ... Figure 9: Graphs of sin versus
... position, velocity, acceleration, angular frequency, energy, and energy exchange. And importantly, since ... We'll also touch on the fact that, in reality, simple harmonic motion represents a
... velocity vs. time graphs. How are they the same? How are they different?
4.1 Kinematics of simple harmonic motion (SHM) ... both graphically and by calculation, for acceleration, velocity and displacement during SHM. ... 4.4.7 Draw and explain displacement–time graphs and
displacement–position graphs for
Mathematical pendulum (1) Simple harmonic motion In case of errors, please contact: [email protected] Aim: To show the relationship between position, velocity and acceleration of a simple
... Oscillations. Simple Harmonic Motion. Physics 211: Lab Oscillations. Simple Harmonic Motion. Reading Assignment: Chapter 15 ... Writing the angular acceleration as the second ... Create a
graphing window that contains the three following graphs: Position vs. Time, Velocity vs. Time, ...
26. Simple Harmonic Motion ... ing down (acceleration opposite velocity) ... What is the angular frequency of the motion? (c) If the angular position starts at … radians, express the angular position
of the object, in radians, after 1 second.
Simple Harmonic Motion SimpleHarmonicMotion.tns ... Referring to the graphs of motion you have seen, carefully describe the critical points of this motion in terms of displacement, velocity and
acceleration. Problem 2 ...
Simple Harmonic Motion is a requirement of all high school physics courses, ... 2. Describe qualitatively, with the aid of graphs, the acceleration, velocity, and displacement of such a particle ...
moves in simple harmonic motion with specified initial position and velocity. j.
... we will study simple harmonic motion of a spring, ... as it oscillates vertically. We will measure the force, position, velocity and acceleration of the spring-mass system as it oscillates,
determining estimates of the spring ... pull up four graphs to record force, position ...
... Simple Harmonic Motion Proposed Subject Usage: Mathematics/Physics ... • Graphs were prepared from the exported data to further analyse the pendulum system. ... or equilibrium position. When a
simple pendulum is displaced from its equilibrium position it ...
Let us find the velocity of a simple harmonic motion. According to the definition of velocity we have v(t) = dx(t)/dt ...
the equilibrium. The picture below shows the graphs for all three dependencies in the case, when
For simple harmonic motion, the acceleration should be given by a x = - ... (Show graphs of Position vs time and Velocity vs time for a different amplitude of oscillation below.) Q7. Investigate what
adding mass to the car (i.e., ...
SIMPLE HARMONIC MOTION ... Use a motion detector to plot graphs of position vs. time, velocity vs. time and acceleration vs. time for the oscillating mass. ... At what point of the motion is the
acceleration the greatest? The least? Discuss with an instructor. | {"url":"http://ebookily.org/pdf/simple-harmonic-motion-position-velocity-and-acceleration-graphs","timestamp":"2014-04-24T16:12:12Z","content_type":null,"content_length":"48052","record_id":"<urn:uuid:ab571aed-bcf7-4c64-a1f2-1f3fbfa4692f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exponential Asymptotics for the Primitive Equations
Following the work of Matthies (2001), it is shown how Gevrey (exponential) regularity of the solution and a classical method can be used together to prove an exponentially accurate approximation result for
a singular perturbation problem with a small parameter. The model considered is the viscous primitive equations of the ocean, although the method is applicable more generally. | {"url":"http://www.newton.ac.uk/programmes/HOP/Abstract2/wirosoetisno.html","timestamp":"2014-04-18T20:47:57Z","content_type":null,"content_length":"2466","record_id":"<urn:uuid:0f59cceb-b547-4970-9893-2a4b27ea0061>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cubic graphs without a perfect matching and a vertex incident to three bridges
The graph shown below (courtesy of David Eppstein) is a common example of a cubic graph that admits no perfect matching:
Are there other examples of cubic graphs that do not admit a perfect matching and, unlike the above example, do not contain a vertex that lies at the intersection of three bridges (i.e. an edge whose
removal increases the number of connected components in the graph)?
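For concreteness, here is a sketch (mine, not from the thread; it assumes the networkx Python library) of one standard form of such an example -- a centre vertex joined by bridges to three copies of K4 with one edge subdivided -- together with a check that no perfect matching exists:

import networkx as nx

G = nx.Graph()
for lab in "abc":
    K = nx.relabel_nodes(nx.complete_graph(4), {i: f"{lab}{i}" for i in range(4)})
    G.update(K)
    G.remove_edge(f"{lab}0", f"{lab}1")                               # subdivide one K4 edge
    G.add_edges_from([(f"{lab}0", f"{lab}s"), (f"{lab}s", f"{lab}1")])
    G.add_edge("centre", f"{lab}s")                                   # the three bridges

assert all(d == 3 for _, d in G.degree())            # the graph is cubic
M = nx.max_weight_matching(G, maxcardinality=True)
print(len(M), G.number_of_nodes() // 2)              # 7 < 8, so no perfect matching

Removing the centre vertex leaves three components of odd order, so Tutte's condition fails.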
2 Answers
Substitute your central vertex in your graph with a 3-cycle $abc$ so that the graph stays cubic. Now subdivide each edge in this 3-cycle. So we have new vertices $u$ connected to $a$
and $b$, $v$ connected to $b$ and $c$, $w$ connected to $c$ and $a$. Now add a final vertex $x$ and connect it to $u,v$ and $w$. This graph has exactly three bridges, none of which
intersect the other at a vertex, and moreover has no perfect matching!
One result which relates the existence of a perfect matching in a cubic graph and its bridges is the following theorem of Petersen from "Die theorie der regularen graphen", Acta Math.
15 (1891), 163-220:
Theorem: Every cubic graph with at most two bridges contains a perfect matching.
As well as this strengthening by Errera, "Du colorage des cartes", Mathesis 36 (1922), 56-60:
Theorem: If all the bridges of a connected cubic graph $G$ lie on a single path of $G$, then $G$ has a perfect matching.
So your instinct is true, in the sense that if the graph has no perfect matching, its bridges do not lie on a path. However the example in the beginning of this answer shows that they
are not necessarily incident at the same vertex.
1 Any two bridges lie on a path, so the second theorem is a strengthening of the first, not a converse. – Brendan McKay Jun 1 '12 at 3:53
Oops! It's fixed. – Gjergji Zaimi Jun 1 '12 at 5:02
I think there are no such graphs.
It was shown by Sumner and Las Vergnas (you can find the references here: http://mathsci.kaist.ac.kr/~sangil/pdf/2009claw.pdf) that a claw-free connected graph has a perfect matching (assuming an even number of vertices, of course!). An intersection of three bridges is clearly a claw.
@Felix: What you are saying is that: no perfect matching implies there is a claw. But a claw does not necessarily imply three bridges. So this does not address the question. – Gjergji
Zaimi May 30 '12 at 15:30
This is surely simple, but given that every vertex in a cubic graph has by definition three neighbours, how can any cubic graph be claw-free? – Anthony Labarre May 30 '12 at 15:32
@Anthony, pick your favourite cubic graph and substitute each vertex with a triangle. You get a claw free cubic graph. – Gjergji Zaimi May 30 '12 at 15:33
@Gjergji Zaimi: Oh, right, I had overlooked the induced subgraph part ;-) Thanks! – Anthony Labarre May 30 '12 at 15:38
Sorry for the wrong answer... – Felix Goldberg Jun 1 '12 at 10:20 | {"url":"http://mathoverflow.net/questions/98385/cubic-graphs-without-a-perfect-matching-and-a-vertex-incident-to-three-bridges?sort=oldest","timestamp":"2014-04-20T13:55:10Z","content_type":null,"content_length":"61838","record_id":"<urn:uuid:7ed0a285-d3f6-4d1b-8775-c86f3126dcc3>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
[no subject]
In your example, one variable is problematic. In that case something like
forval i = 1/50 {
use state`i'
destring var1, replace
save new_state`i'
should fix the problem. In particular,
1. Whenever -var1- is numeric, -destring- won't complain.
2. If -destring- finds non-numeric characters in a string variable it
will stop the loop.
In the last case, you will need to reverse the process:
forval i = 1/50 {
use state`i'
tostring var1, replace
save new_state`i'
3. If more variables are problematic, you need to add them to the
-destring- or -tostring- statements.
(-tostring- is a command, not a function, in Stata.)
4. I'm recommending new filenames for changed datasets as a matter of
general caution. If you have to iterate, you can always overwrite
new_state1, etc., so long as your originals are safe somewhere.
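A combined pass along these lines (an untested sketch, not from the original exchange; the file names and the count of 50 are assumptions) would force -var1- to string everywhere and then append:

forval i = 1/50 {
use state`i', clear
capture confirm string variable var1
if _rc tostring var1, replace
save new_state`i', replace
}
use new_state1, clear
forval i = 2/50 {
append using new_state`i'
}
save master, replace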
On 8 May 2013 01:59, Erika Kociolek <ekociole@gmail.com> wrote:
> Hi there,
> I am working with a large number of datasets that have some variables
> in common. I'll use the two datasets below as an example to illustrate
> my question:
> State1.dta
> var1 var2
> . 1
> 1 .
> 4 .
> 13 2
> State2.dta
> var1 var3
> .
> 1' 2
> 3 3
> 4 4
> As you can see above, the files State1 and State2 have var1 in common.
> I would like to append the State1 and State2 files together to create
> a master file containing data for both states (some fields will be
> blank because those datapoints may not have been collected for a given
> state).
> Due to some issues with the source datasets, in some of the State
> files, var1 is a string (as is the case with State2) and in others,
> var1 is numeric (as is the case with State1). Since the variable types
> are different, I cannot append one file to the other.
> My question: is there any way to define that certain variables be
> imported as a particular type of variable (such as a string variable),
> instead of having to import files and then generate new string
> variables using a function like "tostring"? I'd like to be able to get
> the variables in all .dta files defined consistently so it is easy to
> append them to a master datafile.
> Thanks!
> Erika | {"url":"http://www.stata.com/statalist/archive/2013-05/msg00269.html","timestamp":"2014-04-16T04:22:38Z","content_type":null,"content_length":"8683","record_id":"<urn:uuid:90219333-32b2-4e8d-bc1c-1d5a1ce7e72b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
A rope over the top of a pulley has the same length on each side
A rope over the top of a pulley has the same length on each side. It weighs one third of a pound per foot. On one end hangs a monkey holding a banana, and on the other end is attached a weight equal
to the weight of the monkey and the banana. The banana weighs two ounces per inch. The rope is as long (in feet) as the age of the monkey (in years), and the weight of the monkey (in ounces) is the
same as the age of the monkey's mother.
The combined ages of the monkey and its mother are thirty years. One half the weight of the monkey, plus the weight of the banana, is one fourth as much as the weight of the weight and the weight of
the rope. The monkey's mother is half as old as the monkey will be when it is three times as old as its mother was when she was half as old as the monkey will be when it is as old as its mother will
be when she is four times as old as the monkey was when it was twice as old as its mother was when she was one third as old as the monkey was when it was as old as its mother was when she was three
times as old as the monkey was when it was one fourth as old as it is now.
How long is the banana? | {"url":"http://www.montgomerycollege.edu/~rpenn/teasers.htm","timestamp":"2014-04-18T08:16:15Z","content_type":null,"content_length":"7525","record_id":"<urn:uuid:c2ec60e0-1540-4820-b151-0faf173a22b6>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
Middle City West, PA
Mullica Hill, NJ 08062
Certified and Current Math Teacher
Hello! My name is Christine, and I am a current 6th grade
teacher. I have taught in the classroom for the past 11 years (9 years in 7th grade and 2 years in 6th grade) and have been tutoring
from Pre-k through Calculus for the past 15 years. I have my Bachelors...
Offering 10+ subjects including algebra 1, algebra 2 and geometry | {"url":"http://www.wyzant.com/geo_Middle_City_West_PA_Math_tutors.aspx?d=20&pagesize=5&pagenum=2","timestamp":"2014-04-18T22:32:09Z","content_type":null,"content_length":"60553","record_id":"<urn:uuid:43194505-86ea-49de-a1e8-f58969839c57>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math::Polygon - Class for maintaining polygon data
my $poly = Math::Polygon->new( [1,2], [2,4], [5,7], [1,2] );
print $poly->nrPoints;
my @p = $poly->points;
my ($xmin, $ymin, $xmax, $ymax) = $poly->bbox;
my $area = $poly->area;
my $l = $poly->perimeter;
if($poly->isClockwise) { ... };
my $rot = $poly->startMinXY;
my $center = $poly->centroid;
if($poly->contains($point)) { ... };
my $boxed = $poly->lineClip($xmin, $xmax, $ymin, $ymax);
This class provides an OO interface around Math::Polygon::Calc and Math::Polygon::Clip.
You may add OPTIONS after and/or before the POINTS. You may also use the "points" options to get the points listed. POINTS are references to an ARRAY of X and Y.
When new is called as instance method, it is believed that the new polygon is derived from the callee, and therefore some facts (like clockwise or anti-clockwise direction) will get copied unless
-Option --Default
bbox undef
clockwise undef
points undef
Usually computed from the figure automatically, but can also be specified as [xmin,ymin,xmax, ymax]. See bbox().
Is not specified, it will be computed by the isClockwise() method on demand.
See points() and nrPoints().
example: creation of new polygon
my $p = Math::Polygon->new([1,0],[1,1],[0,1],[0,0],[1,0]);
my @p = ([1,0],[1,1],[0,1],[0,0],[1,0]);
my $p = Math::Polygon->new(points => \@p);
Returns the number of points.
Returns the number of unique points: one less than nrPoints().
Returns the point with the specified INDEX or INDEXES. In SCALAR context, only the first INDEX is used.
In LIST context, the points are returned as list, otherwise as reference to an ARRAY.
Returns the area enclosed by the polygon. The last point of the list must be the same as the first to produce a correct result. The computed result is cached. Function Math::Polygon::Calc::polygon_area().
Returns a list with four elements: (xmin, ymin, xmax, ymax), which describe the bounding box of the polygon (all points of the polygon are inside that area). The computation is expensive, and
therefore, the results are cached. Function Math::Polygon::Calc::polygon_bbox().
Returns a new, beautified version of this polygon. Function Math::Polygon::Calc::polygon_beautify().
Polygons, certainly after some computations, can have a lot of horrible artifacts: points which are double, spikes, etc. The functions provided by this module beautify them.
-Option --Default
remove_spikes <false>
Returns the centroid location of the polygon. The last point of the list must be the same as the first to produce a correct result. The computed result is cached. Function Math::Polygon::Calc::polygon_centroid().
Make sure the points are in clockwise order.
Returns a truth value indicating whether the point is inside the polygon or not. On the edge is inside.
Make sure the points are in counter-clockwise order.
Compare two polygons, on the level of points. When the polygons are the same but rotated, this will return false. See same(). Function Math::Polygon::Calc::polygon_equal().
The points are (in majority) orded in the direction of the hands of the clock. This calculation is quite expensive (same effort as calculating the area of the polygon), and the result is
therefore cached.
Returns true if the first point of the poly definition is the same as the last point.
The length of the line of the polygon. This can also be used to compute the length of any line: of the last point is not equal to the first, then a line is presumed; for a polygon they must
match. Function Math::Polygon::Calc::polygon_perimeter().
Compare two polygons, where the polygons may be rotated wrt each other. This is (much) slower than equal(), but some algorithms will cause an unpredictable rotation in the result. Function Math::Polygon::Calc::polygon_same().
Returns a new polygon object, where the points are rotated in such a way that the point which is losest to the left-bottom point of the bouding box has become the first.
Function Math::Polygon::Calc::polygon_start_minxy().
Implemented in Math::Polygon::Transform: changes on the structure of the polygon except clipping. All functions return a new polygon object or undef.
Returns a polygon object with the points snapped to grid points. See Math::Polygon::Transform::polygon_grid().
raster 1.0
The raster size, which determines the points to round to. The origin [0,0] is always on a grid-point. When the raster value is zero, no transformation will take place.
Mirror the polygon in a line. Only one of the options can be provided. Some programs call this "flip" or "flop".
b 0
line <undef>
rc undef
x undef
y undef
Only used in combination with option rc to describe a line.
Alternative way to specify the mirror line. The rc and b are computed from the two points of the line.
Description of the line which is used to mirror in. The line is y= rc*x+b. The rc equals -dy/dx, the firing angle. If undef is explicitly specified then b is used as constant x: it's a
vertical mirror.
Mirror in the line x=value, which means that y stays unchanged.
Mirror in the line y=value, which means that x stays unchanged.
Returns a moved polygon object: all points are moved over the indicated distance. See Math::Polygon::Transform::polygon_move().
dx 0
dy 0
Displacement in the horizontal direction.
Displacement in the vertical direction.
Returns a resized polygon object. See Math::Polygon::Transform::polygon_resize().
center [0,0]
scale 1.0
xscale <scale>
yscale <scale>
Resize the polygon with the indicated factor. When the factor is larger than 1, the resulting polygon will grow; when smaller than 1, it will be reduced in size. The scale will be respective from the center.
Specific scaling factor in the horizontal direction.
Specific scaling factor in the vertical direction.
Returns a rotated polygon object: all points are rotated around the center over the indicated angle. See Math::Polygon::Transform::polygon_rotate().
-Option --Default
center [0,0]
degrees 0
radians 0
Specify the rotation angle in degrees (between -180 and 360).
Specify the rotation angle in radians (between -pi and 2*pi).
Returns a polygon object where points are removed. See Math::Polygon::Transform::polygon_simplify().
-Option --Default
max_points undef
same 0.0001
slope undef
First, same and slope reduce the number of points. Then, if there are still more than the specified number of points left, the points with the widest angles will be removed until the
specified maximum number is reached.
The distance between two points to be considered "the same" point. The value is used as radius of the circle.
With three points X(n),X(n+1),X(n+2), the point X(n+1) will be removed if the length of the path over all three points is less than slope longer than the direct path between X(n) and X(n+2).
The slope will not be removed around the starting point of the polygon. Removing points will change the area of the polygon.
Clipping a polygon into rectangles can be done in various ways. With this algorithm, the parts of the polygon which are outside the BOX are mapped on the borders. The polygon stays in one piece,
but may have vertices which are followed in two directions.
Returned is one polygon, which is cleaned from double points, spikes and superfluous intermediate points, or undef when no part of the polygon lies inside the BOX. Function Math::Polygon::Clip::polygon_fill_clip1().
Returned is a list of ARRAYS-OF-POINTS containing line pieces from the input polygon. Function Math::Polygon::Clip::polygon_line_clip().
This module is part of Math-Polygon distribution version 1.02, built on September 19, 2011. Website: http://perl.overmeer.net/geo/
Copyrights 2004,2006-2011 by Mark Overmeer. For other contributors see ChangeLog.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See http://www.perl.com/perl/misc/Artistic.html | {"url":"http://search.cpan.org/~markov/Math-Polygon-1.02/lib/Math/Polygon.pod","timestamp":"2014-04-18T09:23:16Z","content_type":null,"content_length":"29830","record_id":"<urn:uuid:c2db9e74-4efd-488e-905f-c17719cf766e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lynn, MA Science Tutor
Find a Lynn, MA Science Tutor
...While actual test questions are not re-used, students discover history can repeat itself in numerous ways with this test. I have been writing in some capacity since the tender age of eight. In
addition to my professional experience as a technical writer, curriculum developer, standardized-test...
6 Subjects: including chemistry, writing, organic chemistry, physical science
...Although I am an epidemiologist, I have worked in an administrative capacity in the mental health field, and have collaborated on psycho-epidemiologic studies. Those in the field of psychology
who are seeking guidance in study design, research administration, proposal-writing, statistical analys...
18 Subjects: including biostatistics, psychology, English, writing
I received my PhD in Chemistry from the University of Massachusetts, Amherst and am currently a research fellow at Mass General Hospital/ Harvard Med. I have taught Bio 101 at a local community
college recently and have experience teaching Chemistry from my graduate studies as well. I began tutori...
7 Subjects: including biochemistry, genetics, algebra 1, physical science
...I have been a software developer since graduation, working first at PayPal, then a start up in Boston, and, most recently, in my own consulting company, where I build and design applications
for clients. I have owned and used Macintosh computers for the past 10 years or so. I write programs for both the mac desktop operating system and the mobile OS, iOS.
19 Subjects: including psychology, Spanish, English, geometry
...I have a Bachelor's and Master's in Engineering from Stevens Inst of Technology in Hoboken NJ. I tutor students in all areas of math- trig, geometry, algebra, calculus, chemistry, and also
physics. I enjoy math and sciences and would love to help you get a better understanding of the sciences.
31 Subjects: including organic chemistry, ACT Science, chemical engineering, mechanical engineering
Related Lynn, MA Tutors
Lynn, MA Accounting Tutors
Lynn, MA ACT Tutors
Lynn, MA Algebra Tutors
Lynn, MA Algebra 2 Tutors
Lynn, MA Calculus Tutors
Lynn, MA Geometry Tutors
Lynn, MA Math Tutors
Lynn, MA Prealgebra Tutors
Lynn, MA Precalculus Tutors
Lynn, MA SAT Tutors
Lynn, MA SAT Math Tutors
Lynn, MA Science Tutors
Lynn, MA Statistics Tutors
Lynn, MA Trigonometry Tutors
Nearby Cities With Science Tutor
Beverly, MA Science Tutors
Boston Science Tutors
Brookline, MA Science Tutors
Cambridge, MA Science Tutors
Chelsea, MA Science Tutors
Everett, MA Science Tutors
Malden, MA Science Tutors
Nahant Science Tutors
Peabody, MA Science Tutors
Revere, MA Science Tutors
Roxbury, MA Science Tutors
Salem, MA Science Tutors
Saugus Science Tutors
Somerville, MA Science Tutors
Swampscott Science Tutors | {"url":"http://www.purplemath.com/lynn_ma_science_tutors.php","timestamp":"2014-04-17T00:53:02Z","content_type":null,"content_length":"23889","record_id":"<urn:uuid:c053e98a-bf55-4a77-9334-6fa84a4abfe0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum information processing based on cavity QED with mesoscopic systems
• Introduction: Recent developments in quantum communication and computing [1-3] stimulated an intensive search for physical systems that can be used for coherent processing of quantum information.
It is generally believed that quantum entanglement of distinguishable quantum bits (qubits) is at the heart of quantum information processing. Significant efforts have been directed towards the
design of elementary logic gates, which perform certain unitary processes on pairs of qubits. These gates must be capable of generating specific, in general entangled, superpositions of the two
qubits and thus require a strong qubit-qubit interaction. Using a sequence of single and two-bit operations, an arbitrary quantum computation can be performed [2]. Over the past few years many
systems have been identified for potential implementations of logic gates and several interesting experiments have been performed. Proposals for strong qubit-qubit interaction involve e.g. the
vibrational coupling of cooled trapped ions [4], near dipole-dipole or spin-spin interactions such as in nuclear magnetic resonance [5], collisional interactions of confined cooled atoms [6] or
radiative interactions between atoms in cavity QED [7]. The possibility of simple preparation and measurement of qubit states as well as their relative insensitivity to a thermal environment
makes the latter schemes particularly interesting for quantum information processing. Most theoretical proposals on cavity-QED systems focus on fundamental systems involving a small number of
atoms and few photons. These systems are sufficiently simple to allow for a first-principle description. Their experimental implementation is however quite challenging. For example, extremely
high-Q micro-cavities are needed to preserve coherence during all atom-photon interactions. Furthermore, single atoms have to be confined inside the cavities for a sufficiently long time. This
requires developments of novel cooling and trapping techniques, which is in itself a fascinating direction of current research. Despite these technical obstacles, a remarkable progress has been
made in this area: quantum processors consisting of several coupled qubits now appear to be feasible. | {"url":"https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1215","timestamp":"2014-04-18T06:30:51Z","content_type":null,"content_length":"23760","record_id":"<urn:uuid:c29ec42c-ea0e-443c-8acf-cb9e0683aefd>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
help with this diff. eq. pls
January 12th 2010, 01:57 AM #1
Junior Member
Aug 2009
help with this diff. eq. pls
i have (dx/dt)^2 = (1/x^4) - 1 and want to write an integral for t(x)...
(dt/dx) = +/- {x^2/[(1-x^4)^(1/2)]}
so t(x) = +/- integr of {x^2/[(1-x^4)^(1/2)]} dx
but the book says t = integr (from u=1 to u=x) of {u^2/[(1-u^4)^(1/2)]} du
can someone please help me see what the book has done here? thanks a lot!
i have (dx/dt)^2 = (1/x^4) - 1 and want to write an integral for t(x)...
(dt/dx) = +/- {x^2/[(1-x^4)^(1/2)]}
so t(x) = +/- integr of {x^2/[(1-x^4)^(1/2)]} dx
but the book says t = integr (from u=1 to u=x) of {u^2/[(1-u^4)^(1/2)]} du
can someone please help me see what the book has done here? thanks a lot!
Please post the entire question, including the initial condition that came with the DE (I assume it was x = 1 when t = 0. But we shouldn't have to assume.)
Also, note that if $\frac{dt}{dx} = f(x)$ subject to the boundary condition $x = x_0$ when $t = t_0$ then $t(x) = t_0 + \int_{x_0}^{x} f(u) \, du$.
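For this problem, with $f(u) = \frac{u^2}{\sqrt{1-u^4}}$ (taking the positive branch), and with $x_0 = 1$ and $t_0 = 0$ as assumed above, this rule gives exactly the book's expression: $t(x) = \int_{1}^{x} \frac{u^2}{\sqrt{1-u^4}} \, du$.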
January 12th 2010, 04:22 AM #2 | {"url":"http://mathhelpforum.com/differential-equations/123381-hlep-diff-eq-pls.html","timestamp":"2014-04-17T08:37:36Z","content_type":null,"content_length":"34153","record_id":"<urn:uuid:ab20e57e-ec91-4869-b629-5b8fbaf4d31c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00235-ip-10-147-4-33.ec2.internal.warc.gz"} |
free software calculator
...EngCalc is a new calculating
with extended capabilities. This is an essential application...and engineering student. The main advantage of the
is a simple input format even for the...to navigate through. The compact size of the
does not hinder the performance. To the contrary,...distributed as shareware, meaning you can download a
trial version to test the program extensively before... | {"url":"http://www.freevistafiles.com/search/free_software_calculator/","timestamp":"2014-04-19T07:29:43Z","content_type":null,"content_length":"39008","record_id":"<urn:uuid:70f7b7e3-15aa-4007-9691-fee1127c8c55>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
About this Model
The purpose of this Web site is to provide students and their teachers with an appropriate and authentic resource with which to study the effect of soil types, ground cover, and other variables on
the amount, or quantity, of surface water runoff. The amount of runoff for differing conditions can be modeled through the use of the information on these pages and the "runnable", or interactive,
model that accompanies these materials. The model allows for the modification of such variables as soil type, ground cover type, and rain duration in order for the user to obtain a greater
understanding of the quantity of water runoff and the effect of different soil and cover types on the water runoff.
Why is such a model appropriate and authentic? The interactive model provided here is appropriate because it uses the right technology -- the study of science through its representation in
mathematical language and implemented on a computer -- to understand a complex problem that is exceptionally difficult to study "in the field." Most schools, students, and educators do not have the
resources or expertise to conduct meaningful studies of surface water runoff in their communities. It is appropriate because a solid understanding of the scientific, economic, and political impact of
surface water runoff is critical for all citizens. This is especially important for states like North Carolina, where water runoff has significant implications for farmers, homeowners, and those who
are concerned about the quality of bodies of water such as the Neuse River. The model is authentic because the science and mathematics that provide the foundation of the model are the exact same as
those used by research scientists. The equations and values used in this model come from the U.S. Department of Agriculture Soil Conservation Service (SCS). All of the equations for this model come
from Publication 210-VI-TR-55, Second Edition, 1986. The use of these equations and parameters allows us to put a scientifically-accepted computational tool into the hands of "young" scientists and
their teachers. While the scenarios may be developed for educational purposes, the mathematical and computational tools are authentic.
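As a concrete illustration of the kind of equation involved, the core of TR-55 is the SCS curve-number relation, which converts a precipitation depth P (inches) and a tabulated curve number CN (encoding soil type and ground cover) into a runoff depth Q. The short Python sketch below implements that published relation; the sample CN value is an illustrative assumption, not a value taken from this site's model.

def scs_runoff(precip_in, curve_number):
    # SCS (TR-55) curve-number method:
    #   S = 1000/CN - 10                 (potential maximum retention, inches)
    #   Q = (P - 0.2S)^2 / (P + 0.8S)    when P > 0.2S, else Q = 0
    s = 1000.0 / curve_number - 10.0
    initial_abstraction = 0.2 * s
    if precip_in <= initial_abstraction:
        return 0.0  # all rainfall is retained; no surface runoff
    return (precip_in - initial_abstraction) ** 2 / (precip_in + 0.8 * s)

# Example: 3 inches of rain on a soil/cover combination with CN = 75
print(scs_runoff(3.0, 75))  # about 0.96 inches of runoff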
The model uses the conceptual framework of computational science -- application, algorithm, and architecture. The application refers to the scientific problem of interest and the components of that
problem that we wish to study and/or include. Like the three parts of the "fire triangle" -- a source of fuel, a source of a spark, and oxygen -- computational models require an
understanding and an implementation of application, algorithm, and architecture. The model is also defined by what we know from theory, from experiment, and from computation. Old and new theories suggest experiments,
which suggest computations, which suggest refinements to the theory and ideas for new experiments:
Computational science is an important method for teaching and learning science. Many of the really interesting events of science are those that are difficult to study experimentally because they:
• occur too quickly (such as molecular interactions in chemistry)
• occur too slowly (such as population dynamics)
• are too costly to replicate in the laboratory (wind tunnel modeling)
• are too dangerous (rapid combustion experiments)
With computational science, not only can the events be simulated, but also the experimental variables can be modified and the event can be re-enacted to observe the effect. This is an exciting and
empowering hands-on process of learning that gives the students rapid feedback on their experiments and helps to develop scientific intuition. Computational science has been defined in many ways. We
define it as the correct and efficient match of application, algorithm, and architecture which enables one to do science or engineering on a computer. The application is simply a concise statement of
the problem to be solved. Computational methods are being applied to all areas of study, not just the physical sciences. Typical applications include the changing of voting habits, effects of
atmospheric pollutants, the formation of galaxies, the dynamics of world economies, the erosion of beaches, or the modes of vibration in a molecule. The use of models in the political and social
sciences is becoming increasingly important at all levels. In short, appropriate applications cross all traditional boundaries and disciplines.
Once we have an application to be studied, we must translate that problem into a mathematical format, known as the algorithm.
The final part of the model is the architecture. The mathematical representation (the algorithm) must be converted into some appropriate computer code using some programming language, and then be run
on some piece of hardware. A big challenge in learning to do computational science is choosing the appropriate tools for a particular problem. One does not need to know FORTRAN (a high-level
programming language) that runs on an expensive supercomputer to simulate a simple flu epidemic! The student needs to begin to understand the strengths and weaknesses of various computational tools,
and the impact they play on answering the key question that all computational scientists ask:
How do we know the model is right? | {"url":"http://www.shodor.org/master/environmental/water/runoff/about.html","timestamp":"2014-04-21T12:11:28Z","content_type":null,"content_length":"8665","record_id":"<urn:uuid:0967732d-b06f-4fd8-931b-03e1dcb78354>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electromagnetic Four-Potential
This page will introduce the Four-Potential and the Four-Current notations, as well as the d'Alembertian, which is used when studying these topics under the theoretical framework of Special
relativity. These constructs, while a little confusing for some people, are fundamental to the way in which modern physicists study electric and magnetic fields, and therefore are worth learning.
These topics may be introduced a little early here, although this chapter will not be rigorous. Instead, we will provide some common results here, and explain them throughout the next few chapters.
Potentials, As They Stand
Electric Potential (V), and Magnetic Vector Potential (A) are given by:
$\mathbf{E} = -\nabla V$
$\mathbf{B} = \nabla \times \mathbf{A}$
Using Gauss's Law for Electricity:
$\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0}$
and then substituting E with the negative gradient of V, we see that:
$\nabla \cdot \nabla V = -\frac{\rho}{\epsilon_0}$
$\nabla^2 V = -\frac{\rho}{\epsilon_0}$
And from Coulomb's Law we see that:
$V = \frac{1}{4 \pi \epsilon_0} \int \frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \, d\tau'$
Using a similar technique in Magnetostatics gives:
$\nabla^2 \mathbf{A} = -\mu_0 \mathbf{J}$ and
$\mathbf{A} = \frac{\mu_0}{4 \pi} \int \frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \, d\tau'$
Four Potentials and Four Currents
We define a vector called the Four Potential as:
$A^\mu = \left(\frac{V}{c}, \mathbf{A}\right) = \left(\frac{V}{c}, A_x, A_y, A_z\right)$
And another called the Four Current, which instead of V has ρ, the charge density, and instead of A, has J the current density:
$J^\mu = (c\rho, \mathbf{J}) = (c\rho, J_x, J_y, J_z)$
Where in each case c is the speed of light.
Both the Four-Potential and the Four-Current are vectors with four scalar values. What each of these values represents will be made clear at a later point.
Bringing it Together
You may have noticed that the equations for A with J and for V with ρ have exactly the same form, and that the Four Potential and Four Current contain precisely these quantities. We can combine the two equations into a single one:
$\nabla^2 A^\mu = -\mu_0 J^\mu$
$A^\mu = \frac{\mu_0}{4 \pi} \int \frac{J^\mu(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \, d\tau'$
The rules of relativity state that a current cannot produce a magnetic field at a distance instantaneously. The effects of the current may travel at the speed of light at the fastest, and the
field may not change any faster than that. If you've never looked at Special relativity before, this may be a good time to do so.
The d'Alembertian
We obviously need some term which compensates for the fact that nothing travels faster than light. So far our Laplacian operator looks like this:
$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$
But now we need to bring it into Minkowski Space (A way of describing Relativistic Space). We can notice that the Laplacian cannot be applied to a four vector, and that the Laplacian is not invariant
under Lorentz Transformations. To correct this we use the d'Alembertian. We define the d'Alembertian as such:
$\Box = \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} - \frac{1}{c^2} \frac{\partial^2}{\partial t^2}$
The d'Alembertian reduces to the Laplacian (∇^2) if we aren't worried about time dependence, or relativity. There are a number of different ways to denote the d'Alembertian, depending on what text
you read. Here are a number of different methods that are used in common texts:
$\Box = \Box^2 = \nabla_\mathbf{M} = \partial_i \partial^i = \partial^2 = \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2}$
This wikibook will use the $\Box$ notation for the d'Alembertian, for simplicity and ease of authoring.
We end up with our equation in terms of the d'Alembertian, as such:
$\Box A^\mu = -\mu_0 J^\mu$
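As a quick consistency check (a short derivation added here, not part of the original derivation above): in the static case the time derivatives vanish, so for the time component, using $c^2 = 1/(\mu_0 \epsilon_0)$,

$\Box A^0 = \nabla^2 \frac{V}{c} = -\mu_0 (c \rho) \quad \Rightarrow \quad \nabla^2 V = -\mu_0 c^2 \rho = -\frac{\rho}{\epsilon_0}$

which recovers the electrostatic equation from the start of the page.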
We also have to bring in a time term into the integral form of the equation. It becomes:
$A^\mu = \frac{\mu_0}{4 \pi} \int \frac{J^\mu(r',t')}{|r - r'|} d^3 r'$
{"url":"http://en.m.wikibooks.org/wiki/Electrodynamics/Four-Vectors","timestamp":"2014-04-20T03:18:38Z","content_type":null,"content_length":"21053","record_id":"<urn:uuid:f6167a1a-f1cd-44cb-a609-76a07f65cde0>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Euclid with Birkhoff
I'm looking for a short and elementary book which does Euclidean geometry with Birkhoff's axioms.
It would be best if it would also include some topics in projective (and/or) hyperbolic geometry.
About the course: the students are supposed to know some basic calculus, but they have not seen real proofs.
Most of the students in my course want to become math teachers. The course description says: "Euclidean and Hyperbolic geometries and their development from postulate systems".
I chose Birkhoff's axioms since they use real numbers as a building block. This makes it possible to do the intro without cheating and without boring details. I know some good books for school students, but I am looking for something a bit more advanced.
P.S. I want to thank everyone for comments and answers.
As I stated in the comments, I did not find an appropriate book and wrote the lecture notes myself. They are available on arXiv, here is the link: Euclidean and Hyperbolic Planes.
books reference-request euclidean-geometry mathematics-education
1 Out of curiosity: what kind of students will you have? – Mariano Suárez-Alvarez♦ May 11 '11 at 1:49
2 Out of curiosity: presumably you made a choice and taught the course by now. What did you do, and how did it go? – Gerry Myerson May 11 '11 at 6:04
4 Anton, if your notes are available online I am sure more than a few of us would enjoy seeing them. – roy smith May 14 '11 at 23:30
Could you say more about who the students were? Were these undergraduate math majors (who want to become teachers)? Did they learn to do proofs? (Teaching students how to do proofs is something I
seem to be unable to do at all.) – Deane Yang May 15 '11 at 4:04
Must it be CW? – SNd May 15 '11 at 15:05
6 Answers
Have you taught this course before? After teaching it several times from Millman/Parker and other materials using Birkhoff's axioms, I suggest you consider using Euclid himself plus
Hartshorne's guide, Geometry: Euclid and beyond, which uses a form of Hilbert's axioms.
The problem for me is that real numbers are much more sophisticated than Euclidean geometry, and the Birkhoff approach is thus a bit backwards except for experts like us who know what real
numbers are.
When we covered as much of Millman/Parker as we could manage, the most enjoyable part for the class was the section on neutral geometry, which I learned recently was lifted bodily from
Euclid Book I.
If you like assuming that every line in the plane is really the real numbers R, what about going the rest of the way and assuming the plane itself is R^2? Then you can use matrices to define
rigid motions and do a lot that connects up to their calculus courses.
Moise is more succinct than the 500 pages suggests as I recall, and is an excellent text from a mathematician's standpoint, but very forbidding probably from a student's. I noticed Moise
went from 1.4 to 1.9 pounds from 1st edition to third so maybe the first is also 25% shorter.
The old SMSG books in the 1960's were based on Birkhoff's approach, but are not short. They are also available free on the web.
I just looked at the old SMSG book and found the following circular sort of discussion of real numbers: "if you fill in all those other non rational points on the line, you have the real numbers".
Clint McCrory spent several years developing his own course using Birkhoff's approach at UGA, and made it very successful. Here is a link to his course page. The students loved his class at
least in its evolved form after a couple years. they especially appreciated the GSP segment at the start. Apparently many students had little geometric intuition and used that to acquire
some. Clint apparently never found an appropriate book to use though.
After teaching this course myself from Greenberg, Millman/Parker, Clemens, supplemented by Moise, and the original works of Saccheri, my own Birkhoff axioms, I finally found Euclid and
Hartshorne to be my favorite, by a large margin.
But the beauty of the topic is that there is no perfect choice. You will likely enjoy the search for your favorite too. There is a reason however that Euclid has the longevity it has.
When I last taught the course from Euclid, we kept a diary of axioms we thought were needed for his proofs and compared them to Hilbert's. The main difference was, since our list added
tacit assumptions Euclid had made, we had existence of rigid motions as an axiom, whereas Hilbert had SAS congruence, whose proof used that principle, as an axiom. We were also not as
precise on separation properties needed. However Hilbert's own separation axioms were not independent as he claimed. As Robin H. pointed out the several editions of Hilbert mean there is
no definitive list of Hilbert's axioms. – roy smith May 14 '11 at 23:40
I am not sure this qualifies under short and elementary but I recommend:
Geometry: A Metric Approach with Models
Richard Millman and George Parker; Springer-Verlag, NY, 1981
(there may be a more recent edition, I did not check).
This book does treat hyperbolic geometry.
2 I checked this book --- it will definetely scary any student... – Anton Petrunin Jul 6 '10 at 13:17
If you're willing to use an unpublished manuscript, from the little I've looked at it, this book by Matthew Harvey looks pretty good. However, he uses Hilbert's axioms rather than Birkhoff's. Jack Lee at the University of Washington is writing another book, using a variant on the SMSG postulates, designed for a geometry course for math majors who are considering teaching high school. His book spends several chapters on hyperbolic geometry, but doesn't have any projective geometry. The book is not publicly available, but you could email him and ask him about it.
How about Basic Geometry and Basic Geometry - Manual for Teachers by George D. Birkhoff and Ralph Beatley?
In the spring of 1923, Professor Birkhoff was invited to deliver in Boston a series of Lowell Lectures on Relativity. In order to present this subject with as few technicalities as possible he decided to devise the simplest possible system of Euclidean geometry he could think of, and... he hit upon the framework of the system that, with all the details filled in, is now BASIC GEOMETRY.
1 This book does not treat the hyperbolic plane. Another advantage of Millman and Parker, Geometry: A Metric Approach with Models is that it has a much more modern flavor treating things
like the taxicab plane, the Moulton plane, etc. – Joseph Malkevitch Jul 5 '10 at 14:37
Joseph, I haven't read the book you recommend. I do not claim at all that it might be inferior in any sense to the book by Birkhoff and Beatley. – Andrey Rekalo Jul 5 '10 at 14:43
1 Thank you. I checked the book, it is written for school students, a lot of motivation, but it does not go far enough for me (even in Euclidean geometry). – Anton Petrunin Jul 6 '10 at
Chapter 14 of Prenowitz and Jordan, Basic Concepts of Geometry, begins, "In this chapter a tentative treatment of congruence is given based on a proposal of G. D. Birkhoff (1884-1944) that
the real number system should be assumed in the treatment of Euclidean geometry at an elementary level. Birkhoff's development was modified and simplified by the School Mathematics Study
Group. Our treatment is an adaptation of theirs and assumes a modification of their Ruler Postulate employed by MacLane."
References are given to Birkhoff and Beatley, to the School Mathematics Study Group textbook Geometry (Yale U. Press, 1961), and to S. MacLane, Metric postulates for plane geometry,
American Mathematical Monthly 66 (1959) 543-555.
There is no discussion of projective or hyperbolic geometry in Chapter 14 (but there is an extensive discussion of hyperbolic geometry in earlier chapters).
You might also be interested in Moise, Elementary Geometry From An Advanced Standpoint. In Chapter 8 you find out that the way he has been presenting plane geometry "is not the classical
one. It was proposed in the early 1930's by G. D. Birkhoff, and has only recently become popular." In later chapters he does hyperbolic geometry, but I don't know whether he follows the
Birkhoff path. I see no mention of projective geometry in this book.
1 Moise, to me, is the definitive work on Euclid from a metric standpoint. Millman and Parker is a good alternative, but not as good as Moise. Frankly, I think their elementary differential geometry text is MUCH better and it's a tragedy it's out of print. – Andrew L Jul 6 '10 at 7:19
Yes, Moise is quite good. – Anton Petrunin May 10 '13 at 21:42
Geometry: Plane and Fancy doesn't exactly fit what you describe (it moves on to the fancy stuff a lot sooner than 2/3 of the way through), but might nevertheless be worth a look.
Yes, "it moves on to the fancy stuff a lot sooner"... – Anton Petrunin Jul 6 '10 at 16:03
{"url":"http://mathoverflow.net/questions/30620/euclid-with-birkhoff","timestamp":"2014-04-18T14:04:48Z","content_type":null,"content_length":"88464","record_id":"<urn:uuid:0bd8abc0-366a-4af9-8640-6fe8ac355500>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Strange object position after IFFT2
Replies: 3 Last Post: Dec 27, 2012 6:39 PM
Re: Strange object position after IFFT2
Posted: Dec 27, 2012 4:56 PM
"Wojtek" wrote in message <kbi4o4$rbn$1@newscl01ah.mathworks.com>...
> I think I start to understand where the problem is. The whole thing concentrates in this simple piece of code, which I prepared as an example:
> clear all
> clc
> a = [zeros(1,412) ones(1,200) zeros(1,412)];
> adft = fftshift(fft(a)) ;
> my_spectrum = zeros(1024,1024);
> for k=1:1024
> my_spectrum(k,k) = (adft(k)) ;
> end
> recon = (ifft2(ifftshift(my_spectrum))) ;
> imagesc(abs(recon)) ;
> I calculated the 1D FFT of a square function. I placed it in the diagonal of my empty spectrum. The result (the reconstructed image "recon") should be (in my opinion) one square function going
> diagonally through the center. So why, as a result, do I get two rectangular functions - not one?
First of all, the FT of a rectangular pulse is a sinc function, not another rectangular pulse. Then you're taking just one sample from that 1D FFT signal and copying it to the output spectrum, but
that sample that you're taking out of the sinc varies line by line. I have no idea why you'd want to do this. Then you're taking the ifft2 of a shifted spectrum and I have no idea why. The fftshift
is mainly to make things look good for display and you really need good bookkeeping if you're going to start FFT'ing a shifted version. Normally you don't, unless it makes filtering easier, but in
your case it doesn't. Why don't you post in the Answers forum and Wayne or Greg can explain better to you why your algorithm is messed up. I can't even figure out what this is supposed to do - I
think you might have been better off just leaving your first question since this code just seems weird and
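To see the first point concretely -- that the transform of a rectangular pulse is a sampled sinc, not another rectangle -- here is a small sketch, written in Python/NumPy rather than MATLAB, with array sizes matching the thread's example:

import numpy as np

a = np.zeros(1024)
a[412:612] = 1.0                      # 200-sample rectangular pulse
adft = np.fft.fftshift(np.fft.fft(a))

# The magnitude spectrum is a sampled (periodic) sinc: one tall main
# lobe at the center surrounded by decaying, oscillating side lobes.
mag = np.abs(adft)
print(mag.argmax())                   # 512: the DC bin after fftshift
print(mag[512])                       # 200.0: the sum of the pulse samples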
Date Subject Author
12/27/12 Strange object position after IFFT2 Wojtek
12/27/12 Re: Strange object position after IFFT2 Wojtek
12/27/12 Re: Strange object position after IFFT2 ImageAnalyst
12/27/12 Re: Strange object position after IFFT2 Wojtek | {"url":"http://mathforum.org/kb/message.jspa?messageID=7944337","timestamp":"2014-04-17T05:17:05Z","content_type":null,"content_length":"21262","record_id":"<urn:uuid:c8749082-4b22-460b-a9a5-c2b04f997de3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
maths iq test question for class 10th
Author Message
Bnoiprens Posted: Wednesday 27th of Dec 11:11
I am in urgent need of help in completing a project in maths iq test question for class 10th. I need to finish it by next week and am having a tough time trying to figure out a few
tricky problems. I tried some of the internet help sites but have not gotten much help so far. I would be really grateful if anyone can help me.
kfir Posted: Wednesday 27th of Dec 16:51
How about giving a little more particulars of what exactly is your difficulty with maths iq test question for class 10th? This would aid in finding out ways to search for an answer.
Finding a coach these days quickly enough and that too at a charge that you can pay for can be a frustrating task. On the other hand, these days there are programs that are available
to assist you with your math problems. All you require to do is to go for the most suited one. With just a click the correct answer pops up. Not only this, it helps you to arriving at
the answer. This way you also get to find out how to get at the right answer.
From: egypt
Dxi_Sysdech Posted: Friday 29th of Dec 07:31
Algebrator is one handy tool. I don’t have much interest in math and have found it to be difficult all my life. Yet one cannot always leave math because it sometimes becomes a
compulsory part of one’s course work. My younger brother is a math expert and I found this software in his laptop. It was only then I understood why he finds this subject to be so
From: Right
here, can't you
see me?
resx` Posted: Friday 29th of Dec 19:14
Can a program really help me excel my math? Guys I don’t need something that will solve problems for me, instead I want something that will help me understand the concepts as well.
Gog Posted: Sunday 31st of Dec 08:06
You can buy it from http://www.easyalgebra.com/point-1.html. I don’t think there are too many specific software requirements; you can just download and start using it.
From: Austin,
thicxolmed01 Posted: Monday 01st of Jan 13:49
A truly piece of algebra software is Algebrator. Even I faced similar difficulties while solving system of equations, exponential equations and perpendicular lines. Just by typing in
the problem from homework and clicking on Solve – and step by step solution to my algebra homework would be ready. I have used it through several math classes - Basic Math, Algebra 1
and Pre Algebra. I highly recommend the program.
From: Welly, NZ | {"url":"http://www.easyalgebra.com/elementaryalgebra/binomials/maths-iq-test-question-for.html","timestamp":"2014-04-19T11:57:12Z","content_type":null,"content_length":"23511","record_id":"<urn:uuid:5984eea1-f8ee-4419-857a-866aa90bb3e9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
Methods and apparatus for interpreting measured laboratory data - Patent # 6292761
Inventor: Hancock, Jr.
Date Issued: September 18, 2001
Application: 09/145,999
Filed: September 2, 1998
Inventors: Hancock, Jr.; William Franklin (Burlington, NC)
Primary Examiner: Shah; Kamini
Attorney, Agent or Firm: Myers Bigel Sibley & Sajovec
U.S. Class: 345/440; 702/189; 702/67; 73/23.2
Field Of Search: 702/189; 702/67; 702/19; 702/121; 702/176; 702/183; 345/440; 346/33ME; 73/23.36; 73/23.2; 73/23.35
U.S. Patent Documents: 3855459; 4527240; 5007283; 5371694; 5541854; 5545895; 5616504; 5619428; 5734591
Foreign Patent Documents: 0 753 283 A1
Other References: PCT International Search Report, PCT International Application No. PCT/US99/20125 (Dec. 16, 1999).
Brochure, Normal distribution (mu,sigma), http://www.cs.uni.edu/~campbell/stat/z-score.html (1998).
Brochure, The International Temperature Scale of 1990 (ITS-90), Omega Engineering, Inc., Omega.com (1996).
Christensen, Introduction to Statistics, p. 149 (1992).
Abstract: Systems, methods, and computer program products for determining relative normalcy and abnormalcy of a plurality of test results are provided. Test results are transformed into
respective unitized values and then graphically displayed with a unitized reference range. An analytical variation of each of the respective unitized values may be determined and displayed.
Claim: That which is claimed is:
1. A method of determining relative normalcy and abnormalcy of a plurality of test results, wherein the test includes a reference range of normal test results associated therewith, the
method comprising the following steps that are performed in a data processing system:
unitizing singularly possible normal test results in the normal reference range to a single number, wherein the normal reference range is bounded by upper and lower values of
normalcy, wherein a single fractional value of each normal value in the normal reference range is equal to a single fractional value of every other normal value in the normal reference
range, and wherein a sum of all fractional values equals the single number;
determining a total number of singularly possible normal test results within respective halves of the normal reference range, comprising:
converting singularly possible normal test results in the normal reference range having a decimal value to a whole number;
determining a total number of singularly possible test results within the normal reference range to produce a normal reference range spread, comprising:
subtracting the lower value of normalcy from the upper value of normalcy; and
adding an integer to the value obtained by subtracting the lower value of normalcy from the upper value of normalcy; and
dividing the normal reference range spread in half;
transforming each of the plurality of test results into respective equilibrated values, wherein each equilibrated value represents relative position of a respective test result with
respect to a mean of the normal reference range so as to yield numerically like data values when the data values are equally less than or greater than the respective lower and upper
values of normalcy;
transforming each of the equilibrated values into respective unitized values, wherein each unitized value represents relative normalcy or abnormalcy of a respective test result with
respect to the upper and lower values of normalcy of the normal reference range; and
graphically displaying each of the unitized values with the unitized reference range and a unitized analytical variation, wherein each test is displayed on a single line with each
test referenced to a single uniform reference range for all of the tests, wherein the single uniform reference range comprises the single number.
2. A method according to claim 1 wherein the step of transforming each of the plurality of test results into respective equilibrated values comprises the steps of:
converting each test result having a decimal value to a whole number value;
determining a mean of the normal reference range for each of a plurality of tests; and
determining a difference between the respective mean and each of the plurality of test results.
3. A method according to claim 1 wherein the step of transforming each of the equilibrated values into respective unitized values comprises multiplying each respective equilibrated
value with a fractional value of the plurality of normal test results in one-half of the normal reference range for each of a plurality of tests.
4. A method according to claim 3 wherein the fractional value of the plurality of test results for each of a plurality of tests comprises a reciprocal of one-half of the total number
of singularly possible test results in the normal reference range.
5. A method according to claim 1 further comprising the step of determining a unitized analytical variation of each of the respective unitized values, wherein a traditionally
determined analytical variation of the test is multiplied by a fractional value of the respective test of a plurality of tests.
6. A method according to claim 1 wherein the unitizing step is preceded by the step of storing a single unitizing normal reference range number, a plurality of test identifications
and associated test results, reference range spreads, halves of reference range spreads, fractional values, equivalent values, and normal reference ranges in the data processing system.
7. A data processing system for determining relative normalcy and abnormalcy of a plurality of test results, wherein the test includes a normal reference range of test results
associated therewith, comprising:
means for unitizing singularly possible normal test results in the normal reference range to a single number, wherein the normal reference range is bounded by upper and lower values
of normalcy, wherein a single fractional value of each normal value in the normal reference range is equal to a single fractional value of every other normal value in the normal
reference range, and wherein a sum of all fractional values equals the single number;
means for determining a total number of singularly possible normal test results within respective halves of the normal reference range, comprising:
converting singularly possible normal test results in the normal reference range having a decimal value to a whole number;
determining a total number of singularly possible test results within the normal reference range to produce a normal reference range spread, comprising:
subtracting the lower value of normalcy from the upper value of normalcy; and
adding an integer to the value obtained by subtracting the lower value of normalcy from the upper value of normalcy; and
dividing the normal reference range spread in half;
means for transforming each of the plurality of test results into respective equilibrated values, wherein each equilibrated value represents relative position of a respective test
result with respect to a mean of the normal reference range so as to yield numerically like data values when the data values are equally less than or greater than the respective lower
and upper values of normalcy;
means for transforming each of the equilibrated values into respective unitized values, wherein each unitized value represents relative normalcy or abnormalcy of a respective test
result with respect to the upper and lower values of normalcy ofthe normal reference range; and
graphically displaying each of the unitized values with the unitized reference range and a unitized analytical variation, wherein each test is displayed on a single line with each
test referenced to a single uniform reference range for all of the tests, wherein the single uniform reference range comprises the single number.
8. A data processing system according to claim 7 wherein the means for transforming each of the plurality of test results into respective equilibrated values comprises:
means for converting each test result having a decimal value to a whole number value;
means for determining a mean of the normal reference range for each of a plurality of tests; and
means for determining a difference between the respective mean and each of the plurality of test results.
9. A data processing system according to claim 7 wherein the means for transforming each of the equilibrated values into respective unitized values comprises means for multiplying
each respective equilibrated value with a fractional value of the plurality of normal test results in one-half of the normal reference range for each of a plurality of tests.
10. A data processing system according to claim 9 wherein the fractional value of the plurality of test results for each of a plurality of tests comprises a reciprocal of one-half of
the total number of singularly possible test results in the normal reference range.
11. A data processing system according to claim 7 further comprising means for determining an analytical variation of each of the respective unitized values.
12. A data processing system according to claim 7 further comprising means for storing a single unitizing normal reference range number, a plurality of test identifications and
associated test results, reference range spreads, halves of reference range spreads, fractional values, equivalent values, and normal reference ranges in the data processing system.
13. A computer program product for determining relative normalcy and abnormalcy of a plurality of test results, wherein the test includes a normal reference range of test results
associated therewith, the computer program product comprising a computer usable storage medium having computer readable program code means embodied in the medium, the computer readable
program code means comprising:
computer readable program code means for unitizing singularly possible normal test results in the normal reference range to a single number, wherein the normal reference range is
bounded by upper and lower values of normalcy, wherein a single fractional value of each normal value in the normal reference range is equal to a single fractional value of every other
normal value in the normal reference range, and wherein a sum of all fractional values equals the single number;
computer readable program code means for determining a total number of singularly possible normal test results within respective halves of the normal reference range, comprising:
converting singularly possible normal test results in the normal reference range having a decimal value to a whole number;
determining a total number of singularly possible test results within the normal reference range to produce a normal reference range spread, comprising:
subtracting the lower value of normalcy from the upper value of normalcy; and
adding an integer to the value obtained by subtracting the lower value of normalcy from the upper value of normalcy; and
dividing the normal reference range spread in half;
computer readable program code means for transforming each of the plurality of test results into respective equilibrated values, wherein each equilibrated value represents relative
position of a respective test result with respect to a mean of the normal reference range so as to yield numerically like data values when the data values are equally less than or
greater than the respective lower and upper values of normalcy;
computer readable program code means for transforming each of the equilibrated values into respective unitized values, wherein each unitized value represents relative normalcy or
abnormalcy of a respective test result with respect to the upper and lower values of normalcy of the normal reference range; and
graphically displaying each of the unitized values with the unitized reference range and a unitized analytical variation, wherein each test is displayed on a single line with each
test referenced to a single uniform reference range for all of the tests, wherein the single uniform reference range comprises the single number.
14. A computer program product according to claim 13 wherein the computer readable program code means for transforming each of the plurality of test results into respective
equilibrated values comprises:
computer readable program code means for converting each test result having a decimal value to a whole number value;
computer readable program code means for determining a mean of the normal reference range for each of a plurality of tests; and
computer readable program code means for determining a difference between the respective mean and each of the plurality of test results.
15. A computer program product according to claim 13 wherein the computer readable program code means for transforming each of the equilibrated values into respective unitized values
comprises computer readable program code means for multiplying each respective equilibrated value with a fractional value of the plurality of normal test results in one-half of the
normal reference range for each of a plurality of tests.
16. A computer program product according to claim 15 wherein the fractional value of the plurality of test results for each of a plurality of tests comprises a reciprocal of one-half
of the total number of singularly possible test results in the normal reference range.
17. A computer program product according to claim 13 further comprising computer readable program code means for determining an analytical variation of each of the respective unitized
18. A computer program product according to claim 13 further comprising computer readable program code means for storing a single unitizing normal reference range number, a plurality
of test identifications and associated test results, reference range spreads, halves of reference range spreads, fractional values, equivalent values, and normal reference ranges in
the data processing system.
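Before the formal description, a compact illustration may help. The transform the claims describe amounts to re-expressing each test result relative to the mean and half-spread of its own normal reference range, so that every test can be drawn against one shared scale. The Python sketch below is one reading of claims 1-4 (whole-number results assumed; per the claims, decimal-valued results would first be converted to whole numbers), not the patented implementation itself.

def unitize(result, lower, upper):
    # Map a test result onto a uniform scale where the normal
    # reference range [lower, upper] spans roughly -1 to +1.
    spread = upper - lower + 1        # count of singularly possible results
    half_spread = spread / 2.0
    mean = (lower + upper) / 2.0
    equilibrated = result - mean      # position relative to the range mean
    return equilibrated / half_spread # the "unitized" value

# Using Table 1 below: both results sit 2 units under their upper limits,
# yet glucose is relatively much closer to abnormal than sodium.
print(unitize(145, 136, 147))  # sodium:  about 0.58
print(unitize(111, 68, 113))   # glucose: about 0.89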
Description: FIELD OF THE INVENTION
The present invention relates generally to data analysis and, more particularly, to reporting and comparison of data analysis results.
BACKGROUND OF THE INVENTION
The method of reporting numerical laboratory test data, such as biological laboratory tests, has essentially remained unchanged since its modern inception, beginning in the first half
of the twentieth century. The traditional method includes reporting a measured value (i.e., a test result) and its relevant set of normal values, known as a reference range. It is
often inadequate to report only a measured value because different tests may have different respective reference ranges. Generally, all reference ranges include a set of two values
with one value designated as an upper reference range limit and another designated as a lower reference range limit.
In the last quarter of the twentieth century the number of available laboratory tests has risen prodigiously and there are now many hundreds of numerically reported tests, each
continuing to have its own unique set of reference ranges. This marked proliferation of data has offered an interpreter of the data an abundant variety of tests from which to conduct
physiological as well as disease investigations. However, the sheer volume of available tests has also contributed to information overload. An interpreter typically attempts to
remember hundreds of reference ranges when evaluating test data. For example, a single composite tabular listing of lab results on one biological entity can include forty or more
tests, all of which may have different reference ranges.
Another aspect of the interpretation and application of measured biological laboratory data is the observation that a test value that falls within the reference range has variable
significance depending on whether the measured value is near the upper limit, the lower limit, or the mean value of the reference range. The relative significance of a test has to be
qualitatively assessed and committed to memory because it is not typically quantified on the traditional report. If multiple tests are simultaneously reported, an interpreter of the
test data typically tries to retain in his/her memory the relative position of each measured value and make qualitative interpretive decisions among the tests utilizing mentally
calculated relative positions in the reported test data. For example, one test may have a measured value two points below the upper reference range value and another test may have a
measured value eight points below the upper reference range value. The interpreter may wish to know if one of these tests is at more risk for being abnormally elevated than the other
test. A qualitative evaluation may be required because the number of points in the reference range for each of these tests may be different. The relative closeness of one value to the
upper reference range (or the lower reference range for that matter) may be dependent on the number of units in the reference range. Table 1 below illustrates this situation.
TABLE 1
Test          MV = -2    MV = -8    Reference Range
Sodium        145        139        136-147
Glucose       111        105        68-113
Cholesterol   198        192        100-200
In Table 1, the second column (MV=-2) indicates a measured value two numbers less than the upper limit of the reference range. The third column (MV=-8) indicates a measured value
eight numbers less than the upper limit of the reference range. Each of the six measured values (MVs) in Table 1 are considered normal values because each lies within the reference
range for a respective test. When measured values are viewed in the format of Table 1, which resembles traditional reporting formats, it may be difficult to determine which measured
value is relatively greater than, or less than, any other measured value.
Consequently, if measured values that fall within a reference range are to be compared among the many different tests, then an interpreter should perform a qualitative analysis on
each test and retain this information in memory for each test. If this type of mental calculation is not performed, then refinement in the application of measured values may not be
possible and diagnostic information may be lost.
The concept of relative normalcy of a measured value that falls within a reference range is also applicable to measured abnormal values that are above or below a reference range. The
same qualitative mental assessment is involved in determining the relative abnormalcy of an abnormal value. An example of this would be to determine whether a liver function test that is elevated ten points above the upper reference range is as qualitatively elevated as another liver function test that is also ten points above the upper reference range. Since these
two tests may indicate different parts of the liver, it is reasonable to ask whether one part of the liver is more diseased than the other. This process of test comparison may become
even more complex when the interpreter is attempting to assess a panel of many tests that relate to different organs or different diseases. This type of analysis is generally referred
to as multiparametric analysis.
There have been attempts to present multiparametric test data from biological entities in a non-traditional format in order to enhance an interpreter's perception of inter-test
relationships and abnormal values. For example, U.S. Pat. No. 4,527,240 to Kvitash describes a process whereby measured patient values are transformed to units referred to as "Balascopic" units. Unfortunately, in data analysis according to Kvitash, the Balascopic units are plotted on an axial graph. These axial graphs may be somewhat difficult to use. Furthermore, the Balascopic process of Kvitash does not distinguish between test data reported as whole integers and decimals. Consequently, interpretative decisions that are made based on decimal values may be difficult to make with the Kvitash process. Another drawback of the Kvitash process is that it does not provide analytical variation associated with each
measured value.
U.S. Pat. No. 5,541,854 to Yundt describes displaying conventional multi-level hematology quality control data (three levels) in a complex graphic form. Yundt is concerned with the
presentation of tri-level quality control data and not with the presentation of measured unknown samples.
Statistical methods utilizing "Z scores" to specify the relative frequency or probability of a random number in a normally distributed set of measurements are known. Unfortunately, Z
scores are somewhat difficult to use to identify the relative value of one test result to another. Furthermore, Z score techniques are somewhat limited because data beyond the maximum
and minimum limits of normal distribution cannot be used.
SUMMARY OF THE INVENTION
In view of the above discussion, it is an object of the present invention to reduce much of the complexity associated with interpreting laboratory test data.
It is another object of the present invention to facilitate determining relative relationships of measured values from laboratory tests to respective reference ranges.
These and other objects are provided by systems, methods, and computer program products for determining relative normalcy or abnormalcy of a plurality of test results, by transforming
test results into respective unitized values and then graphically displaying each of the unitized values with a unitized reference range. Additionally, a unitized analytical variation
of each of the respective unitized values may be determined and displayed.
Initially, a reference range for a test is unitized to a single number. A total number of possible test results within respective equal halves of the reference range is then
determined. This is accomplished by determining a total number of possible test results within the reference range to produce a reference range spread, and then dividing the reference
range spread in half.
The fractional value of the plurality of test results in the respective halves of the reference range is then determined. The fractional value of the plurality of test results
comprises a reciprocal of one-half of the total number of possible test results in the reference range. Each of the plurality of separately determined test results is then transformed into respective equilibrated values. This is accomplished by determining the mean of the reference range and then determining a difference between the mean and each of the plurality of
test results. Each equilibrated value represents relative position of a respective test result with respect to a mean of the reference range.
Each of the equilibrated values is then transformed into respective unitized values by multiplying each respective equilibrated value with a respective fractional value of the
plurality of test results in one half of the reference range. Each unitized value represents relative normalcy or abnormalcy of a respective test result with respect to the unitized
reference range.
According to the present invention, unitized values with the same numerical value indicate the same quantitative variation from any reference point(s) within the unitized reference
range. According to the present invention, a unitized value of 1.5 for a glucose level within a patient and a unitized value of 1.5 for a sodium level within a patient will mean the same quantitative increase for each test. Furthermore, the present invention may allow an interpreter to recognize problems and undertake corrective actions sooner. By utilizing unitized values according to the present invention, an interpreter could more easily recognize that a unitized sodium value of 1.5 is more severe than a unitized glucose value of 1.1 and, therefore, take action to rectify the sodium level.
The present invention may be applied to the interpretation of any type of laboratory test data, both biological and non-biological, and particularly where the test data is interpreted
by referring to a reference range. The present invention is particularly useful where multiparametric data is obtained from testing.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 schematically illustrates operations for unitizing test data having different reference ranges, according to an embodiment of the present invention.
FIG. 2 illustrates an exemplary data processing system in which the present invention may be implemented.
DETAILED DESCRIPTION OF THE INVENTION
The present invention now is described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention
may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout.
As will be appreciated by one of skill in the art, the present invention may be embodied as a method, data processing system, or computer program product. Accordingly, the present
invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product on a computer-readable storage medium having computer-readable program code means embodied in the medium. Any suitable
computer readable medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
The present invention is described below with reference to flowchart illustrations of methods, apparatus (systems) and computer program products according to an embodiment of the
invention. It will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a
machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the
flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing
apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus
to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which
execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions
and program instruction means for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose
hardware and computer instructions.
Referring now to FIG. 1, operations for unitizing test data having different reference ranges, according to an embodiment of the present invention, are schematically illustrated.
Operations include: unitizing a reference range (Block 100); determining the number of measured values in the reference range (Block 102); unitizing the equal halves of the reference range (Block 104); determining the number of measured values in the upper one half and the lower one half of the reference range (Block 106); determining the fractional value of the measured values in the equal halves of the reference range (Block 108); equilibrating the measured values (Block 110); unitizing the equilibrated values (Block 112); unitizing the analytical variation (Block 114); and reporting the unitized values (Block 116). Each of these operations will be described in detail below.
Referring now to FIG. 2, an exemplary data processing system in which the present invention may be implemented is illustrated. As seen in FIG. 2, a data processor 10 may have an
operating system 11 resident therein. An application program 12 for performing operations according to the present invention typically executes via the operating system 11. The
processor 10 displays information on a display device 13 which has a plurality of picture elements (collectively referred to as a screen). The information is displayed on the display
device 13, preferably within a graphical user interface. The contents of the screen of the display device 13 and the appearance of a graphical user interface, may be controlled or
altered by an application program 12 or the operating system 11 either individually or in combination. For obtaining input from a user, the operating system 11 and the application program 12 may utilize user input devices 14. User input devices 14 may include a pointing device 15, such as a mouse, and a keyboard 16 or other input devices known to those of skill
in the art.
Table 2 below defines abbreviations used throughout this disclosure.
TABLE 2
Abbreviation   Definition
EV             equilibrated value
FV             fractional value
HRRS           half reference range spread
LRRL           lower reference range limit
MV             measured value
RR             reference range
RRS            reference range spread
UAV            unitized analytical variation
URRL           upper reference range limit
UV             unitized value
It is understood that the term "measured value" as used herein is synonymous with the term "test result", such as a blood glucose level produced by a blood test.
Unitizing a Reference Range
An initial operation of the present invention involves unitizing a reference range (Block 100). Unitization of a reference range is defined as grouping all measured values for each
laboratory (or other) test within a reference range into a single number. By way of explanation, inherent in the definition of a RR is the possibility that any MV that falls within a RR is as potentially normal, in an equivalent biological sense, as any other MV within the same RR, or any other RR. Since this potential exists, it is conceptually feasible to consider all MVs in any RR as equivalent values. As a derivative of this consideration, all of the MVs in any RR can be conceptually consolidated into one number which would have no need of any assigned concentration units.
For example, if the number 1.0 is selected, the reference range may be any value greater than 0.0 and equal to or less than 1.0, since this comprises the number 1.0. However, it is to
be understood that any number can be used for this purpose. From this concept, it does follow that if one test has a MV of 100 milligrams per deciliter (mg/dl) in its RR and another
test has a MV of 10 millimoles per liter (mmol/L) in its RR, these different MVs lose the concentration units (mg/dl and mmol/L); and, the 100 MV and the 10 MV are biological
equivalents. This conceptual process is an initial necessary process in order to restructure the ordinarily disparate MVs in RRs into one number. The equation that expresses this
relationship is:

0 < Unitized RR <= 1.0 (Equation 1)

Equation 1 states that a unitized RR is greater than zero but less than or equal to one.
Determining the Number of Measured Values in the Reference Range
Next a determination of the number of measured values that are included in the reference range from the lower reference range limit to, and including, the upper reference range limit
is made (Block 102). The number of measured values in this range is determined by both the unique analytical properties of the test and by the actual measured values discovered in the reference (normal) population, and represents a fixed number of MVs peculiar to each test and the methodology used in the testing procedure. The RR is not only different for each test,
but the RR will also change for any given test if the methodology for that test is changed, which further adds to the memory requirements of an interpreter and the need for advance
notice by a laboratory when methodology is changed. The number of MVs within the RR is calculated from the high and low numbers listed under the heading "Reference Range", or sometimes otherwise described as "Reference Interval" or "Normal Range". The total number of measured values in the reference range spread is preferably determined by Equation 2 below:

RRS = (URRL - LRRL) + 1 (Equation 2)
The URRL and LRRL, when used in Equation 2, refer to the numbers given in a traditional report (e.g., Table 1 above) under the heading of RR, and not to the unitized reference range
of 1.0.
Equation 2 includes all units in the reference range by the addition of a whole integer to the difference between the minimum and maximum of the reference range. For example, the RR
for glucose may be given as 68-113 mg/dl. Since the UV will be unit-less, the concentration units of mg/dl can be dropped. Under this arrangement the URRL=113 and the LRRL=68. The RR numbers are then inserted into Equation 2, as follows:

RRS = (113 - 68) + 1 = 46
In the process of eventually converting measured values to unitized values, all reported decimal values in the reference ranges and all measured values reported in decimals need to be
converted to their equivalent whole values. For example, a reference range of 4.6-8.7 is converted to 46-87 and the number of MVs in the RRS is (87-46)+1=42. Measured decimal values are
also converted to their equivalent whole numbers. All subsequent calculations are performed on the converted whole numbers. This modification is required because the interpreter would
have most likely used "tenths" of a number in evaluating increases or decreases in measured values. Once decimal values are converted to their equivalent whole numbers, the
subsequently calculated UV reflects the original MV as it was expressed in tenths, or other decimal points.
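As a brief illustration of this scaling step, the following Python lines are an inferred sketch, not part of the patent's disclosure (the variable names are ours):

    lrrl, urrl = 4.6, 8.7
    scale = 10                                            # one reported decimal place
    lrrl, urrl = round(lrrl * scale), round(urrl * scale) # 46, 87
    rrs = (urrl - lrrl) + 1                               # Equation 2: (87 - 46) + 1 = 42

The round() calls guard against floating-point artifacts such as 4.6 * 10 evaluating to 45.999...; measured values are scaled by the same factor before the later equations are applied.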
The traditionally given upper and lower limits of a RR represent ±2 standard deviations of the MVs from the mean of the normal population studied, which includes only 95% of the MVs of the normal population. The 5% that are not included in the RR are represented by a deletion of 2.5% at both the upper and lower limits of the RR. Consequently, the determination of the RRS does not include the 2.5% of the MVs at either extreme of the RR. A total number of the MVs in the RRS could be determined by returning the deleted 5% to the RRS; however, since interpreters of MVs are conditioned to interpreting MVs on the traditional basis of a RR defined as ±2 standard deviations from the mean, the calculation for the total RRS is not incorporated herein.
Unitizing the Equal Halves of the Reference Range
The initial unitization of the reference range and the subsequent determinations of the RRS need to be restructured for the following reason. A low abnormal measured value,
subsequently unitized to -0.5, would be the analytical equivalent, in the opposite direction, of a high abnormal unitized value of +1.5, since both values would be 0.5 units beyond the lower and upper limits, respectively, of the unitized reference range. In a reporting format a reviewer would find these equidistant analytical values disconcerting since they are
different numbers.
The above anomaly can be rectified by dividing the reference range into equal halves and then unitizing the separate equal components (Block 104). The mean of the reference range then
becomes 0.0. The range from the mean to the LRRL is defined as 0.0 to -1.0. The range from the mean to the URRL is defined as 0.0 to +1.0. However, as was indicated in the initial unitization of the reference range, any number can be used. For purposes of this discussion, the number 1.0 is retained, but modified to -1.0 and +1.0 for the equal halves of the
reference range spread.
Determining the Number of Measured Values in Upper and Lower Halves of Reference Range
Because the RR is divided into equal halves, the previously calculated number of MVs in the RRS (Equation 2) needs to be divided between the two halves. The number of MVs in each half
of the RRS can be determined by the following equation:

HRRS = RRS / 2 (Equation 3)
For example, the HRRS for glucose is:

HRRS = 46 / 2 = 23
Determining the Fractional Value of the Measured Values in the Upper and Lower Halves of the Reference Range
One of the components of the equation used to calculate UVs is the fractional value (FV). FV is defined as the reciprocal value of the MVs that are present in the HRRS. The fractional value of each of the measured values in the HRRS is determined by Equation 4:

FV = 1 / HRRS (Equation 4)
For example, the FV for glucose is:

FV = 1 / 23 = 0.043, rounded to 0.04
The FV can be expressed as either a fraction or a percent.
The division of the RRS into two equal halves, each containing 2 standard deviations of the normal distribution, is based on the conventional definition of normalcy as ±2 standard deviations of the mean. It would be feasible to unitize on the basis of ±1 standard deviation from the mean, which would define the unitary values in synchrony with unitary divisions
of the standard deviations. For example +1 standard deviation would equal +1.0 unit, -1 standard deviation would equal -1.0 unit;+2 standard deviations would equal +2.0 units; and, -2
standard deviations would equal -2.0 units. This additional separation of the reference range brings greater sensitivity to the process, and fractional values can be proportioned
accordingly. Alternatively, it would also be feasible to unitize on the basis of ±3 or more standard deviations per equal half of the RR. This contraction of the reference range would reduce the sensitivity of the process, if this were desired. However, these extended processes are not performed herein because the traditional definition of normalcy is ±2 standard deviations from the mean.
Equilibrating Measured Values
Prior to the conversion of a measured value to a unitized value, the measured value needs to be equilibrated with the mean of the reference range (Block 110), since the mean now represents zero. (In the traditional reporting process, zero does not represent the mean.) Equilibration is accomplished by removing from the measured value the same number of units that were removed in converting the mean to zero. Equation 5 below illustrates the conversion of the measured value (MV) to an equilibrated value (EV):

EV = MV - (mean of the RR) (Equation 5)
Fundamental to this concept is that equilibrated values less than the mean will show increasingly more negative values, whereas equilibrated values above the mean will show
increasingly more positive values. For example, the mean of the RR for glucose is (68+113)/2=90.5, rounded to 91. A MV of 71 would have an EV of 71-91=-20; a MV of 91 would have an EV of 91-91=0; and, a MV of 111 would have an EV of 111-91=+20. Because the + or - EV will be multiplied by the always positive FV in a subsequent step to obtain the UV, the UV will carry
the same + or - sign that the EV carried.
Unitizing the Equilibrated Values
Once a measured value has been equilibrated (Block 110), the result can be unitized by determining the number of unitized values (UVs) that are present in the equilibrated values (Block 112). Equation 6 illustrates how to determine the unitized value (UV) of an equilibrated value (EV):

UV = EV x FV (Equation 6)
Utilizing the operations of Block 100-Block 112, all of the different tests that were measured can be converted to UVs. All of the tests will now be fractions, or multiples, of the
unitized upper or lower halves of the reference range. All measured values will be proportionately represented in comparison to one another. For example, a unitized value of +0.25 may represent a 40 measured value for one test and a 20 measured value for another test. However, when these measured values are unitized they could both represent +0.25 and the
interpreter would then very quickly make a judgment of analytical equivalence between these two measured results.
Although analytical equivalence of two (or more) tests is indicated by the same unitized value, it does not follow that these equivalent values connote the same priority of urgency to
the interpreter. For example, equivalent unitized values for potassium and alkaline phosphatase do not obviate the greater sense of immediate concern for illness that the evaluation of an elevated unitized potassium value brings when judged against an analytically equivalent elevated unitized alkaline phosphatase value. However, the interpreter is now free to make decisions without having to deal with the ambiguity of visually non-equivalent reported values. This reporting of unitized values eliminates the need for the interpreter to make mental calculations from the traditional report on the relative position of any measured value (raw data) to its reference range. All tests will now be proportionately equivalent. The interpreter can assess whether a glucose of +1.5 is medically more critical than a creatinine of +1.5 without having to do mental calculations on the relative deviation of the measured
value from the upper reference range.
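Before working through Example 1, it may help to restate Blocks 100 through 112 computationally. The Python sketch below is purely illustrative and not part of the disclosed invention; the function name and the rounding conventions (mean rounded half up, FV rounded to two decimal places) are inferred from the worked numbers in Example 1:

    import math

    def unitize(mv, lrrl, urrl):
        """Transform a measured value (MV) into a unitized value (UV)."""
        rrs = (urrl - lrrl) + 1           # Equation 2: reference range spread
        hrrs = rrs / 2.0                  # Equation 3: half reference range spread
        fv = round(1.0 / hrrs, 2)         # Equation 4: fractional value
        mean = math.floor((lrrl + urrl) / 2.0 + 0.5)  # mean of RR, rounded half up (90.5 -> 91)
        ev = mv - mean                    # Equation 5: equilibrated value
        return round(ev * fv, 2)          # Equation 6: unitized value

    # Reproduces Table 4 below:
    # unitize(145, 136, 147) ->  0.51     unitize(111, 68, 113) -> 0.80
    # unitize(139, 136, 147) -> -0.51     unitize(105, 68, 113) -> 0.56
    # unitize(155, 136, 147) ->  2.21     unitize(121, 68, 113) -> 1.20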
EXAMPLE 1
The measured values obtained from two different biological laboratory tests (i.e., sodium and glucose levels) are set forth below in Table 3. These MVs will be processed according to
the operations represented by Blocks 100-112 described above. The resulting unitized values are tabulated in Table 4.
TABLE 3
Test      MV#1 (-2)   MV#2 (-8)   MV#3 (+8)   Reference Range
Sodium    145         139         155         136-147
Glucose   111         105         121         68-113
In Table 3, MV#1, MV#2, and MV#3 are -2, -8, and +8, respectively, from the upper limit of the reference range.
1) Unitize reference range: 0.0-1.0
2) Determine number of MVs in reference range (Equation 2):
   Sodium:  RRS = (147 - 136) + 1 = 12
   Glucose: RRS = (113 - 68) + 1 = 46
3) Determine number of MVs in upper and lower halves of reference range (Equation 3):
   Sodium:  HRRS = 12 / 2 = 6
   Glucose: HRRS = 46 / 2 = 23
4) Determine fractional value of MVs in upper and lower halves of reference range (Equation 4):
   Sodium:  FV = 1/6, rounded to 0.17
   Glucose: FV = 1/23, rounded to 0.04
5) Convert each MV to an equilibrated value (EV) (Equation 5):
   Sodium:  mean = (136 + 147)/2 = 141.5, rounded to 142; EVs: 145 - 142 = 3; 139 - 142 = -3; 155 - 142 = 13
   Glucose: mean = (68 + 113)/2 = 90.5, rounded to 91; EVs: 111 - 91 = 20; 105 - 91 = 14; 121 - 91 = 30
6) Unitize equilibrated values (Equation 6):
   Sodium:  UVs: 3 x 0.17 = 0.51; -3 x 0.17 = -0.51; 13 x 0.17 = 2.21
   Glucose: UVs: 20 x 0.04 = 0.80; 14 x 0.04 = 0.56; 30 x 0.04 = 1.20
TABLE 4
Test      MV#1 (-2)   UV#1    MV#2 (-8)   UV#2     MV#3 (+8)   UV#3
Sodium    145         0.51    139         -0.51    155         2.21
Glucose   111         0.80    105         0.56     121         1.20
From the unitized data, an interpreter can readily determine that a sodium value of 145 (a -2 MV) with a UV of 0.51 is essentially the analytical equivalent of a glucose value of 105
(a -8 MV) with a UV of 0.56. Similarly, an interpreter can quickly observe that a sodium value of 139 (a -8 MV) is significantly different from a glucose of 105 (also a -8 MV), which
have UVs of -0.51 and 0.56, respectively.
The traditional method of reporting laboratory data, as seen in Table 4 under the columns giving the MVs, does not allow one to determine the relative relationships that can be
readily perceived by evaluating the unitized values seen in the columns listing the UVs. After many years of interpreting the traditionally reported raw data (MVs), users may develop variably refined cognitive perceptions of relative normalcy and abnormalcy of MVs. However, with the reporting of UVs, the relative normalcy and abnormalcy are quantified on the report, and a new user will more quickly develop a refined capacity to engage in multiparametric analyses, with sundry benefits to the diagnostic process(es), many years prior to that learned by only utilizing the current state of the art.
Unitizing the Analytical Variation
All measured values have a degree of analytical variability, due to imprecision that is inherent in all laboratory and other testing instruments and the reagents used to determine the
MVs. This variation is determined by running controls of known values with the samples from which measured values are determined. The analytic variation of control samples is usually subjected to statistical analysis wherein means and standard deviations are calculated. Often the mean is divided into one standard deviation to yield a value defined as the coefficient of variation. In conventional reports neither the standard deviation nor the coefficient of variation is reported with the measured value. The only values traditionally reported with measured values are the URRL and the LRRL, which constitute the reference range. If the analytical variations were available, the interpreter would be better able to make
interpretive judgments on measured values that are near the URRL and the LRRL.
In most analytical systems at least two levels of known control are measured, in order to assess the analytical variability of the testing system near its lower and upper linear
limits. The control closest to the measured value may be the most applicable value; however, in practice it may be a complex process to align the measured value and the closest control value. The average of the standard deviations of the controls can be used. The standard deviations of the controls are traditionally calculated from the MVs of the controls in such a manner that they do not need to be equilibrated. Since there is no need to perform the equilibration step on the standard deviation of the control value, and the FV has already been calculated, the conversion to the unitized analytical variation (UAV) (Block 114) may be accomplished by Equation 7:

UAV = SD x FV (Equation 7)
Instead of using the standard deviation(s) of the control values as the measure of the analytical variability of a testing system, other measures of variance can be used and could
include the entities of 1) the mean absolute deviation from the mean, or 2) the mean deviation from the median, or 3) any other useful approach to determining variance. Due to the traditional use of standard deviations to express the analytical variability of the control values, the standard deviation is used herein.
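Continuing the illustrative Python sketch introduced before Example 1, Equation 7 involves no equilibration step, so it reduces to a single multiplication (again an inferred sketch, not the patent's code):

    def unitized_analytical_variation(sd, fv):
        # Equation 7: UAV = SD x FV; the control standard deviation is
        # multiplied directly by the test's fractional value.
        return round(sd * fv, 2)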
Reporting the Unitized Values
The following eight examples represent formats that can be used in reporting the unitized values, as well as traditional values (Block 116). The MVs, RRs and the standard deviations
(used to calculate the UAVs) set forth below are representative and not specific to an individual, a particular testing system, or a designated lab site.
EXAMPLE 2
This is a traditional report in which the abnormal measured values have been accentuated for instant recognition. The abnormal values have been notated as HI or LO. Abnormal values
may also be printed in bold, offset, or printed in a different color in order to alert an interpreter to the abnormal value. The tests are listed alphabetically. They are not ranked by high or low values because the data in this traditional report does not allow the interpreter to determine the relative value among the various tests.
TABLE 5 - TRADITIONAL REPORT
TEST                       RESULT          REFERENCE RANGE
Albumin                 LO 3.3 g/dL        3.4-5.3
Bilirubin, total        HI 6.2 mg/dL       0.2-1.3
Calcium                    9.4 mg/dL       8.9-10.8
Chloride                LO 93 mEq/L        97-109
Creatinine                 0.7 mg/dL       0.6-1.4
Glucose                 HI 270 mg/dL       68-113
Phosphatase, alkaline      62 IU/L         23-140
Potassium                  4.4 mEq/L       3.7-5.3
Protein, total             7.7 g/dL        6.1-8.2
Sodium                  LO 131 mEq/L       136-147
Transferase (GOT)       HI 70 IU/L         4-39
Urea Nitrogen              17 mg/dL        7-24
EXAMPLE 3
Example 3 represents the unitized report in an enhanced manner, with the tests ranked highest to lowest. This example demonstrates the deletion of the reference ranges and their
replacement with a ±1.0 unitized reference range. Also reported is the unitized analytical variation. These tests can be ranked because all of the tests have been unitized and the unitized results represent the true relationships among the tests. A greater enhancement of the data could be accomplished by color printing the background so that the higher normal values (greater than +0.5 to +1.0) and the lower normal values (less than -0.5 to -1.0) would be in amber; mid normal ranges (-0.5 to +0.5) would be in green; abnormally low values would be in blue; and, abnormally high values would be in red, or in any other color enhanced schemes.
TABLE 6 - UNITARY REPORT
TEST                       RESULT    ±UAV
Bilirubin, total        HI +9.1      0.3
Glucose                 HI +7.8      0.2
Transferase (GOT)       HI +2.7      0.2
Protein, total             +0.5      0.1
Urea Nitrogen              +0.2      0.2
Potassium                  -0.1      0.1
Phosphatase, alkaline      -0.3      0.1
Calcium                    -0.5      0.1
Creatinine                 -0.7      0.1
Albumin                 LO -1.1      0.0
Chloride                LO -1.5      0.2
Sodium                  LO -1.8      0.2
Reference Range = 0.0 ± 1.0
UAV = Unitized Analytical Variation
EXAMPLE 4
Example 4 represents the unitized data restructured to present the tests in a horizontal graphic format. The hierarchical ranking is retained. This manner of presentation is allowed
due to the unitization of the data. The data could also be reverse ranked if desired. And, the background could be color enhanced, if desired. [Figure: the ranked unitized values presented in a horizontal graphic format; compare the graphic portion of Example 6 below.]
EXAMPLE 5
Example 5 displays a composite report including both the traditional report and the newly invented unitized report. This example represents the combining of Examples 2 and 3. In this
manner the interpreter could compare the conventionally used reporting format with the new Unitary Report, without any loss of informational content. In this example the traditional report format has been altered to place the tests in the same rank that can be found in the Unitary Report. This minor rearrangement of the traditional alphabetically formatted data
further enhances the interpreter's ability to compare the results.
COMBINED TRADITIONAL AND UNITARY REPORTS

Traditional Report
TEST                       RESULT          REFERENCE RANGE
Bilirubin, total        HI 6.2 mg/dL       0.2-1.3
Glucose                 HI 270 mg/dL       68-113
Transferase (GOT)       HI 70 IU/L         4-39
Protein, total             7.7 g/dL        6.1-8.2
Urea Nitrogen              17 mg/dL        7-24
Potassium                  4.4 mEq/L       3.7-5.3
Phosphatase, alkaline      62 IU/L         23-140
Calcium                    9.4 mg/dL       8.9-10.8
Creatinine                 0.7 mg/dL       0.6-1.4
Albumin                 LO 3.3 g/dL        3.4-5.3
Chloride                LO 93 mEq/L        97-109
Sodium                  LO 131 mEq/L       136-147

Unitary Report
TEST                       RESULT    ±UAV
Bilirubin, total        HI +9.1      0.3
Glucose                 HI +7.8      0.2
Transferase (GOT)       HI +2.7      0.2
Protein, total             +0.5      0.1
Urea Nitrogen              +0.2      0.2
Potassium                  -0.1      0.1
Phosphatase, alkaline      -0.3      0.1
Calcium                    -0.5      0.1
Creatinine                 -0.7      0.1
Albumin                 LO -1.1      0.0
Chloride                LO -1.5      0.2
Sodium                  LO -1.8      0.2
Reference Range = 0.0 ± 1.0
±UAV = ± Unitized Analytical Variation
EXAMPLE 6
Example 6 represents the traditional data and the graphed unitized data combined into a composite report. This example essentially combines Examples 2, 3, and 4 into one report,
allowing maximal extraction of information from the data.
COMBINED TRADITIONAL AND UNITARY REPORTS - WITH GRAPH

Traditional Report
TEST                       RESULT          REFERENCE RANGE
Bilirubin, total        HI 6.2 mg/dL       0.2-1.3
Glucose                 HI 270 mg/dL       68-113
Transferase (GOT)       HI 70 IU/L         4-39
Protein, total             7.7 g/dL        6.1-8.2
Urea Nitrogen              17 mg/dL        7-24
Potassium                  4.4 mEq/L       3.7-5.3
Phosphatase, alkaline      62 IU/L         23-140
Calcium                    9.4 mg/dL       8.9-10.8
Creatinine                 0.7 mg/dL       0.6-1.4
Albumin                 LO 3.3 g/dL        3.4-5.3
Chloride                LO 93 mEq/L        97-109
Sodium                  LO 131 mEq/L       136-147

Unitary Report
                            ABN LO                          HI ABN
TEST            RESULT      LO   (NORMAL   /   NORMAL)      HI
Bili, T     HI  +9.1        :....(....:..../....:....)....:        X
Glucose     HI  +7.8        :....(....:..../....:....)....:      X
Trans(GOT)  HI  +2.7        :....(....:..../....:....)....:   X
Prot, T         +0.5        :....(....:..../....X....)....:
Urea N          +0.2        :....(....:..../.X..:....)....:
Potassium       -0.1        :....(....:...X/....:....)....:
Phos, Alk       -0.3        :....(....:.X../....:....)....:
Calcium         -0.5        :....(....X..../....:....)....:
Creat           -0.7        :....(..X.:..../....:....)....:
Albumin     LO  -1.1        :...X(....:..../....:....)....:
Chloride    LO  -1.5        X....(....:..../....:....)....:
Sodium      LO  -1.8      X :....(....:..../....:....)....:
                            -1.0          0.0          +1.0
EXAMPLE 7
Example 7 illustrates the combination of two different styles of reporting unitized data, according to the present invention. The traditional reporting format has not been
incorporated into this report. In some circumstances, it may not be necessary to utilize the traditional format.
UNITARY REPORT - TABULAR AND GRAPH

TEST                       RESULT    ±UAV
Bilirubin, total        HI +9.1      0.3
Glucose                 HI +7.8      0.2
Transferase (GOT)       HI +2.7      0.2
Protein, total             +0.5      0.1
Urea Nitrogen              +0.2      0.2
Potassium                  -0.1      0.1
Phosphatase, alkaline      -0.3      0.1
Calcium                    -0.5      0.1
Creatinine                 -0.7      0.1
Albumin                 LO -1.1      0.0
Chloride                LO -1.5      0.2
Sodium                  LO -1.8      0.2

                            ABN LO                          HI ABN
TEST            RESULT      LO   (NORMAL   /   NORMAL)      HI
Bili, T     HI  +9.1        :....(....:..../....:....)....:        X
Glucose     HI  +7.8        :....(....:..../....:....)....:      X
Trans(GOT)  HI  +2.7        :....(....:..../....:....)....:   X
Prot, T         +0.5        :....(....:..../....X....)....:
Urea N          +0.2        :....(....:..../.X..:....)....:
Potassium       -0.1        :....(....:...X/....:....)....:
Phos, Alk       -0.3        :....(....:.X../....:....)....:
Calcium         -0.5        :....(....X..../....:....)....:
Creat           -0.7        :....(..X.:..../....:....)....:
Albumin     LO  -1.1        :...X(....:..../....:....)....:
Chloride    LO  -1.5        X....(....:..../....:....)....:
Sodium      LO  -1.8      X :....(....:..../....:....)....:
                            -1.0          0.0          +1.0
EXAMPLE 8
This example illustrates the utilization of unitized data in the setting of repeat analyses of the same test. Both traditional and unitized data according to the present invention
have been incorporated into the report. Also, day-to-day running averages of both types of data and a graphic report of the unitized data have been included. The data may also be presented without averaging, or a cumulative type of running average may be used. The type of averaging used can be adapted to accommodate the user's requirements. Again, maximal
information has been extracted from the primary data.
REPEAT ANALYSES OF THE SAME TEST

               TRADITIONAL REPORT       UNITARY REPORT
TEST           RESULT      AVG          RESULT      AVG
Glucose        270         --           +7.8        --
               200         235          +6.3        +7.0
               160         180          +3.9        +5.1
               111         136          +2.0        +2.9
               94          103          +0.5        +1.2
               90          92           +0.1        +0.3
               89          90           0.0         0.0
               80          85           -0.3        -0.2
               78          79           -0.5        -0.4
               80          79           -0.5        -0.5
REFERENCE RANGE  68-113                 -1.0 - +1.0

UNITARY GRAPHIC REPORT
                         ABN LO                       HI ABN
TEST        RESULT       LO  (NORMAL  /  NORMAL)      HI        AVG
Glucose HI  +7.8         ...(....:..../....:....)...      X     --
            +6.3         ...(....:..../....:....)...     X      +7.0
            +3.9         ...(....:..../....:....)...    X       +5.1
            +2.0         ...(....:..../....:....)...   X        +2.9
            +0.5         ...(....:..../....X....)...            +1.2
            +0.1         ...(....:..../X...:....)...            +0.3
            +0.0         ...(....:....X....:....)...            0.0
            -0.3         ...(....:.X../....:....)...            -0.2
            -0.5         ...(....X..../....:....)...            -0.4
            -0.5         ...(....X..../....:....)...            -0.5
                         -1.0         0.0         +1.0
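The AVG columns above appear to be two-point running averages (each result averaged with the immediately preceding one, rounded half up); that reading is an inference from the data, and the Python sketch below reproduces the traditional-results column under that assumption:

    import math

    def running_avg(results):
        out = [None]                              # no average for the first run
        for prev, cur in zip(results, results[1:]):
            out.append(math.floor((prev + cur) / 2.0 + 0.5))  # round half up
        return out

    print(running_avg([270, 200, 160, 111, 94, 90, 89, 80, 78, 80]))
    # [None, 235, 180, 136, 103, 92, 90, 85, 79, 79]  (matches the AVG column)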
EXAMPLE 9
A vertical graphic presentation is illustrated in this example. As was mentioned in earlier examples, the background could be color enhanced. This type of presentation may be
preferred in some settings.
EXAMPLE 9 - VERTICAL GRAPH
[Figure: the unitized results presented as a vertical graph. Each test (sodium, chloride, albumin, creatinine, calcium, alkaline phosphatase, potassium, urea nitrogen, total protein, transferase, glucose, total bilirubin) occupies a column, with an X marking its unitized value on a vertical axis running from -2.0 to +2.0; the reference (normal) range band spans -1.0 (ref lo) to +1.0 (ref hi).]
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described,
those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents but also equivalent structures. Therefore, it is to be understood that the foregoing is illustrative of the present invention and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the following claims, with
equivalents of the claims to be included therein.
* * * * *
representable fibered category
Given an object $X$ in a category $B$, the domain functor $(Y\stackrel{f}\to X)\mapsto Y$ from the slice category $B/X$ to $B$ is a fibered category (i.e. a Grothendieck fibration).
Any fibered category isomorphic to $dom : B/X \to B$ is said to be representable. This is because, under the Grothendieck construction, representable fibered categories correspond precisely to
representable functors $B^{op} \to Set \hookrightarrow Cat$: the category $B/X$ is the category of elements of the representable functor $B(-,X)$.
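For example, the fiber of $dom : B/X \to B$ over an object $Y$ is the discrete category on the hom-set $B(Y,X)$: an object of the fiber is a morphism $f : Y \to X$, and a morphism of $B/X$ lying over $id_Y$ is a commuting triangle whose underlying morphism is $id_Y$, which forces its source and target to coincide. Accordingly $dom : B/X \to B$ is a discrete fibration, as one expects for a fibration classified by a $Set$-valued (rather than $Cat$-valued) functor.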
Dynamical systems
January 23rd 2011, 06:49 PM
Dynamical systems
Suppose that $y' = ay - H$, $y(0) = y_0$, models a fish population where $a$ and $H$ are positive constants and $0 < y_0 < H/a$. Find the time $t^*$ when the population dies out. [Hint: set y = 0 in solution
formula (9) and solve for t.]
The solution formula, which I have confirmed both algebraically and through the answer book, is $y = y_0 e^{at}$
How can I solve for t in this problem when y = 0? $e^{at}$ never reaches zero. The only way I know to do this is to make $y_0 = 0$. That does not solve for t, however.
The hint aside, I thought I would look for ways to make y' (the rate) = 0.
Therefore, $y = H/a$.
Where am I going wrong?
January 23rd 2011, 07:17 PM
Hi there, I get a solution to the D.E. as
$\displaystyle y(t) = y_0 e^{at}-\frac{H}{a} e^{at}+\frac{H}{a}$ | {"url":"http://mathhelpforum.com/differential-equations/169150-dynamical-systems-print.html","timestamp":"2014-04-18T17:14:50Z","content_type":null,"content_length":"4921","record_id":"<urn:uuid:377f3fd7-6c7c-4eb1-b106-09afa7315129>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
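Following the hint, setting $y(t) = 0$ and solving for $t$ gives the extinction time:

$\displaystyle \left(y_0 - \frac{H}{a}\right)e^{at} = -\frac{H}{a} \quad\Rightarrow\quad e^{at} = \frac{H}{H - ay_0} \quad\Rightarrow\quad t^* = \frac{1}{a}\ln\left(\frac{H}{H - ay_0}\right)$

Since $0 < y_0 < H/a$, the argument of the logarithm exceeds 1, so $t^* > 0$. (Note that $y = y_0 e^{at}$ solves $y' = ay$, i.e. the $H = 0$ case, which is why it never reaches zero.)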
Physics Question?
Hi Everyone,
I know that this is not a project related question and I promise not to make it a habit, but my physics teacher stumped our class with a question today and I thought of all the smart people on
Instructables one of you would know the answer. I have already searched Google to no avail. Is it possible to determine the volume of a flask given only the starting temperature/pressure and the final
temperature/pressure? The number of moles are unknown. If it is possible how do you determine it? Thanks for your help!
I think your question can be answered!
i think you will use this method:
you will use the law of Avogadros!
Avogadro's law (sometimes referred to as Avogadro's hypothesis or Avogadro's principle) is a gas law which states that, under the same condition of temperature and pressure, equal volumes of all
gases contain the same number of molecules. The law is named after Amedeo Avogadro who, in 1811,[1] hypothesized that two given samples of an ideal gas, of the same volume and at the same temperature
and pressure, contain the same number of molecules. Thus, the number of molecules or atoms in a specific volume of ideal gas is independent of their size or the molar mass of the gas. As an example,
equal volumes of molecular hydrogen and nitrogen contain the same number of molecules when they are at the same temperature and pressure, and observe ideal gas behavior. In practice, real gases show
small deviations from the ideal behavior and the law holds only approximately, but is still a useful approximation for scientists.
Avogadro's law is stated mathematically as:

V / n = k

where V is the volume of the gas, n is the amount of substance of the gas, and k is a proportionality constant.
The most significant consequence of Avogadro's law is that the ideal gas constant has the same value for all gases.
This means that:

pV / (nT) = constant

where p is the pressure of the gas in the cell, T is the temperature in kelvin of the gas, and n is the number of moles!
see the second formula!
Are you saying that K in the first formula is equal to the constant in the second formula?
yeah! it might be the given actually!
Can you write down the version of the ideal gas law with which you're already familiar? If you can, then just rearrange the terms to isolate the volume as a function of the other parameters.
Since both the volume and quantity of gas are unknown, you need two equations to solve for both of them. You're given two different (T,P) values, so you can write down two equations, one for (T0,P0)
and one for (T1,P1). Now you can solve them for V and n, and just don't bother evaluating n :-)
If you don't know how to solve a set of simultaneous equations (you would have learned it in your linear algebra course), that's okay. Once you've gotten the two equations above, come back here and
post them, and we can walk you through the rest of the process.
It seems like everything that I am trying cancels everything out.
V/n = RT1/P1 = k (k from V/n = k)
P1(nk)=nRT1 -> P1k = RT1 (which does not seem to help at all)
Can you please explain the process of simultaneous equations for this application? I have worked with PV=nRT and the other gas laws before and know how to rearrange them.
P1V=nRT1 -> V/n = RT1/P1
P2V=nRT2 -> V/n = RT2/P2
V and n are unknown, but are constant throughout.
I think the answer is "no".
I mean supposing I've got this flask. Also the flask closed so that its internal volume V and the number of moles n of gas inside are both constant. Also the flask has two sensors attached to it. A
thermometer measures T and an pressure gauge measures P.
Starting with the ideal gas law: P*V = n*R*T
I divide both sides by V. Then group together all the constant terms in parentheses, to get:
P = (n*R/V)*T
Now a plot of P as a function of T should be a straight line that intersects the origin (T=0,P=0) and has a slope of (n*R/V). With two or more good data points, you know, measurements for (T,P) I
should be able to find the slope of that line.
But then what? How do I decompose (n*R/V) = constant, without knowing either n or V? (Presumably I can look up R in the back of the book.)
Of course, if this were a real flask in my laboratory, I could just use a tape measure to measure its outside dimensions, and estimate its internal volume. Or maybe I could fill it with jelly beans
(of known density) or something like that. But tricks like these I could perform without the ideal gas law.
Actually, I should be able to find that constant with just one good data point, since:
(P/T) = (n*R/V) = constant
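Writing the two measurements out side by side makes the degeneracy explicit:

P1*V = n*R*T1
P2*V = n*R*T2

Dividing one equation by the other eliminates V and n at the same time, leaving P1/P2 = T1/T2, which is just a consistency check between the two readings. Every (T, P) pair pins down the same single combination (n*R/V), so V cannot be separated from n without extra information, such as a known quantity of gas or a direct measurement of the flask.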
Hi, Jack. The question posed is to extract just V, with N (or n) still unknown. The pair of (T/P) values is needed in order to set up the two equations in two unknowns. Then you can solve for V and n
simultaneously, and just ignore the second as a nuisance parameter.
Check out the ideal gas law. If the process is isentropic, I think you can deduce the answer.
I might add that the volume and # of moles remain constant throughout.
Yep. Ideal gas law problem. | {"url":"http://www.instructables.com/answers/Physics-Question/","timestamp":"2014-04-20T11:19:54Z","content_type":null,"content_length":"137605","record_id":"<urn:uuid:9ef89bf9-1f40-4d2f-ba69-9c9f95294be8>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Sampling distribution of the sample mean
March 13th 2008, 08:48 PM #1
Mar 2008
The Sampling distribution of the sample mean
A brand of water-softener salt comes in packages marked "net weight 40 lb".
The company that packages the salt claims that the bags contain an average of 40 lb of salt and that the standard deviation of the weights is 1.5 lb. Assume the weights are normally distributed.
Obtain the probability that the weight of one randomly selected bag of water-softener salt will be 39 lb or less, if the company's claim is true.
Please explain how to do this. Thanks
March 14th 2008, 04:24 AM #2
Let X be the random variable amount of salt (measured in lb) in bag.
X ~ Normal $\, (\mu = 40, \, \sigma = 1.5)$.
You need to calculate Pr(X ≤ 39).
Without knowing your method for calculating probabilities for a normal distribution (tables, calculator etc.), that's as far as I go ....
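For reference, standardising gives $\displaystyle z = \frac{39 - 40}{1.5} \approx -0.67$, so from standard normal tables $\Pr(X \leq 39) = \Pr(Z \leq -0.67) \approx 0.25$.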
Re: decoupling/ bypass capacitors at connectors
Wed, 5 Mar 97 09:23:45 -0800
Don't get me wrong! I am an advocate of stitching for general pcb design
guidelines. My point was for a single transmission line. Unless you are
dealing with ECL logic then the structure on the rise/fall edge is not
significant. If you are dealing with ECL then "emiter hang-up" can occur
which will have your logic designers walk around in circles with
unexplained propagation delay being over spec! In the 0.5 ns realm, I still
maintain that stitching will have no effect unless at the source,
end-of-line or during plane changes.
Best Regards,
______________________________ Reply Separator _________________________________
Subject: Re: decoupling/ bypass capacitors at connectors
Author: Non-HP-Larry.smith (Larry.smith@eng.sun.com) at hp-boise,shargw2
Date: 3/4/97 4:10 PM
> For 0.5ns requirements, the stitching has no effect except immediately
> adjacent to the signal trace or connectors. The displacement current will
> find the most proximal structure to propagate the image current.
> Hans Mellberg
> Consultant
The stitching of ground planes every square inch of the board will have
significant effect. Imagine a signal on the top layer of the card
that is referenced to a ground plane immediately below. The signal
goes down a via, through the board and takes off on a trace referenced
to another ground plane on the bottom of the board.
The question is, what happens to the return current. If the rise time
is .5 nSec, the 'length' of the rise time will be 3 inches, assuming 6
inches/nSec. (If these are exterior traces, the velocity may be closer
to 9 inches/nSec, making the distance traveled in the rise time 4.5 inches.)
Suppose there is a ground plane stitch via within 1 inch of the signal
via (1/3 of a rise time distance). True, there will be an impedance
discontinuity as signal current must depart from the return current as
the currents go through the vias. But, the time of flight (1/6 nSec)
will allow for 3 reflections across the impedance discontinuity during
the rise time. If we can get 3 reflections during the rise time, the
impedance discontinuity has minimal effect on the waveform.
If one of the reference planes is power, then decoupling will be
involved in the current path. The fidelity of the edge will be
degraded if a 'short' path is not provided for the return current.
Ground and power plane bounce will occur at via locations if this
path is not provided.
Larry Smith
Sun Microsystems | {"url":"http://www.qsl.net/wb6tpu/si-list2/pre99/0298.html","timestamp":"2014-04-18T08:06:23Z","content_type":null,"content_length":"4742","record_id":"<urn:uuid:060a50cf-d562-4852-955f-21f36e891cab>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 19
- Bull. Amer. Math. Soc. (N.S.)
"... Expander graphs are highly connected sparse finite graphs. They play an important role in computer science as basic building blocks for network constructions, error correcting codes, algorithms
and more. In recent years they have started to play an increasing role also in pure mathematics: number th ..."
Cited by 11 (0 self)
Add to MetaCart
Expander graphs are highly connected sparse finite graphs. They play an important role in computer science as basic building blocks for network constructions, error correcting codes, algorithms and
more. In recent years they have started to play an increasing role also in pure mathematics: number theory, group theory, geometry and more. This expository article describes their constructions and
various applications in pure and applied mathematics. This paper is based on notes prepared for the Colloquium Lectures at the
, 2007
"... Let E be an elliptic curve over Q. In 1988, Koblitz conjectured a precise asymptotic for the number of primes p up to x such that the order of the group of points of E over Fp is prime. This is
an analogue of the Hardy and Littlewood twin prime conjecture in the case of elliptic curves. Koblitz’s co ..."
Cited by 7 (3 self)
Add to MetaCart
Let E be an elliptic curve over Q. In 1988, Koblitz conjectured a precise asymptotic for the number of primes p up to x such that the order of the group of points of E over Fp is prime. This is an
analogue of the Hardy and Littlewood twin prime conjecture in the case of elliptic curves. Koblitz's conjecture is still widely open. In this paper we prove that Koblitz's conjecture is true on
average over a two-parameter family of elliptic curves. One of the key ingredients in the proof is a short average distribution result in the style of Barban-Davenport-Halberstam,
, 2010
"... In this article we answer a question proposed by Gelfond in 1968. We prove that the sum of digits of prime numbers written in a basis q> 2 is equidistributed in arithmetic progressions (except
for some well known degenerate cases). We prove also that the sequence.˛sq.p/ / where p runs through the p ..."
Cited by 7 (1 self)
Add to MetaCart
In this article we answer a question proposed by Gelfond in 1968. We prove that the sum of digits of prime numbers written in a base q ≥ 2 is equidistributed in arithmetic progressions (except for some well known degenerate cases). We prove also that the sequence (α s_q(p)), where p runs through the prime numbers, is equidistributed modulo 1 if and only if α ∈ ℝ ∖ ℚ.
- arXiv:0712.139, 2008.
"... Abstract. We develop novel techniques using only abstract operator theory to obtain asymptotic formulae for lattice counting problems on infinite-volume hyperbolic manifolds, with error terms
which are uniform as the lattice moves through “congruence ” subgroups. We give the following application to ..."
Cited by 5 (1 self)
Add to MetaCart
Abstract. We develop novel techniques using only abstract operator theory to obtain asymptotic formulae for lattice counting problems on infinite-volume hyperbolic manifolds, with error terms which are uniform as the lattice moves through "congruence" subgroups. We give the following application to the theory of affine linear sieves. In the spirit of Fermat, consider the problem of primes in the sum of two squares, f(c, d) = c^2 + d^2, but restrict (c, d) to the orbit O = (0, 1)Γ, where Γ is an infinite-index non-elementary finitely-generated subgroup of SL(2, Z) containing unipotent elements. We show that the set of values f(O) contains infinitely many integers having at most R prime factors for any R > 4/(δ − θ), where θ > 1/2 is the spectral gap and δ < 1 is the Hausdorff dimension of the limit set of Γ. If δ > 149/150, then we can take θ = 5/6, giving R = 25. The limit of this method is R = 9 for δ − θ > 4/9. This is the same number of prime factors as attained in Brun's original attack on the twin prime conjecture.
"... 1 Let E be an elliptic curve over Q. For a prime p of good reduction, let Ep be the reduction of E modulo p. We investigate Koblitz’s Conjecture about the number of primes p for which Ep(Fp) has
prime order. More precisely, our main result is that if E is with Complex Multiplication, then there exis ..."
Cited by 3 (0 self)
Add to MetaCart
Let E be an elliptic curve over Q. For a prime p of good reduction, let Ep be the reduction of E modulo p. We investigate Koblitz's Conjecture about the number of primes p for which Ep(Fp) has prime order. More precisely, our main result is that if E has Complex Multiplication, then there exist infinitely many primes p for which #Ep(Fp) has at most 5 prime factors. We also obtain upper bounds for the number of primes p ≤ x for which #Ep(Fp) is prime.
, 2007
"... Abstract. There is extensive numerical support for the prime-pair conjecture (PPC) of Hardy and Littlewood (1923) on the asymptotic behavior of π2r(x), the number of prime pairs (p, p + 2r) with
p ≤ x. However, it is still not known whether there are infinitely many prime pairs with given even diffe ..."
Cited by 2 (2 self)
Add to MetaCart
Abstract. There is extensive numerical support for the prime-pair conjecture (PPC) of Hardy and Littlewood (1923) on the asymptotic behavior of π2r(x), the number of prime pairs (p, p + 2r) with p ≤
x. However, it is still not known whether there are infinitely many prime pairs with given even difference! Using a strong hypothesis on (weighted) equidistribution of primes in arithmetic
progressions, Goldston, Pintz and Yildirim have recently shown that there are infinitely many pairs of primes differing by at most sixteen. The present author uses a Tauberian approach to derive that
the PPC is equivalent to specific boundary behavior of certain functions involving zeta’s complex zeros. Under Riemann’s Hypothesis (RH) and on the real axis these functions resemble pair-correlation
expressions. A speculative extension of Montgomery's classical work (1973) would imply that there must be an abundance of prime pairs.
"... The Twin Prime Conjecture states that there are infinitely many primes p such that p + 2 is also prime. A refined version of this conjecture is that π2(x), the number of prime twins lying below
a level x, satisfies ..."
Add to MetaCart
The Twin Prime Conjecture states that there are infinitely many primes p such that p + 2 is also prime. A refined version of this conjecture is that π2(x), the number of prime twins lying below a level x, satisfies the Hardy–Littlewood asymptotic π2(x) ~ 2C2·x/(log x)^2, where C2 is the twin prime constant.
"... Probabilistic methods are used to prove that for every E> 0 there exists a sequence A, of squares such that every positive integer is the sum of at most four squares in A, and A,(x) = 0(x=8 + y.
Key words and phrases: Sums of squares, additive bases, probabilistic methods in additive number theory. ..."
Add to MetaCart
Probabilistic methods are used to prove that for every ε > 0 there exists a sequence A_ε of squares such that every positive integer is the sum of at most four squares in A_ε, and A_ε(x) = O(x^(3/8+ε)). Key words and phrases: Sums of squares, additive bases, probabilistic methods in additive number theory. The set A of positive integers is a basis of order h if every positive integer is the sum of at most h elements of A. Lagrange proved in 1770 that the set of squares is a basis of order 4. Let A(x) denote the number of elements of
"... ABSTRACT. In these lectures we give an overview of the circle method introduced by Hardy and Ramanujan at the beginning of the twentieth century, and developed by Hardy, Littlewood and
Vinogradov, among others. We also try and explain the main difficulties in proving Goldbach’s conjecture and we giv ..."
Add to MetaCart
ABSTRACT. In these lectures we give an overview of the circle method introduced by Hardy and Ramanujan at the beginning of the twentieth century, and developed by Hardy, Littlewood and Vinogradov,
among others. We also try and explain the main difficulties in proving Goldbach’s conjecture and we give a sketch of the proof of Vinogradov’s three-prime Theorem. 1. ADDITIVE PROBLEMS In the last
few centuries many additive problems have come to the attention of mathematicians: famous examples are Waring’s problem and Goldbach’s conjecture. In general, an additive problem can be expressed in
the following form: we are given s ≥ 2 subsets of the set of natural numbers N, not necessarily distinct, which we call A1,..., As. We would like to determine the number of solutions of the equation
n = a1 + a2 + ·· · + as (1.1) for a given n ∈ N, with the constraint that a j ∈ A j for j = 1,..., s, or, failing that, we would like to prove that the same equation has at least one solution for
“sufficiently large ” n. In fact, we can not expect, in general, that for very small n there will be a solution of equation (1.1). Furthermore, depending on the nature of the sets A j, there may be
some arithmetical constraints
, 2002
"... These are notes of a series of lectures on sieves, presented during the Special ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=6382938","timestamp":"2014-04-17T22:49:01Z","content_type":null,"content_length":"34267","record_id":"<urn:uuid:518f39f9-da0e-4dad-87b1-7c3b4b64d99c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Formulating a hydroponic nutrient solution – Part II: Micro nutrients
Micro nutrients are just as important as macro nutrients. Many growers fail to have their water analysed for all elements, yet in some areas the water may already contain enough of a given micro element. By adding more with fertilizers, toxicity levels may be reached, which can reduce yields significantly. The following example will use a sample water analysis to formulate the micro nutrient fertilizer requirements according to the micro nutrient analysis in the table below.
The fertilizers that will be used in this example are listed below. The percentage of each relevant micro element is provided in the last column.
Solving for the micro elements is much easier than for the macro elements, since each fertilizer contains only one micro nutrient; there is no sequence or algorithm to follow to determine the amounts of fertilizers to be used. The first micro element is iron (Fe). Iron is always applied as a chelate, otherwise it is not absorbed by the plant in high enough quantities for normal growth. The iron chelate used in this example is NaFeEDTA; however, there are three other types that can also be used, and their selection will depend on the water quality and pH. The iron content of NaFeEDTA is 10.5 % and the required concentration is 0.3 mg.L^-1. The water analysis shows that there is 0.5 mg.L^-1 Fe in the water. However, this Fe is not chelated, and if the pH of the water is above 7.0, the iron uptake will be depressed. So the water analysis will in this case be ignored for Fe (though not for the other elements). The following method will be used to determine the amount of NaFeEDTA to be added to 1,000 L of water in order to achieve 0.3 mg.L^-1 Fe.
NaFeEDTA has 10.5 % Fe.
The following proportion should be used:

C[1] / C[2] = C'[1] / C'[2]

where the following applies:

C[1] = Concentration of known element
C[2] = Concentration of known fertilizer
C'[1] = Concentration of required element
C'[2] = Concentration of unknown fertilizer

The following values will be replaced in the formula:

C1 = 10.5 mg.L^-1 Fe
C2 = 100 mg.L^-1 NaFeEDTA
C'1 = 0.3 mg.L^-1 Fe
C'2 = X (concentration of NaFeEDTA in solution)

By replacing all the values we have the following:

10.5 / 100 = 0.3 / C'2

We need to solve for C'2:

C'2 = (0.3 × 100) / 10.5 = 2.86 mg.L^-1 NaFeEDTA, i.e. 2.86 g per 1,000 L.
Thus in order to have a concentration of 0.30 mg.L^-1 Fe in the solution, we need to add 2.86 g of NaFeEDTA in 1,000 litre of water.
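The same dilution arithmetic (and the simplified formula given below) is repeated for every element on this page, so it can be captured in a few lines of code. The following sketch is an addition to this article, not part of it; the function name is arbitrary and the figures are simply the example values used here.

    #include <stdio.h>

    /* Grams of fertilizer to add to a tank in order to raise an element's
       concentration by target_mgL, given the element's percentage in the
       fertilizer. (Illustrative sketch only.) */
    double fertilizer_grams(double target_mgL, double element_pct, double tank_L)
    {
        double fert_mgL = target_mgL * 100.0 / element_pct; /* the proportion above */
        return fert_mgL * tank_L / 1000.0;                  /* mg/L x L -> grams    */
    }

    int main(void)
    {
        printf("NaFeEDTA: %.2f g\n", fertilizer_grams(0.30, 10.5, 1000.0)); /* 2.86 */
        printf("H3BO3:    %.2f g\n", fertilizer_grams(0.40, 17.8, 1000.0)); /* 2.25 */
        return 0;
    }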
The above procedure can be simplified using the following formula, which is actually the second-last step in the above procedure:

grams of fertilizer per 1,000 L = (required mg.L^-1 of element × 100) / (% of element in fertilizer)

Now calculate the Manganese (Mn) requirement. We will be using MnSO4.4H2O (approximately 24.66 % Mn), and the required concentration is 0.20 mg.L^-1. The amount of manganese sulphate to add to the irrigation water (in a 1,000 L tank) will be:

(0.20 × 100) / 24.66 = 0.811 g

Thus in order to have a concentration of 0.20 mg.L^-1 Mn in the solution, we need to add 0.811 g of MnSO4.4H2O in 1,000 litre of water.

Now we will calculate the Boron (B) requirement. We will use Boric acid:

H3BO3: 17.80 % B

According to the water analysis there is 0.1 mg.L^-1 B in the irrigation water, so the amount of B required from fertilizers is 0.4 mg.L^-1. The amount of Boric acid to add to the irrigation water (in a 1,000 L tank) will be:

(0.4 × 100) / 17.80 = 2.25 g

Thus in order to have a concentration of 0.50 mg.L^-1 B in the solution, we need to add 2.25 g of H3BO3 in 1,000 litre of water.
We will now solve the Copper (Cu) requirements. We will use Copper sulphate.
According to the water analysis there is no Cu in the irrigation water, so the amount of Cu required from fertilizers is 0.05 mg.L^-1. Thus in order to have a concentration of 0.05 mg.L^-1 Cu in the solution, we need to add 0.20 g of Copper sulphate in 1,000 litre of water.
Next, calculate the Molybdenum (Mo) requirement. We will use Ammonium molybdate.
According to the water analysis there is no Mo in the irrigation water, so the amount of Mo required from fertilizers is 0.5 mg.L^-1. Thus in order to have a concentration of 0.50 mg.L^-1 Mo in the solution, we need to add 0.87 g of Ammonium molybdate in 1,000 litre of water.
Now calculate the Zinc (Zn) requirements. We will use NaZnEDTA.
According to the water analysis there is 0.02 mg.L^-1 Zn in the irrigation water, so the amount of Zn required from fertilizers is 0.03 mg.L^-1. Thus in order to have a concentration of 0.05 mg.L^-1 Zn in the solution, we need to add 0.2 g of NaZnEDTA in 1,000 litre of water. Like Fe, Zn availability to plants is pH dependent: if the pH is high, Zn will precipitate in insoluble forms.
The total amounts of fertilizers added to supply micro nutrients are listed below. As can be seen from the table, each fertilizer supplies only one element, in comparison with the macro fertilizers where each fertilizer contains at least two nutrients. The amounts are also much smaller than for the macro elements. If mixing small volumes such as 1,000 L, it would be easier to buy pre-mixed micro nutrients from a fertilizer company, because weighing 0.2 g requires special equipment which is very expensive. If however larger amounts are mixed, using either 10,000 L tanks or mixing stock solutions (as explained in the next section), the amounts are easier to measure and measurement errors will be insignificant.
In the above examples a 1,000 L nutrient tank is used to mix the fertilizers. In a small home hydroponic system this would be adequate, but for commercial systems a 1,000 L tank would be hopelessly too small. The minimum size for a bag culture system would be 10,000 L, which would serve a 5,000 m² hydroponic farm. The amounts of fertilizers required to supply the macro nutrients as provided in the 'sample requirement table' are listed in the table below, and the total amounts of fertilizers needed to supply the micro nutrients are listed in the table thereafter, if a 10,000 L tank is used. As can be seen, the amounts are 10 times more than in a 1,000 L tank and they are easier to measure. A 1 g mistake, for instance for the 3.0 g of Fe chelate, would mean either 2 g or 4 g being measured, but for a 10,000 L tank it would mean either 29 g or 31 g being measured. The difference in the small tank is 33 % over- or under-measurement, while in the large tank the mistake is only 3.3 % either way.
X~Exp(x,θ) prove or disprove that E[x^2]=2 (θ)^2
X~Exp(x,θ) prove or disprove that E[x^2]=2 (θ)^2.
I know the pdf and how to find E[x^2], but I'm not sure how to show whether it equals 2(θ)^2.
For an exponential distribution, $E[X^2]=2{\theta}^{-2}=\dfrac{2}{\theta^2}$
$E[X^2]= \displaystyle \int_0^{\infty} x^2\; \theta\;e^{-\theta{x}}\;dx$
integrate by parts..
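Writing the integration by parts out in full (this step is only hinted at above):

$\displaystyle E[X^2] = \int_0^{\infty} x^2\,\theta e^{-\theta x}\,dx = \Big[-x^2 e^{-\theta x}\Big]_0^{\infty} + \int_0^{\infty} 2x\, e^{-\theta x}\,dx = \frac{2}{\theta}\int_0^{\infty} x\,\theta e^{-\theta x}\,dx = \frac{2}{\theta}\,E[X] = \frac{2}{\theta}\cdot\frac{1}{\theta} = \frac{2}{\theta^2}$

Note that the answer depends on the parameterization: with the rate parameterization $f(x)=\theta e^{-\theta x}$ used above, $E[X^2]=2/\theta^2$, whereas with the mean parameterization $f(x)=\frac{1}{\theta}e^{-x/\theta}$ the same computation gives $E[X^2]=2\theta^2$, in which case the claim is true.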
Ah ha! Well, I was asked to prove or disprove this, so I presume the claim is false then! I'll have to do a counterexample.
Dynamic magnetization switching and spin wave excitations by voltage-induced torque
The effect of electric fields on ultrathin ferromagnetic metal layer is one of the promising approaches for manipulating the spin direction with low-energy consumption, localization, and coherent
behavior. Several experimental approaches to realize it have been investigated using ferromagnetic semiconductors [1], magnetostriction together with piezo-electric materials [2], multiferroic
materials [3], and ultrathin ferromagnetic layer [4-9]. In this talk, we will present a dynamic control of spins by voltage-induced torque. We used the magnetic tunnel junctions with ultrathin
ferromagnetic layer, which shows voltage-induced perpendicular magnetic anisotropy change. By applying the voltage to the junction, the magnetic easy-axis in the ultrathin ferromagnetic layer changes
from in-plane to out-of-plane, which causes a precession of the spins. This precession results in a two-way toggle switching when an appropriate pulse length is chosen [8]. On the other hand, an
application of rf-voltage causes an excitation of a uniform spin-wave [9]. Since the precession of the spins is accompanied by an oscillation in the resistance of the junction, the applied rf-signal is
rectified and produces a dc-voltage. From the spectrum of the dc-voltage as a function of frequency, we could estimate the voltage-induced torque. [1] H. Ohno, et al., Nature 408, 944-946
(2000), D. Chiba, et al, Science 301, 943-945 (2003). [2] V. Novosad, et al., J. Appl. Phys. 87, 6400-6402 (2000), J. --W. Lee, et al., Appl. Phys. Lett. 82, 2458-2460 (2003). [3] W. Eerenstein, et
al., Nature 442, 759-765 (2006), Y. --H. Chu, et al., Nature Materials 7, 478-482 (2008). [4] M. Weisheit, et al., Science 315, 349-351 (2007). [5] T. Maruyama, et al., Nature Nanotechnology 4,
158-161 (2009). [6] M. Endo, et al., Appl. Phys. Lett. 96, 212503 (2010). [7] D. Chiba, et al., Nature Materials 10, 853 (2011). [8]Y. Shiota, et al., Nature Materials 11, 39 (2012) [9]T. Nozaki, et
al., Nat. Phys. 8, 491 (2012)
Shiota, Yoichi
MathGroup Archive: August 1995 [00088]
Re: Question: how to get Sin[n*Pi]=0 (n integer)
• To: mathgroup at christensen.cybernetics.net
• Subject: [mg1971] Re: Question: how to get Sin[n*Pi]=0 (n integer)
• From: rdieter at mathlab41.unl.edu (Rex Dieter)
• Date: Wed, 30 Aug 1995 01:50:40 -0400
• Organization: University of Nebraska--Lincoln
In article <DDuM67.IuF at wri.com> izovko at dominis.phy.hr (Ilija I Zovko) writes:
> How can one tell Mathematica to simplify Sin[n Pi]=0 or
> Cos[n Pi]=(-1)^n and similar kind of stuff.
That's a good question. Mathematica normally can't make assumptions like "n
is an integer", and I'm not sure how to do this easily. Perhaps try the
following... say you want to simplify an expression "expr" that contains the
constructs you describe above. To get the simplifications you desire, simply
expr /. { Sin[n Pi] -> (0), Cos[n Pi] -> ( (-1)^n ) }
> Also, how does one tell it "A" & "B" are matrices so it doesn't
> commute them (AB.not equal.BA).
Mathematica won't commute things that can't be commuted. I think you might
need to specify that you want matrix multiplication and not plain
(component-wise) multiplication:
A . B matrix multiply (ought not to commute)
A * B component multiply (may commute, because that's ok)
A + B addition (may commute...)
Rex A. Dieter rdieter at math.unl.edu (NeXT/MIME)
Research Associate Voice: (402)472-9747
Department of Mathematics and Statistics FAX: (402)472-8466
University of Nebraska - Lincoln http://www.math.unl.edu/~rdieter/
Name of "slice" category with 2-cells as morphisms ?
I would like to know whether there is a standard name for the following "slice" category: Let $\mathcal{C}$ be a 2-category and $c \in \mathcal{C}$ an object of $\mathcal{C}$. We can form the
category where an object $(d,f)$ is a pair of an object $d\in\mathcal{C}$ and an arrow $f : d\to c$. A morphism $(h, \alpha)$ from $(d,f)$ to $(e,g)$ is given by $h : d \to e$ and a 2-cell $\alpha :
f \Rightarrow g \circ h$.
Thanks a lot, ben
[Edit: fixed typo mentioned by Martin]
I expect that you mean $h : d \to e$. Well I think that this is just the correct notion of slice category in the context of $2$-categories. I would call it (and actually have called it in one of my
texts) slice category and denote it by $C / c$. – Martin Brandenburg Jun 24 '11 at 12:47
1 Answer
See this answer to much the same question. I would call this the 'lax' slice category, although it's not so common a notion that everyone would know what you meant, so maybe you should
keep the scare quotes around 'lax'.
A propos of Martin's comment, the correct notion of slice 2-category depends on what you're doing -- you might want the strict version, with strictly commuting triangles, or the pseudo version, with invertible 2-cells (this is the strictest one that makes sense for non-strict 2-categories), or this lax version. Or you might want to restrict to (discrete) (op)fibrations as objects.
You're right, there is some ambiguity. But instead of introducing an extra notion for the case where everything invertible, I would just use $(C/c)_{cart}$ with the above definition of
$C/c$. – Martin Brandenburg Jun 24 '11 at 15:56
I don't think there's any real ambiguity -- what I meant is that there's more than one sensible notion of 'slice 2-category', and which one is correct depends on what you want it for.
But yes, they're all contained in the lax slice, so you could take that as fundamental. – Finn Lawler Jun 24 '11 at 18:37
I feel fairly strongly that the unqualified term "slice 2-category" should refer to the pseudo version. Lax objects are generally less well-behaved and less fundamental than pseudo
ones, and while they have specialized uses, the central role in the theory is usually played by the pseudo version. – Mike Shulman Jun 24 '11 at 21:32
A Race: Rolling Down a Ramp
We have three objects, a solid disk, a ring, and a solid sphere. If we release them from rest at the top of an incline, which object will win the race? Assume the objects roll down the ramp without
1. The sphere
2. The ring
3. The disk
4. Three-way tie
5. Can't tell - it depends on mass and/or radius.
We can take the winner of this race, if there is one, and race it against a slippery block that slides down the ramp with negligible friction and see which one wins that race.
To analyze the rolling race, let's take an object with a mass M and a radius R, and a moment of inertia I = cMR^2. Consider the free-body diagram of such an object.
If there was no friction the object would slide down the ramp without rotating. Friction opposes this motion, so it must be directed up the slope. It's static friction because the object rolls
without slipping.
Because the force of gravity and the normal force pass through the center of the object, the frictional force is the only force producing a torque about the center of the object - that's why the
object rotates.
We'll analyze the problem it from two different perspectives.
Perspective 1 - Energy Conservation
Because the object does not slip as it rolls, there is no loss of mechanical energy. As it rolls, the object is experiencing a combination of straight-line and rotational motion. Applying
conservation of mechanical energy:
U[i] + K[i] = U[f] + K[f]
Take the bottom of the ramp to be the zero for the gravitational potential energy. If the object is released from a height h, the initial potential energy is Mgh.
The initial kinetic energy is zero. The final kinetic energy is made up of translational and rotational kinetic energies.
Mgh = ½ Mv^2 + ½ Iω^2
Plugging in I = cMR^2:
Mgh = ½ Mv^2 + ½ cMR^2 ω^2
Multiply both sides by 2, and cancel the mass (in other words mass doesn't matter):
2gh = v^2 + cR^2 ω^2
For rolling without slipping, the relationship between the velocity of the center-of-mass and the angular velocity is:
ω = v/R
Substituting this in, the factors of R cancel, so the size of the object doesn't matter. This gives:
2gh = v^2 + c v^2, which gives v^2 = 2gh / (1 + c)
So, the larger the value of c, the smaller the speed is. For our different objects we have:
ring c = 1
disk c = 1/2
solid sphere c = 2/5
So, the sphere wins the race.
Perspective 2 - Forces and Torques
The free-body diagram of the object shows two forces parallel to the slope. If the angle of the incline is φ, the component of the force of gravity acting down the slope is Mg sin(φ). f[s], the force
of static friction, acts up the slope.
Summing forces in this direction, with positive down the slope, gives:
ΣF = Ma
Mg sin(φ) - f[s] = Ma
With both the normal force and the force of gravity passing through the center-of-mass of the object, summing torques about the center of mass gives:
Στ = Iα
f[s]R = Iα
I = cMR^2 and, for rolling without slipping, α = a/R.
The torque equation becomes:
f[s]R = cMR^2 (a/R)
The factors of R cancel, leaving:
f[s] = cMa
Substituting this into the force equation gives:
Mg sin(φ) - cMa = Ma
The factors of mass cancel. Solving the equation for acceleration gives:
a = g sin(φ) / (1 + c)
When starting from rest, v = at, so both acceleration and velocity are reduced by a factor of 1+c from what they would be if the object slid down the ramp with no friction. This is why the object
that slides without friction wins the race against any of the rolling objects.
Whichever way you analyze it, the object with the smallest value of c in I = cMR^2 wins the race. The mass and radius don't make any difference - they canceled out in the equations.
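As a quick numerical check (this code sketch is an addition to the page, not part of it), here are the accelerations for the three shapes and for a frictionless slider on a 30 degree ramp:

    #include <stdio.h>
    #include <math.h>

    /* a = g sin(phi) / (1 + c) for a rolling object with I = c M R^2,
       as derived above; mass and radius drop out. */
    static double accel(double c, double phi_rad)
    {
        const double g = 9.8;   /* m/s^2 */
        return g * sin(phi_rad) / (1.0 + c);
    }

    int main(void)
    {
        const double phi = 30.0 * 3.14159265358979 / 180.0;
        printf("slider (c = 0):   a = %.3f m/s^2\n", accel(0.0, phi));
        printf("sphere (c = 2/5): a = %.3f m/s^2\n", accel(0.4, phi));
        printf("disk   (c = 1/2): a = %.3f m/s^2\n", accel(0.5, phi));
        printf("ring   (c = 1):   a = %.3f m/s^2\n", accel(1.0, phi));
        return 0;
    }

The printed order (slider, sphere, disk, ring) is exactly the finishing order of the races described above.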
CRC primer by Ross Williams
CRC ERROR DETECTION ALGORITHMS
"Everything you wanted to know about CRC algorithms, but were afraid
to ask for fear that errors in your understanding might be detected."
Version : 3.
Date : 19 August 1993.
Author : Ross N. Williams.
Net : ross@guest.adelaide.edu.au.
FTP : ftp.adelaide.edu.au/pub/rocksoft/crc_v3.txt
Company : Rocksoft^tm Pty Ltd.
Snail : 16 Lerwick Avenue, Hazelwood Park 5066, Australia.
Fax : +61 8 373-4911 (c/- Internode Systems Pty Ltd).
Phone : +61 8 379-9217 (10am to 10pm Adelaide Australia time).
Note : "Rocksoft" is a trademark of Rocksoft Pty Ltd, Australia.
Status : Copyright (C) Ross Williams, 1993. However, permission is
granted to make and distribute verbatim copies of this
document provided that this information block and copyright
notice is included. Also, the C code modules included
in this document are fully public domain.
Thanks : Thanks to Jean-loup Gailly (jloup@chorus.fr) and Mark Adler
(me@quest.jpl.nasa.gov) who both proof read this document
and picked out lots of nits as well as some big fat bugs.
Table of Contents
1. Introduction: Error Detection
2. The Need For Complexity
3. The Basic Idea Behind CRC Algorithms
4. Polynomial Arithmetic
5. Binary Arithmetic with No Carries
6. A Fully Worked Example
7. Choosing A Poly
8. A Straightforward CRC Implementation
9. A Table-Driven Implementation
10. A Slightly Mangled Table-Driven Implementation
11. "Reflected" Table-Driven Implementations
12. "Reversed" Polys
13. Initial and Final Values
14. Defining Algorithms Absolutely
15. A Parameterized Model For CRC Algorithms
16. A Catalog of Parameter Sets for Standards
17. An Implementation of the Model Algorithm
18. Roll Your Own Table-Driven Implementation
19. Generating A Lookup Table
20. Summary
21. Corrections
A. Glossary
B. References
C. References I Have Detected But Haven't Yet Sighted
This document explains CRCs (Cyclic Redundancy Codes) and their
table-driven implementations in full, precise detail. Much of the
literature on CRCs, and in particular on their table-driven
implementations, is a little obscure (or at least seems so to me).
This document is an attempt to provide a clear and simple no-nonsense
explanation of CRCs and to absolutely nail down every detail of the
operation of their high-speed implementations. In addition to this,
this document presents a parameterized model CRC algorithm called the
"Rocksoft^tm Model CRC Algorithm". The model algorithm can be
parameterized to behave like most of the CRC implementations around,
and so acts as a good reference for describing particular algorithms.
A low-speed implementation of the model CRC algorithm is provided in
the C programming language. Lastly there is a section giving two forms
of high-speed table driven implementations, and providing a program
that generates CRC lookup tables.
1. Introduction: Error Detection
The aim of an error detection technique is to enable the receiver of a
message transmitted through a noisy (error-introducing) channel to
determine whether the message has been corrupted. To do this, the
transmitter constructs a value (called a checksum) that is a function
of the message, and appends it to the message. The receiver can then
use the same function to calculate the checksum of the received
message and compare it with the appended checksum to see if the
message was correctly received. For example, if we chose a checksum
function which was simply the sum of the bytes in the message mod 256
(i.e. modulo 256), then it might go something as follows. All numbers
are in decimal.
Message : 6 23 4
Message with checksum : 6 23 4 33
Message after transmission : 6 27 4 33
In the above, the second byte of the message was corrupted from 23 to
27 by the communications channel. However, the receiver can detect
this by comparing the transmitted checksum (33) with the computer
checksum of 37 (6 + 27 + 4). If the checksum itself is corrupted, a
correctly transmitted message might be incorrectly identified as a
corrupted one. However, this is a safe-side failure. A dangerous-side
failure occurs where the message and/or checksum is corrupted in a
manner that results in a transmission that is internally consistent.
Unfortunately, this possibility is completely unavoidable and the best
that can be done is to minimize its probability by increasing the
amount of information in the checksum (e.g. widening the checksum from
one byte to two bytes).
Other error detection techniques exist that involve performing complex
transformations on the message to inject it with redundant
information. However, this document addresses only CRC algorithms,
which fall into the class of error detection algorithms that leave the
data intact and append a checksum on the end. i.e.:
<original intact message> <checksum>
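As a concrete illustration (this snippet is an addition to the original text), the summing checksum from the example above can be written in C as follows:

       #include <stdio.h>

       /* Sum-of-bytes checksum, mod 256. (Too weak for real use;
          see the next section for why.) */
       unsigned char sum_checksum(const unsigned char *msg, int len)
       {
          unsigned char sum = 0;
          while (len--) sum += *msg++;   /* unsigned char wraps mod 256 */
          return sum;
       }

       int main(void)
       {
          unsigned char ok[]  = {6, 23, 4};
          unsigned char bad[] = {6, 27, 4};        /* corrupted second byte */
          printf("%d\n", sum_checksum(ok,  3));    /* prints 33 */
          printf("%d\n", sum_checksum(bad, 3));    /* prints 37 */
          return 0;
       }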
2. The Need For Complexity
In the checksum example in the previous section, we saw how a
corrupted message was detected using a checksum algorithm that simply
sums the bytes in the message mod 256:
Message : 6 23 4
Message with checksum : 6 23 4 33
Message after transmission : 6 27 4 33
A problem with this algorithm is that it is too simple. If a number of
random corruptions occur, there is a 1 in 256 chance that they will
not be detected. For example:
Message : 6 23 4
Message with checksum : 6 23 4 33
Message after transmission : 8 20 5 33
To strengthen the checksum, we could change from an 8-bit register to
a 16-bit register (i.e. sum the bytes mod 65536 instead of mod 256) so
as to apparently reduce the probability of failure from 1/256 to
1/65536. While basically a good idea, it fails in this case because
the formula used is not sufficiently "random"; with a simple summing
formula, each incoming byte affects roughly only one byte of the
summing register no matter how wide it is. For example, in the second
example above, the summing register could be a Megabyte wide, and the
error would still go undetected. This problem can only be solved by
replacing the simple summing formula with a more sophisticated formula
that causes each incoming byte to have an effect on the entire
checksum register.
Thus, we see that at least two aspects are required to form a strong
checksum function:
WIDTH: A register width wide enough to provide a low a-priori
probability of failure (e.g. 32-bits gives a 1/2^32 chance
of failure).
CHAOS: A formula that gives each input byte the potential to change
any number of bits in the register.
Note: The term "checksum" was presumably used to describe early
summing formulas, but has now taken on a more general meaning
encompassing more sophisticated algorithms such as the CRC ones. The
CRC algorithms to be described satisfy the second condition very well,
and can be configured to operate with a variety of checksum widths.
3. The Basic Idea Behind CRC Algorithms
Where might we go in our search for a more complex function than
summing? All sorts of schemes spring to mind. We could construct
tables using the digits of pi, or hash each incoming byte with all the
bytes in the register. We could even keep a large telephone book
on-line, and use each incoming byte combined with the register bytes
to index a new phone number which would be the next register value.
The possibilities are limitless.
However, we do not need to go so far; the next arithmetic step
suffices. While addition is clearly not strong enough to form an
effective checksum, it turns out that division is, so long as the
divisor is about as wide as the checksum register.
The basic idea of CRC algorithms is simply to treat the message as an
enormous binary number, to divide it by another fixed binary number,
and to make the remainder from this division the checksum. Upon
receipt of the message, the receiver can perform the same division and
compare the remainder with the "checksum" (transmitted remainder).
Example: Suppose that the message consisted of the two bytes (6,23) as
in the previous example. These can be considered to be the hexadecimal
number 0617 which can be considered to be the binary number
0000-0110-0001-0111. Suppose that we use a checksum register one-byte
wide and use a constant divisor of 1001, then the checksum is the
remainder after 0000-0110-0001-0111 is divided by 1001. While in this
case, this calculation could obviously be performed using common
garden variety 32-bit registers, in the general case this is messy. So
instead, we'll do the division using good-'ol long division which you
learnt in school (remember?). Except this time, it's in binary:
          ...0000010101101 = 00AD =  173 = QUOTIENT
          ----------------
9= 1001 ) 0000011000010111 = 0617 = 1559 = DIVIDEND
DIVISOR       (long-division steps omitted)
                      0010 =   02 =    2 = REMAINDER
In decimal this is "1559 divided by 9 is 173 with a remainder of 2".
Although the effect of each bit of the input message on the quotient
is not all that significant, the 4-bit remainder gets kicked about
quite a lot during the calculation, and if more bytes were added to
the message (dividend) its value could change radically again very
quickly. This is why division works where addition doesn't.
In case you're wondering, using this 4-bit checksum the transmitted
message would look like this (in hexadecimal): 06172 (where the 0617
is the message and the 2 is the checksum). The receiver would divide
0617 by 9 and see whether the remainder was 2.
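For a register this narrow, ordinary integer arithmetic can verify the example (again, an illustrative addition to the text):

       #include <stdio.h>

       int main(void)
       {
          unsigned int message = 0x0617;   /* 1559 decimal                */
          printf("%u\n", message % 9);     /* prints 2, so transmit 06172 */
          return 0;
       }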
4. Polynomial Arithmetic
While the division scheme described in the previous section is very
very similar to the checksumming schemes called CRC schemes, the CRC
schemes are in fact a bit weirder, and we need to delve into some
strange number systems to understand them.
The word you will hear all the time when dealing with CRC algorithms
is the word "polynomial". A given CRC algorithm will be said to be
using a particular polynomial, and CRC algorithms in general are said
to be operating using polynomial arithmetic. What does this mean?
Instead of the divisor, dividend (message), quotient, and remainder
(as described in the previous section) being viewed as positive
integers, they are viewed as polynomials with binary coefficients.
This is done by treating each number as a bit-string whose bits are
the coefficients of a polynomial. For example, the ordinary number 23
(decimal) is 17 (hex) and 10111 binary and so it corresponds to the
1*x^4 + 0*x^3 + 1*x^2 + 1*x^1 + 1*x^0
or, more simply:
x^4 + x^2 + x^1 + x^0
Using this technique, the message, and the divisor can be represented
as polynomials and we can do all our arithmetic just as before, except
that now it's all cluttered up with Xs. For example, suppose we wanted
to multiply 1101 by 1011. We can do this simply by multiplying the
(x^3 + x^2 + x^0)(x^3 + x^1 + x^0)
= (x^6 + x^4 + x^3
+ x^5 + x^3 + x^2
+ x^3 + x^1 + x^0) = x^6 + x^5 + x^4 + 3*x^3 + x^2 + x^1 + x^0
At this point, to get the right answer, we have to pretend that x is 2
and propagate binary carries from the 3*x^3 yielding
x^7 + x^3 + x^2 + x^1 + x^0
It's just like ordinary arithmetic except that the base is abstracted
and brought into all the calculations explicitly instead of being
there implicitly. So what's the point?
The point is that IF we pretend that we DON'T know what x is, we CAN'T
perform the carries. We don't know that 3*x^3 is the same as x^4 + x^3
because we don't know that x is 2. In this true polynomial arithmetic
the relationship between all the coefficients is unknown and so the
coefficients of each power effectively become strongly typed;
coefficients of x^2 are effectively of a different type to
coefficients of x^3.
With the coefficients of each power nicely isolated, mathematicians
came up with all sorts of different kinds of polynomial arithmetics
simply by changing the rules about how coefficients work. Of these
schemes, one in particular is relevant here, and that is a polynomial
arithmetic where the coefficients are calculated MOD 2 and there is no
carry; all coefficients must be either 0 or 1 and no carries are
calculated. This is called "polynomial arithmetic mod 2". Thus,
returning to the earlier example:
(x^3 + x^2 + x^0)(x^3 + x^1 + x^0)
= (x^6 + x^4 + x^3
+ x^5 + x^3 + x^2
+ x^3 + x^1 + x^0)
= x^6 + x^5 + x^4 + 3*x^3 + x^2 + x^1 + x^0
Under the other arithmetic, the 3*x^3 term was propagated using the
carry mechanism using the knowledge that x=2. Under "polynomial
arithmetic mod 2", we don't know what x is, there are no carries, and
all coefficients have to be calculated mod 2. Thus, the result
= x^6 + x^5 + x^4 + x^3 + x^2 + x^1 + x^0
As Knuth [Knuth81] says (p.400):
"The reader should note the similarity between polynomial
arithmetic and multiple-precision arithmetic (Section 4.3.1), where
the radix b is substituted for x. The chief difference is that the
coefficient u_k of x^k in polynomial arithmetic bears little or no
relation to its neighboring coefficients x^{k-1} [and x^{k+1}], so
the idea of "carrying" from one place to another is absent. In fact
polynomial arithmetic modulo b is essentially identical to multiple
precision arithmetic with radix b, except that all carries are
suppressed."
Thus polynomial arithmetic mod 2 is just binary arithmetic mod 2 with
no carries. While polynomials provide useful mathematical machinery in
more analytical approaches to CRC and error-correction algorithms, for
the purposes of exposition they provide no extra insight and some
encumbrance and have been discarded in the remainder of this document
in favour of direct manipulation of the arithmetical system with which
they are isomorphic: binary arithmetic with no carry.
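To make the isomorphism concrete (this snippet is an addition to the text), here is the multiplication worked above, done as a carry-suppressed multiply in C:

       #include <stdio.h>

       /* Multiply two bit-strings as polynomials mod 2: shift and XOR,
          never carry. */
       unsigned int clmul(unsigned int a, unsigned int b)
       {
          unsigned int r = 0;
          while (b)
            {
             if (b & 1) r ^= a;
             a <<= 1;
             b >>= 1;
            }
          return r;
       }

       int main(void)
       {
          /* 1101 * 1011 (x^3+x^2+1 times x^3+x+1) = 1111111. */
          printf("%X\n", clmul(0xD, 0xB));   /* prints 7F */
          return 0;
       }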
5. Binary Arithmetic with No Carries
Having dispensed with polynomials, we can focus on the real arithmetic
issue, which is that all the arithmetic performed during CRC
calculations is performed in binary with no carries. Often this is
called polynomial arithmetic, but as I have declared the rest of this
document a polynomial free zone, we'll have to call it CRC arithmetic
instead. As this arithmetic is a key part of CRC calculations, we'd
better get used to it. Here we go:
Adding two numbers in CRC arithmetic is the same as adding numbers in
ordinary binary arithmetic except there is no carry. This means that
each pair of corresponding bits determine the corresponding output bit
without reference to any other bit positions. For example:

        10011011
      + 11001010
        --------
        01010001

There are only four cases for each bit position:

   0+0=0
   0+1=1
   1+0=1
   1+1=0  (no carry)

Subtraction is identical:

   0-0=0
   0-1=1  (wraparound)
   1-0=1
   1-1=0
In fact, both addition and subtraction in CRC arithmetic are equivalent
to the XOR operation, and the XOR operation is its own inverse. This
effectively reduces the operations of the first level of power
(addition, subtraction) to a single operation that is its own inverse.
This is a very convenient property of the arithmetic.
By collapsing of addition and subtraction, the arithmetic discards any
notion of magnitude beyond the power of its highest one bit. While it
seems clear that 1010 is greater than 10, it is no longer the case
that 1010 can be considered to be greater than 1001. To see this, note
that you can get from 1010 to 1001 by both adding and subtracting the
same quantity:
1001 = 1010 + 0011
1001 = 1010 - 0011
This makes nonsense of any notion of order.
Having defined addition, we can move to multiplication and division.
Multiplication is absolutely straightforward, being the sum of the
first number, shifted in accordance with the second number.
        1101
      x 1011
        ----
        1101
       1101.
      0000..
     1101...
     -------
     1111111  Note: The sum uses CRC addition
Division is a little messier as we need to know when "a number goes
into another number". To do this, we invoke the weak definition of
magnitude defined earlier: that X is greater than or equal to Y iff
the position of the highest 1 bit of X is the same or greater than the
position of the highest 1 bit of Y. Here's a fully worked division
(nicked from [Tanenbaum81]).
               1100001010
           ----------------
   10011 ) 11010110110000
           10011.........
           --------------
           01001110110000
            10011........
           --------------
           00000010110000
                 10011...
           --------------
           00000000101000
                   10011.
           --------------
           00000000001110
                     1110 = Remainder
That's really it. Before proceeding further, however, it's worth our
while playing with this arithmetic a bit to get used to it.
We've already played with addition and subtraction, noticing that they
are the same thing. Here, though, we should note that in this
arithmetic A+0=A and A-0=A. This obvious property is very useful
In dealing with CRC multiplication and division, it's worth getting a
feel for the concepts of MULTIPLE and DIVISIBLE.
If a number A is a multiple of B then what this means in CRC
arithmetic is that it is possible to construct A from zero by XORing
in various shifts of B. For example, if A was 0111010110 and B was 11,
we could construct A from B as follows:
  0111010110
= .......11.
+ ....11....
+ ...11.....
+ .11.......
However, if A is 0111010111, it is not possible to construct it out of
various shifts of B (can you see why? - see later) so it is said to be
not divisible by B in CRC arithmetic.
Thus we see that CRC arithmetic is primarily about XORing particular
values at various shifting offsets.
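A quick check of the example above (an illustrative addition to the text; 0111010110 is 0x1D6 and 11 is 0x3):

       #include <stdio.h>

       int main(void)
       {
          /* XOR together the four shifted copies of 11 shown above. */
          unsigned int a = (0x3 << 1) ^ (0x3 << 4) ^ (0x3 << 5) ^ (0x3 << 7);
          printf("%s\n", a == 0x1D6 ? "0111010110 is a multiple of 11"
                                    : "mismatch");
          return 0;
       }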
6. A Fully Worked Example
Having defined CRC arithmetic, we can now frame a CRC calculation as
simply a division, because that's all it is! This section fills in the
details and gives an example.
To perform a CRC calculation, we need to choose a divisor. In maths
marketing speak the divisor is called the "generator polynomial" or
simply the "polynomial", and is a key parameter of any CRC algorithm.
It would probably be more friendly to call the divisor something else,
but the poly talk is so deeply ingrained in the field that it would
now be confusing to avoid it. As a compromise, we will refer to the
CRC polynomial as the "poly". Just think of this number as a sort of
parrot. "Hello poly!"
You can choose any poly and come up with a CRC algorithm. However,
some polys are better than others, and so it is wise to stick with the
tried and tested ones. A later section addresses this issue.
The width (position of the highest 1 bit) of the poly is very
important as it dominates the whole calculation. Typically, widths of
16 or 32 are chosen so as to simplify implementation on modern
computers. The width of a poly is the actual bit position of the
highest bit. For example, the width of 10011 is 4, not 5. For the
purposes of example, we will choose a poly of 10011 (of width W of 4).
Having chosen a poly, we can proceed with the calculation. This is
simply a division (in CRC arithmetic) of the message by the poly. The
only trick is that W zero bits are appended to the message before the
CRC is calculated. Thus we have:
Original message : 1101011011
Poly : 10011
Message after appending W zeros : 11010110110000
Now we simply divide the augmented message by the poly using CRC
arithmetic. This is the same division as before:
            1100001010 = Quotient (nobody cares about the quotient)
        ----------------
10011 ) 11010110110000 = Augmented message (1101011011 + 0000)
=Poly   10011.........
        --------------
        01001110110000
         10011........
        --------------
        00000010110000
              10011...
        --------------
        00000000101000
                10011.
        --------------
        00000000001110
                  1110 = Remainder = THE CHECKSUM!!!!
The division yields a quotient, which we throw away, and a remainder,
which is the calculated checksum. This ends the calculation.
Usually, the checksum is then appended to the message and the result
transmitted. In this case the transmission would be: 11010110111110.
At the other end, the receiver can do one of two things:
a. Separate the message and checksum. Calculate the checksum for
the message (after appending W zeros) and compare the two
b. Checksum the whole lot (without appending zeros) and see if it
comes out as zero!
These two options are equivalent. However, in the next section, we
will be assuming option b because it is marginally mathematically cleaner.
A summary of the operation of the class of CRC algorithms:
1. Choose a width W, and a poly G (of width W).
2. Append W zero bits to the message. Call this M'.
3. Divide M' by G using CRC arithmetic. The remainder is the checksum.
That's all there is to it.
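If the augmented message fits in a machine word, the whole recipe fits in a few lines of C. This sketch (an addition to the text) reproduces the worked example:

       #include <stdio.h>

       static int top_bit(unsigned int v)   /* position of the highest 1 bit */
       {
          int pos = -1;
          while (v) { pos++; v >>= 1; }
          return pos;
       }

       int main(void)
       {
          unsigned int poly = 0x13;          /* 10011, so W = 4     */
          int          w    = 4;
          unsigned int a    = 0x35B << w;    /* 1101011011 + 0000   */

          /* CRC long division: XOR the poly in under each leading 1. */
          while (top_bit(a) >= w)
             a ^= poly << (top_bit(a) - w);

          printf("%X\n", a);                 /* prints E, i.e. 1110 */
          return 0;
       }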
7. Choosing A Poly
Choosing a poly is somewhat of a black art and the reader is referred
to [Tanenbaum81] (p.130-132) which has a very clear discussion of this
issue. This section merely aims to put the fear of death into anyone
who so much as toys with the idea of making up their own poly. If you
don't care about why one poly might be better than another and just
want to find out about high-speed implementations, choose one of the
arithmetically sound polys listed at the end of this section and skip
to the next section.
First note that the transmitted message T is a multiple of the poly.
To see this, note that 1) the last W bits of T is the remainder after
dividing the augmented (by zeros remember) message by the poly, and 2)
addition is the same as subtraction so adding the remainder pushes the
value up to the next multiple. Now note that if the transmitted
message is corrupted in transmission that we will receive T+E where E
is an error vector (and + is CRC addition (i.e. XOR)). Upon receipt of
this message, the receiver divides T+E by G. As T mod G is 0, (T+E)
mod G = E mod G. Thus, the capacity of the poly we choose to catch
particular kinds of errors will be determined by the set of multiples
of G, for any corruption E that is a multiple of G will be undetected.
Our task then is to find classes of G whose multiples look as little
like the kind of line noise (that will be creating the corruptions) as
possible. So let's examine the kinds of line noise we can expect.
SINGLE BIT ERRORS: A single bit error means E=1000...0000. We can
ensure that this class of error is always detected by making sure that
G has at least two bits set to 1. Any multiple of G will be
constructed using shifting and adding and it is impossible to
construct a value with a single bit by shifting and adding a single
value with more than one bit set, as the two end bits will always be set.
TWO-BIT ERRORS: To detect all errors of the form 100...000100...000
(i.e. E contains two 1 bits) choose a G that does not have multiples
that are 11, 101, 1001, 10001, 100001, etc. It is not clear to me how
one goes about doing this (I don't have the pure maths background),
but Tanenbaum assures us that such G do exist, and cites G with 1 bits
(15,14,1) turned on as an example of one G that won't divide anything
less than 1...1 where ... is 32767 zeros.
ERRORS WITH AN ODD NUMBER OF BITS: We can catch all corruptions where
E has an odd number of bits by choosing a G that has an even number of
bits. To see this, note that 1) CRC multiplication is simply XORing a
constant value into a register at various offsets, 2) XORing is simply
a bit-flip operation, and 3) if you XOR a value with an even number of
bits into a register, the oddness of the number of 1 bits in the
register remains invariant. Example: Starting with E=111, attempt to
flip all three bits to zero by the repeated application of XORing in
11 at one of the two offsets (i.e. "E=E XOR 011" and "E=E XOR 110")
This is nearly isomorphic to the "glass tumblers" party puzzle where
you challenge someone to flip three tumblers by the repeated
application of the operation of flipping any two. Most of the popular
CRC polys contain an even number of 1 bits. (Note: Tanenbaum states
more specifically that all errors with an odd number of bits can be
caught by making G a multiple of 11).
BURST ERRORS: A burst error looks like E=000...000111...11110000...00.
That is, E consists of all zeros except for a run of 1s somewhere
inside. This can be recast as E=(10000...00)(1111111...111) where
there are z zeros in the LEFT part and n ones in the RIGHT part. To
catch errors of this kind, we simply set the lowest bit of G to 1.
Doing this ensures that LEFT cannot be a factor of G. Then, so long as
G is wider than RIGHT, the error will be detected. See Tanenbaum for a
clearer explanation of this; I'm a little fuzzy on this one. Note:
Tanenbaum asserts that the probability of a burst of length greater
than W getting through is (0.5)^W.
That concludes the section on the fine art of selecting polys.
Some popular polys are:
16 bits: (16,12,5,0) [X25 standard]
(16,15,2,0) ["CRC-16"]
32 bits: (32,26,23,22,16,12,11,10,8,7,5,4,2,1,0) [Ethernet]
8. A Straightforward CRC Implementation
That's the end of the theory; now we turn to implementations. To start
with, we examine an absolutely straight-down-the-middle boring
straightforward low-speed implementation that doesn't use any speed
tricks at all. We'll then transform that program progessively until we
end up with the compact table-driven code we all know and love and
which some of us would like to understand.
To implement a CRC algorithm all we have to do is implement CRC
division. There are two reasons why we cannot simply use the divide
instruction of whatever machine we are on. The first is that we have
to do the divide in CRC arithmetic. The second is that the dividend
might be ten megabytes long, and todays processors do not have
registers that big.
So to implement CRC division, we have to feed the message through a
division register. At this point, we have to be absolutely precise
about the message data. In all the following examples the message will
be considered to be a stream of bytes (each of 8 bits) with bit 7 of
each byte being considered to be the most significant bit (MSB). The
bit stream formed from these bytes will be the bit stream with the MSB
(bit 7) of the first byte first, going down to bit 0 of the first
byte, and then the MSB of the second byte and so on.
With this in mind, we can sketch an implementation of the CRC
division. For the purposes of example, consider a poly with W=4 and
the poly=10111. Then, to perform the division, we need to use a 4-bit
register:

                  3   2   1   0   Bits
                +---+---+---+---+
       Pop! <-- |   |   |   |   | <----- Augmented message
                +---+---+---+---+

              1   0   1   1   1   = The Poly
(Reminder: The augmented message is the message followed by W zero bits.)
To perform the division perform the following:
Load the register with zero bits.
Augment the message by appending W zero bits to the end of it.
While (more message bits)
Shift the register left by one bit, reading the next bit of the
augmented message into register bit position 0.
If (a 1 bit popped out of the register during step 3)
Register = Register XOR Poly.
The register now contains the remainder.
(Note: In practice, the IF condition can be tested by testing the top
bit of R before performing the shift.)
We will call this algorithm "SIMPLE".
This might look a bit messy, but all we are really doing is
"subtracting" various powers (i.e. shiftings) of the poly from the
message until there is nothing left but the remainder. Study the
manual examples of long division if you don't understand this.
It should be clear that the above algorithm will work for any width W.
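Here is one possible C rendering of SIMPLE; it is an illustrative addition, limited to messages short enough to fit in a machine word (a real implementation would read bits from a byte stream as described above):

       #include <stdio.h>

       /* SIMPLE: bit-at-a-time CRC division. msg holds nbits message
          bits (MSB first); poly has width w (its top bit is implicit). */
       unsigned int crc_simple(unsigned int msg, int nbits,
                               unsigned int poly, int w)
       {
          unsigned int reg  = 0;
          unsigned int mask = (1u << w) - 1;
          int i;
          msg <<= w;                          /* append W zero bits   */
          for (i = nbits + w - 1; i >= 0; i--)
            {
             int pop = (reg >> (w - 1)) & 1;  /* bit about to pop out */
             reg = ((reg << 1) | ((msg >> i) & 1)) & mask;
             if (pop) reg ^= poly & mask;     /* "subtract" the poly  */
            }
          return reg;
       }

       int main(void)
       {
          /* Message 1101011011, poly 10011 (W=4): checksum 1110. */
          printf("%X\n", crc_simple(0x35B, 10, 0x13, 4));   /* prints E */
          return 0;
       }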
9. A Table-Driven Implementation
The SIMPLE algorithm above is a good starting point because it
corresponds directly to the theory presented so far, and because it is
so SIMPLE. However, because it operates at the bit level, it is rather
awkward to code (even in C), and inefficient to execute (it has to
loop once for each bit). To speed it up, we need to find a way to
enable the algorithm to process the message in units larger than one
bit. Candidate quantities are nibbles (4 bits), bytes (8 bits), words
(16 bits) and longwords (32 bits) and higher if we can achieve it. Of
these, 4 bits is best avoided because it does not correspond to a byte
boundary. At the very least, any speedup should allow us to operate at
byte boundaries, and in fact most of the table driven algorithms
operate a byte at a time.
For the purposes of discussion, let us switch from a 4-bit poly to a
32-bit one. Our register looks much the same, except the boxes
represent bytes instead of bits, and the Poly is 33 bits (one implicit
1 bit at the top and 32 "active" bits) (W=32).
                   3    2    1    0   Bytes
                +----+----+----+----+
       Pop! <-- |    |    |    |    | <----- Augmented message
                +----+----+----+----+

              1 <------32 bits------>
The SIMPLE algorithm is still applicable. Let us examine what it does.
Imagine that the SIMPLE algorithm is in full swing and consider the
top 8 bits of the 32-bit register (byte 3) to have the values:
t7 t6 t5 t4 t3 t2 t1 t0
In the next iteration of SIMPLE, t7 will determine whether the Poly
will be XORed into the entire register. If t7=1, this will happen,
otherwise it will not. Suppose that the top 8 bits of the poly are g7
g6.. g0, then after the next iteration, the top byte will be:
t6 t5 t4 t3 t2 t1 t0 ??
+ t7 * (g7 g6 g5 g4 g3 g2 g1 g0) [Reminder: + is XOR]
The NEW top bit (that will control what happens in the next iteration)
now has the value t6 + t7*g7. The important thing to notice here is
that from an informational point of view, all the information required
to calculate the NEW top bit was present in the top TWO bits of the
original top byte. Similarly, the NEXT top bit can be calculated in
advance SOLELY from the top THREE bits t7, t6, and t5. In fact, in
general, the value of the top bit in the register in k iterations can
be calculated from the top k bits of the register. Let us take this
for granted for a moment.
Consider for a moment that we use the top 8 bits of the register to
calculate the value of the top bit of the register during the next 8
iterations. Suppose that we drive the next 8 iterations using the
calculated values (which we could perhaps store in a single byte
register and shift out to pick off each bit). Then we note three
* The top byte of the register now doesn't matter. No matter how
many times and at what offset the poly is XORed to the top 8
bits, they will all be shifted out the right hand side during the
next 8 iterations anyway.
* The remaining bits will be shifted left one position and the
rightmost byte of the register will be shifted in the next byte
* While all this is going on, the register will be subjected to a
series of XOR's in accordance with the bits of the pre-calculated
control byte.
Now consider the effect of XORing in a constant value at various
offsets to a register. For example:
       0100010 Register
       ...0110 XOR this
       ..0110. XOR this
       0110... XOR this
       -------
       0011000 Result

(Here the three XORs have the same net effect as a single XOR with
0111010.)
The point of this is that you can XOR constant values into a register
to your heart's delight, and in the end, there will exist a value
which when XORed in with the original register will have the same
effect as all the other XORs.
Perhaps you can see the solution now. Putting all the pieces together
we have an algorithm that goes like this:
While (augmented message is not exhausted)
Examine the top byte of the register
Calculate the control byte from the top byte of the register
Sum all the Polys at various offsets that are to be XORed into
the register in accordance with the control byte
Shift the register left by one byte, reading a new message byte
into the rightmost byte of the register
XOR the summed polys to the register
As it stands this is not much better than the SIMPLE algorithm.
However, it turns out that most of the calculation can be precomputed
and assembled into a table. As a result, the above algorithm can be
reduced to:
While (augmented message is not exhausted)
   {
   Top = top_byte(Register);
   Register = (Register << 8) | next_augmessage_byte;
   Register = Register XOR precomputed_table[Top];
   }
There! If you understand this, you've grasped the main idea of
table-driven CRC algorithms. The above is a very efficient algorithm
requiring just a shift, and OR, an XOR, and a table lookup per byte.
Graphically, it looks like this:
3 2 1 0 Bytes
+-----<| | | | | <----- Augmented message
| +----+----+----+----+
| ^
| |
| XOR
| |
| 0+----+----+----+----+ Algorithm
v +----+----+----+----+ ---------
| +----+----+----+----+ 1. Shift the register left by
| +----+----+----+----+ one byte, reading in a new
| +----+----+----+----+ message byte.
| +----+----+----+----+ 2. Use the top byte just rotated
| +----+----+----+----+ out of the register to index
+----->+----+----+----+----+ the table of 256 32-bit values.
+----+----+----+----+ 3. XOR the table value into the
+----+----+----+----+ register.
+----+----+----+----+ 4. Goto 1 iff more augmented
+----+----+----+----+ message bytes.
In C, the algorithm main loop looks like this:
r = 0;
while (len--)
  {
   byte t = (r >> 24) & 0xFF;        /* Grab the control byte.       */
   r = ((r << 8) | *p++) ^ table[t]; /* Shift, read, XOR in table.   */
  }
where len is the length of the augmented message in bytes, p points to
the augmented message, r is the register, t is a temporary, and table
is the computed table. This code can be made even more unreadable as
follows:
r=0; while (len--) r = ((r << 8) | *p++) ^ t[(r >> 24) & 0xFF];
This is a very clean, efficient loop, although not a very obvious one
to the casual observer not versed in CRC theory. We will call this the
TABLE algorithm.
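The table itself has not been presented yet, so as a stopgap here is a
sketch (untested, and using the assumed name precomputed_table) of how
it could be filled for a 32-bit poly: each of the 256 entries is simply
that top-byte value driven through eight iterations of the SIMPLE
algorithm, exactly as argued above. A general-purpose table generator
appears in section 19.

   unsigned long precomputed_table[256];

   void make_table (unsigned long poly)    /* e.g. poly = 0x04C11DB7L */
   {
    int byte,bit;
    for (byte=0; byte<256; byte++)
      {
       /* Start with the byte in the top 8 bits of the register. */
       unsigned long r = ((unsigned long) byte) << 24;
       /* Drive it through 8 iterations of the SIMPLE algorithm. */
       for (bit=0; bit<8; bit++)
          if (r & 0x80000000L)
             r = (r << 1) ^ poly;
          else
             r = (r << 1);
       precomputed_table[byte] = r & 0xFFFFFFFFL;
      }
   }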
10. A Slightly Mangled Table-Driven Implementation
Despite the terse beauty of the line
r=0; while (len--) r = ((r << 8) | *p++) ^ t[(r >> 24) & 0xFF];
those optimizing hackers couldn't leave it alone. The trouble, you
see, is that this loop operates upon the AUGMENTED message and in
order to use this code, you have to append W/8 zero bytes to the end
of the message before pointing p at it. Depending on the run-time
environment, this may or may not be a problem; if the block of data
was handed to us by some other code, it could be a BIG problem. One
alternative is simply to append the following loop after the above
loop, to process the W/8 zero bytes:
for (i=0; i<W/8; i++) r = (r << 8) ^ t[(r >> 24) & 0xFF];
This looks like a sane enough solution to me. However, at the further
expense of clarity (which, you must admit, is already a pretty scarce
commodity in this code) we can reorganize this small loop further so
as to avoid the need to either augment the message with zero bytes, or
to explicitly process zero bytes at the end as above. To explain the
optimization, we return to the processing diagram given earlier.
3 2 1 0 Bytes
+-----<| | | | | <----- Augmented message
| +----+----+----+----+
| ^
| |
| XOR
| |
| 0+----+----+----+----+ Algorithm
v +----+----+----+----+ ---------
| +----+----+----+----+ 1. Shift the register left by
| +----+----+----+----+ one byte, reading in a new
| +----+----+----+----+ message byte.
| +----+----+----+----+ 2. Use the top byte just rotated
| +----+----+----+----+ out of the register to index
+----->+----+----+----+----+ the table of 256 32-bit values.
+----+----+----+----+ 3. XOR the table value into the
+----+----+----+----+ register.
+----+----+----+----+ 4. Goto 1 iff more augmented
+----+----+----+----+ message bytes.
Now, note the following facts:
TAIL: The W/8 augmented zero bytes that appear at the end of the
message will be pushed into the register from the right as all
the other bytes are, but their values (0) will have no effect
whatsoever on the register because 1) XORing with zero does not
change the target byte, and 2) the four bytes are never
propagated out the left side of the register where their
zeroness might have some sort of influence. Thus, the sole
function of the W/8 augmented zero bytes is to drive the
calculation for another W/8 byte cycles so that the end of the
REAL data passes all the way through the register.
HEAD: If the initial value of the register is zero, the first four
iterations of the loop will have the sole effect of shifting in
the first four bytes of the message from the right. This is
because the first 32 control bits are all zero and so nothing is
XORed into the register. Even if the initial value is not zero,
the first 4 byte iterations of the algorithm will have the sole
effect of shifting the first 4 bytes of the message into the
register and then XORing them with some constant value (that is
a function of the initial value of the register).
These facts, combined with the XOR property
(A xor B) xor C = A xor (B xor C)
mean that message bytes need not actually travel through the W/8 bytes
of the register. Instead, they can be XORed into the top byte just
before it is used to index the lookup table. This leads to the
following modified version of the algorithm.
+-----<Message (non augmented)
v 3 2 1 0 Bytes
| +----+----+----+----+
XOR----<| | | | |
| +----+----+----+----+
| ^
| |
| XOR
| |
| 0+----+----+----+----+ Algorithm
v +----+----+----+----+ ---------
| +----+----+----+----+ 1. Shift the register left by
| +----+----+----+----+ one byte, reading in a new
| +----+----+----+----+ message byte.
| +----+----+----+----+ 2. XOR the top byte just rotated
| +----+----+----+----+ out of the register with the
+----->+----+----+----+----+ next message byte to yield an
+----+----+----+----+ index into the table ([0,255]).
+----+----+----+----+ 3. XOR the table value into the
+----+----+----+----+ register.
+----+----+----+----+ 4. Goto 1 iff more message
255+----+----+----+----+ bytes.
Note: The initial register value for this algorithm must be the
initial value of the register for the previous algorithm fed through
the table four times. Note: The table is such that if the previous
algorithm used 0, the new algorithm will too.
This is an IDENTICAL algorithm and will yield IDENTICAL results. The C
code looks something like this:
r=0; while (len--) r = (r<<8) ^ t[(r >> 24) ^ *p++];
and THIS is the code that you are likely to find inside current
table-driven CRC implementations. Some FF masks might have to be ANDed
in here and there for portability's sake, but basically, the above
loop is IT. We will call this the DIRECT TABLE ALGORITHM.
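If you want to convince yourself that the two forms really do agree,
the following throwaway harness (a sketch; I haven't compiled it, and
it assumes the 256-entry table t has already been filled for the poly
in use) runs both loops on the same message and compares the results:

   unsigned char msg[9] = "123456789";
   unsigned char aug[13];                 /* Message + W/8 zero bytes. */
   unsigned long r1 = 0, r2 = 0;
   int i;

   for (i=0; i<9;  i++) aug[i] = msg[i];
   for (i=9; i<13; i++) aug[i] = 0;

   /* TABLE algorithm, driven by the augmented message. */
   for (i=0; i<13; i++)
      r1 = ((r1 << 8) | aug[i]) ^ t[(r1 >> 24) & 0xFF];

   /* DIRECT TABLE algorithm, driven by the raw message. */
   for (i=0; i<9; i++)
      r2 = (r2 << 8) ^ t[((r2 >> 24) ^ msg[i]) & 0xFF];

   /* With a zero initial value, r1 == r2 at this point. */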
During the process of trying to understand all this stuff, I managed
to derive the SIMPLE algorithm and the table-driven version derived
from that. However, when I compared my code with the code found in
real implementations, I was totally bamboozled as to why the bytes
were being XORed in at the wrong end of the register! It took quite a
while before I figured out that theirs and my algorithms were actually
the same. Part of why I am writing this document is that, while the
link between division and my earlier table-driven code is vaguely
apparent, any such link is fairly well erased when you start pumping
bytes in at the "wrong end" of the register. It looks all wrong!
If you've got this far, you not only understand the theory, the
practice, the optimized practice, but you also understand the real
code you are likely to run into. Could it get any more complicated?
Yes it can.
11. "Reflected" Table-Driven Implementations
Despite the fact that the above code is probably optimized about as
much as it could be, this did not stop some enterprising individuals
from making things even more complicated. To understand how this
happened, we have to enter the world of hardware.
DEFINITION: A value/register is reflected if its bits are swapped
around its centre. For example: 0101 is the 4-bit reflection of 1010.
0011 is the reflection of 1100.
0111-0101-1010-1111-0010-0101-1011-1100 is the reflection of
0011-1101-1010-0100-1111-0101-1010-1110.
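For single bytes, reflection can be coded directly. Here is a sketch
(untested; the full-width reference version, reflect, appears in the
model code in section 17):

   unsigned char reflect8 (unsigned char x)
   /* Returns x with bit 0 swapped with bit 7, bit 1 with bit 6, etc. */
   {
    unsigned char r = 0;
    int i;
    for (i=0; i<8; i++)
       if (x & (1 << i))
          r |= (unsigned char) (1 << (7-i));
    return r;
   }
   /* Examples: reflect8(0x01) == 0x80, reflect8(0xA0) == 0x05. */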
Turns out that UARTs (those handy little chips that perform serial IO)
are in the habit of transmitting each byte with the least significant
bit (bit 0) first and the most significant bit (bit 7) last (i.e.
reflected). An effect of this convention is that hardware engineers
constructing hardware CRC calculators that operate at the bit level
took to calculating CRCs of bytes streams with each of the bytes
reflected within itself. The bytes are processed in the same order,
but the bits in each byte are swapped; bit 0 is now bit 7, bit 1 is
now bit 6, and so on. Now this wouldn't matter much if this convention
was restricted to hardware land. However it seems that at some stage
some of these CRC values were presented at the software level and
someone had to write some code that would interoperate with the
hardware CRC calculation.
In this situation, a normal sane software engineer would simply
reflect each byte before processing it. However, it would seem that
normal sane software engineers were thin on the ground when this early
ground was being broken, because instead of reflecting the bytes,
whoever was responsible held down the byte and reflected the world,
leading to the following "reflected" algorithm, which is identical to
the previous one except that everything is reflected except the input
bytes:
Message (non augmented) >-----+
Bytes 0 1 2 3 v
+----+----+----+----+ |
| | | | |>----XOR
+----+----+----+----+ |
^ |
| |
XOR |
| |
+----+----+----+----+0 |
+----+----+----+----+ v
+----+----+----+----+ |
+----+----+----+----+ |
+----+----+----+----+ |
+----+----+----+----+ |
+----+----+----+----+ |
* The table is identical to the one in the previous algorithm
except that each entry has been reflected.
* The initial value of the register is the same as in the previous
algorithm except that it has been reflected.
* The bytes of the message are processed in the same order as
before (i.e. the message itself is not reflected).
* The message bytes themselves don't need to be explicitly
reflected, because everything else has been!
At the end of execution, the register contains the reflection of the
final CRC value (remainder). Actually, I'm being rather hard on
whoever cooked this up because it seems that hardware implementations
of the CRC algorithm used the reflected checksum value and so
producing a reflected CRC was just right. In fact reflecting the world
was probably a good engineering solution - if a confusing one.
We will call this the REFLECTED algorithm.
Whether or not it made sense at the time, the effect of having
reflected algorithms kicking around the world's FTP sites is that
about half the CRC implementations one runs into are reflected and the
other half not. It's really terribly confusing. In particular, it
would seem to me that the casual reader who runs into a reflected,
table-driven implementation with the bytes "fed in the wrong end"
would have Buckley's chance of ever connecting the code to the concept
of binary mod 2 division.
It couldn't get any more confusing could it? Yes it could.
12. "Reversed" Polys
As if reflected implementations weren't enough, there is another
concept kicking around which makes the situation bizarrely confusing.
The concept is reversed Polys.
It turns out that the reflections of good polys tend to be good polys
too! That is, if G=11101 is a good poly value, then 10111 will be as
well. As a consequence, it seems that every time an organization (such
as CCITT) standardizes on a particularly good poly ("polynomial"),
those in the real world can't leave the poly's reflection alone
either. They just HAVE to use it. As a result, the set of "standard"
polys has a corresponding set of reflections, which are also in use.
To avoid confusion, we will call these the "reversed" polys.
X25 standard: 1-0001-0000-0010-0001
X25 reversed: 1-0000-1000-0001-0001
CRC16 standard: 1-1000-0000-0000-0101
CRC16 reversed: 1-0100-0000-0000-0011
Note that here it is the entire poly that is being reflected/reversed,
not just the bottom W bits. This is an important distinction. In the
reflected algorithm described in the previous section, the poly used
in the reflected algorithm was actually identical to that used in the
non-reflected algorithm; all that had happened is that the bytes had
effectively been reflected. As such, all the 16-bit/32-bit numbers in
the algorithm had to be reflected. In contrast, the ENTIRE poly
includes the implicit one bit at the top, and so reversing a poly is
not the same as reflecting its bottom 16 or 32 bits.
The upshot of all this is that a reflected algorithm is not equivalent
to the original algorithm with the poly reflected. Actually, this is
probably less confusing than if they were duals.
If all this seems a bit unclear, don't worry, because we're going to
sort it all out "real soon now". Just one more section to go before
that.
13. Initial and Final Values
In addition to the complexity already seen, CRC algorithms differ from
each other in two other regards:
* The initial value of the register.
* The value to be XORed with the final register value.
For example, the "CRC32" algorithm initializes its register to
FFFFFFFF and XORs the final register value with FFFFFFFF.
Most CRC algorithms initialize their register to zero. However, some
initialize it to a non-zero value. In theory (i.e. with no assumptions
about the message), the initial value has no effect on the strength of
the CRC algorithm, the initial value merely providing a fixed starting
point from which the register value can progress. However, in
practice, some messages are more likely than others, and it is wise to
initialize the CRC algorithm register to a value that does not have
"blind spots" that are likely to occur in practice. By "blind spot" is
meant a sequence of message bytes that do not result in the register
changing its value. In particular, any CRC algorithm that initializes
its register to zero will have a blind spot of zero when it starts up
and will be unable to "count" a leading run of zero bytes. As a
leading run of zero bytes is quite common in real messages, it is wise
to initialize the algorithm register to a non-zero value.
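To make the blind spot concrete: in the direct table algorithm with a
zero initial value, a leading zero byte yields table index
((0>>24) XOR 0) = 0, and entry 0 of any such table is itself 0, so the
register stays at 0. The following sketch (untested; t is the table
from the earlier fragments) shows where the initial value enters:

   unsigned long crc_direct (init,p,len)
   unsigned long  init;
   unsigned char *p;
   unsigned long  len;
   {
    /* With init==0, "\0\0\0ABC" and "ABC" yield the same value    */
    /* (the blind spot). A non-zero init such as 0xFFFFFFFFL makes */
    /* the two messages checksum differently.                      */
    unsigned long r = init;
    while (len--)
       r = (r << 8) ^ t[((r >> 24) ^ *p++) & 0xFF];
    return r & 0xFFFFFFFFL;
   }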
14. Defining Algorithms Absolutely
At this point we have covered all the different aspects of
table-driven CRC algorithms. As there are so many variations on these
algorithms, it is worth trying to establish a nomenclature for them.
This section attempts to do that.
We have seen that CRC algorithms vary in:
* Width of the poly (polynomial).
* Value of the poly.
* Initial value for the register.
* Whether the bits of each byte are reflected before being processed.
* Whether the algorithm feeds input bytes through the register or
xors them with a byte from one end and then straight into the table.
* Whether the final register value should be reversed (as in reflected
algorithms).
* Value to XOR with the final register value.
In order to be able to talk about particular CRC algorithms, we need
to be able to define them more precisely than this. For this reason, the
next section attempts to provide a well-defined parameterized model
for CRC algorithms. To refer to a particular algorithm, we then need
only specify the algorithm in terms of parameters to the model.
15. A Parameterized Model For CRC Algorithms
In this section we define a precise parameterized model CRC algorithm
which, for want of a better name, we will call the "Rocksoft^tm Model
CRC Algorithm" (and why not? Rocksoft^tm could do with some free
advertising :-).
The most important aspect of the model algorithm is that it focusses
exclusively on functionality, ignoring all implementation details. The
aim of the exercise is to construct a way of referring precisely to
particular CRC algorithms, regardless of how confusingly they are
implemented. To this end, the model must be as simple and precise as
possible, with as little confusion as possible.
The Rocksoft^tm Model CRC Algorithm is based essentially on the DIRECT
TABLE ALGORITHM specified earlier. However, the algorithm has to be
further parameterized to enable it to behave in the same way as some
of the messier algorithms out in the real world.
To enable the algorithm to behave like reflected algorithms, we
provide a boolean option to reflect the input bytes, and a boolean
option to specify whether to reflect the output checksum value. By
framing reflection as an input/output transformation, we avoid the
confusion of having to mentally map the parameters of reflected and
non-reflected algorithms.
An extra parameter allows the algorithm's register to be initialized
to a particular value. A further parameter is XORed with the final
value before it is returned.
By putting all these pieces together we end up with the parameters of
the algorithm:
NAME: This is a name given to the algorithm. A string value.
WIDTH: This is the width of the algorithm expressed in bits. This
is one less than the width of the Poly.
POLY: This parameter is the poly. This is a binary value that
should be specified as a hexadecimal number. The top bit of the
poly should be omitted. For example, if the poly is 10110, you
should specify 06. An important aspect of this parameter is that it
represents the unreflected poly; the bottom bit of this parameter
is always the LSB of the divisor during the division regardless of
whether the algorithm being modelled is reflected.
INIT: This parameter specifies the initial value of the register
when the algorithm starts. This is the value that is to be assigned
to the register in the direct table algorithm. In the table
algorithm, we may think of the register always commencing with the
value zero, and this value being XORed into the register after the
N'th bit iteration. This parameter should be specified as a
hexadecimal number.
REFIN: This is a boolean parameter. If it is FALSE, input bytes are
processed with bit 7 being treated as the most significant bit
(MSB) and bit 0 being treated as the least significant bit. If this
parameter is TRUE, each byte is reflected before being processed.
REFOUT: This is a boolean parameter. If it is set to FALSE, the
final value in the register is fed into the XOROUT stage directly,
otherwise, if this parameter is TRUE, the final register value is
reflected first.
XOROUT: This is a W-bit value that should be specified as a
hexadecimal number. It is XORed to the final register value (after
the REFOUT stage) before the value is returned as the official
checksum.
CHECK: This field is not strictly part of the definition, and, in
the event of an inconsistency between this field and the other
fields, the other fields take precedence. This field is a check
value that can be used as a weak validator of implementations of
the algorithm. The field contains the checksum obtained when the
ASCII string "123456789" is fed through the specified algorithm
(i.e. 313233... (hexadecimal)).
With these parameters defined, the model can now be used to specify a
particular CRC algorithm exactly. Here is an example specification for
a popular form of the CRC-16 algorithm.
Name : "CRC-16"
Width : 16
Poly : 8005
Init : 0000
RefIn : True
RefOut : True
XorOut : 0000
Check : BB3D
16. A Catalog of Parameter Sets for Standards
At this point, I would like to give a list of the specifications for
commonly used CRC algorithms. However, most of the algorithms that I
have come into contact with so far are specified in such a vague way
that this has not been possible. What I can provide is a list of polys
for various CRC standards I have heard of:
X25 standard : 1021 [CRC-CCITT, ADCCP, SDLC/HDLC]
X25 reversed : 0811
CRC16 standard : 8005
CRC16 reversed : 4003 [LHA]
CRC32 : 04C11DB7 [PKZIP, AUTODIN II, Ethernet, FDDI]
I would be interested in hearing from anyone out there who can tie
down the complete set of model parameters for any of these standards.
However, a program that was kicking around seemed to imply the
following specifications. Can anyone confirm or deny them (or provide
the check values, which I couldn't be bothered coding up and
calculating)?
Name : "CRC-16/CITT"
Width : 16
Poly : 1021
Init : FFFF
RefIn : False
RefOut : False
XorOut : 0000
Check : ?
Name : "XMODEM"
Width : 16
Poly : 8408
Init : 0000
RefIn : True
RefOut : True
XorOut : 0000
Check : ?
Name : "ARC"
Width : 16
Poly : 8005
Init : 0000
RefIn : True
RefOut : True
XorOut : 0000
Check : ?
Here is the specification for the CRC-32 algorithm which is reportedly
used in PKZip, AUTODIN II, Ethernet, and FDDI.
Name : "CRC-32"
Width : 32
Poly : 04C11DB7
Init : FFFFFFFF
RefIn : True
RefOut : True
XorOut : FFFFFFFF
Check : CBF43926
17. An Implementation of the Model Algorithm
Here is an implementation of the model algorithm in the C programming
language. The implementation consists of a header file (.h) and an
implementation file (.c). If you're reading this document in a
sequential scroller, you can skip this code by searching for the
string "Roll Your Own".
To ensure that the following code is working, configure it for the
CRC-16 and CRC-32 algorithms given above and ensure that they produce
the specified "check" checksum when fed the test string "123456789"
(see earlier).
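Here is one possible test driver (a sketch; it relies only on the
interface declared in crcmodel.h below, configured with the CRC-32
parameters from the previous section):

   #include <stdio.h>
   #include "crcmodel.h"

   int main ()
   {
    cm_t   cm;
    p_cm_t p_cm = &cm;

    p_cm->cm_width = 32;
    p_cm->cm_poly  = 0x04C11DB7L;
    p_cm->cm_init  = 0xFFFFFFFFL;
    p_cm->cm_refin = TRUE;
    p_cm->cm_refot = TRUE;
    p_cm->cm_xorot = 0xFFFFFFFFL;

    cm_ini(p_cm);
    cm_blk(p_cm,(p_ubyte_) "123456789",9L);
    printf("CRC-32 check: %08lX (expect CBF43926)\n",cm_crc(p_cm));
    return 0;
   }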
/* Start of crcmodel.h */
/* */
/* Author : Ross Williams (ross@guest.adelaide.edu.au.). */
/* Date : 3 June 1993. */
/* Status : Public domain. */
/* */
/* Description : This is the header (.h) file for the reference */
/* implementation of the Rocksoft^tm Model CRC Algorithm. For more */
/* information on the Rocksoft^tm Model CRC Algorithm, see the document */
/* titled "A Painless Guide to CRC Error Detection Algorithms" by Ross */
/* Williams (ross@guest.adelaide.edu.au.). This document is likely to be in */
/* "ftp.adelaide.edu.au/pub/rocksoft". */
/* */
/* Note: Rocksoft is a trademark of Rocksoft Pty Ltd, Adelaide, Australia. */
/* */
/* */
/* How to Use This Package */
/* ----------------------- */
/* Step 1: Declare a variable of type cm_t. Declare another variable */
/* (p_cm say) of type p_cm_t and initialize it to point to the first */
/* variable (e.g. cm_t cm; p_cm_t p_cm = &cm;).                              */
/* */
/* Step 2: Assign values to the parameter fields of the structure. */
/* If you don't know what to assign, see the document cited earlier. */
/* For example: */
/* p_cm->cm_width = 16; */
/* p_cm->cm_poly = 0x8005L; */
/* p_cm->cm_init = 0L; */
/* p_cm->cm_refin = TRUE; */
/* p_cm->cm_refot = TRUE; */
/* p_cm->cm_xorot = 0L; */
/* Note: Poly is specified without its top bit (18005 becomes 8005). */
/* Note: Width is one bit less than the raw poly width. */
/* */
/* Step 3: Initialize the instance with a call cm_ini(p_cm); */
/* */
/* Step 4: Process zero or more message bytes by placing zero or more */
/* successive calls to cm_nxt. Example: cm_nxt(p_cm,ch); */
/* */
/* Step 5: Extract the CRC value at any time by calling crc = cm_crc(p_cm); */
/* If the CRC is a 16-bit value, it will be in the bottom 16 bits. */
/* */
/* */
/* Design Notes */
/* ------------ */
/* PORTABILITY: This package has been coded very conservatively so that */
/* it will run on as many machines as possible. For example, all external */
/* identifiers have been restricted to 6 characters and all internal ones to */
/* 8 characters. The prefix cm (for Crc Model) is used as an attempt to avoid */
/* namespace collisions. This package is endian independent. */
/* */
/* EFFICIENCY: This package (and its interface) is not designed for */
/* speed. The purpose of this package is to act as a well-defined reference */
/* model for the specification of CRC algorithms. If you want speed, cook up */
/* a specific table-driven implementation as described in the document cited */
/* above. This package is designed for validation only; if you have found or */
/* implemented a CRC algorithm and wish to describe it as a set of parameters */
/* to the Rocksoft^tm Model CRC Algorithm, your CRC algorithm implementation */
/* should behave identically to this package under those parameters. */
/* */
/* The following #ifndef encloses this entire */
/* header file, rendering it idempotent. */
#ifndef CM_DONE
#define CM_DONE
/* The following definitions are extracted from my style header file which */
/* would be cumbersome to distribute with this package. The DONE_STYLE is the */
/* idempotence symbol used in my style header file. */
#ifndef DONE_STYLE
typedef unsigned long ulong;
typedef unsigned bool;
typedef unsigned char * p_ubyte_;
#ifndef TRUE
#define FALSE 0
#define TRUE 1
#endif

/* Change to the second definition if you don't have prototypes. */
#define P_(A) A
/* #define P_(A) () */

/* Uncomment this definition if you don't have void. */
/* typedef int void; */

#endif
/* CRC Model Abstract Type */
/* ----------------------- */
/* The following type stores the context of an executing instance of the */
/* model algorithm. Most of the fields are model parameters which must be */
/* set before the first initializing call to cm_ini. */
typedef struct
  {
int cm_width; /* Parameter: Width in bits [8,32]. */
ulong cm_poly; /* Parameter: The algorithm's polynomial. */
ulong cm_init; /* Parameter: Initial register value. */
bool cm_refin; /* Parameter: Reflect input bytes? */
bool cm_refot; /* Parameter: Reflect output CRC? */
ulong cm_xorot; /* Parameter: XOR this to output CRC. */
ulong cm_reg; /* Context: Context during execution. */
} cm_t;
typedef cm_t *p_cm_t;
/* Functions That Implement The Model */
/* ---------------------------------- */
/* The following functions animate the cm_t abstraction. */
void cm_ini P_((p_cm_t p_cm));
/* Initializes the argument CRC model instance. */
/* All parameter fields must be set before calling this. */
void cm_nxt P_((p_cm_t p_cm,int ch));
/* Processes a single message byte [0,255]. */
void cm_blk P_((p_cm_t p_cm,p_ubyte_ blk_adr,ulong blk_len));
/* Processes a block of message bytes. */
ulong cm_crc P_((p_cm_t p_cm));
/* Returns the CRC value for the message bytes processed so far. */
/* Functions For Table Calculation */
/* ------------------------------- */
/* The following function can be used to calculate a CRC lookup table. */
/* It can also be used at run-time to create or check static tables. */
ulong cm_tab P_((p_cm_t p_cm,int index));
/* Returns the i'th entry for the lookup table for the specified algorithm. */
/* The function examines the fields cm_width, cm_poly, cm_refin, and the */
/* argument table index in the range [0,255] and returns the table entry in */
/* the bottom cm_width bits of the return value. */
#endif

/* End of the header file idempotence #ifndef */
/* End of crcmodel.h */
/* Start of crcmodel.c */
/* */
/* Author : Ross Williams (ross@guest.adelaide.edu.au.). */
/* Date : 3 June 1993. */
/* Status : Public domain. */
/* */
/* Description : This is the implementation (.c) file for the reference */
/* implementation of the Rocksoft^tm Model CRC Algorithm. For more */
/* information on the Rocksoft^tm Model CRC Algorithm, see the document */
/* titled "A Painless Guide to CRC Error Detection Algorithms" by Ross */
/* Williams (ross@guest.adelaide.edu.au.). This document is likely to be in */
/* "ftp.adelaide.edu.au/pub/rocksoft". */
/* */
/* Note: Rocksoft is a trademark of Rocksoft Pty Ltd, Adelaide, Australia. */
/* */
/* */
/* Implementation Notes */
/* -------------------- */
/* To avoid inconsistencies, the specification of each function is not echoed */
/* here. See the header file for a description of these functions. */
/* This package is light on checking because I want to keep it short and */
/* simple and portable (i.e. it would be too messy to distribute my entire */
C culture (e.g. assertions package) with this package).
/* */
#include "crcmodel.h"
/* The following definitions make the code more readable. */
#define BITMASK(X) (1L << (X))
#define MASK32 0xFFFFFFFFL
#define LOCAL static
LOCAL ulong reflect P_((ulong v,int b));
LOCAL ulong reflect (v,b)
/* Returns the value v with the bottom b [0,32] bits reflected. */
/* Example: reflect(0x3e23L,3) == 0x3e26                        */
ulong v;
int   b;
{
 int   i;
 ulong t = v;

 for (i=0; i<b; i++)
   {
    if (t & 1L)
       v |=  BITMASK((b-1)-i);
    else
       v &= ~BITMASK((b-1)-i);
    t >>= 1;
   }
 return v;
}
LOCAL ulong widmask P_((p_cm_t));
LOCAL ulong widmask (p_cm)
/* Returns a longword whose value is (2^p_cm->cm_width)-1.     */
/* The trick is to do this portably (e.g. without doing <<32). */
p_cm_t p_cm;
{
 return (((1L<<(p_cm->cm_width-1))-1L)<<1)|1L;
}
void cm_ini (p_cm)
p_cm_t p_cm;
{
 p_cm->cm_reg = p_cm->cm_init;
}
void cm_nxt (p_cm,ch)
p_cm_t p_cm;
int    ch;
{
 int   i;
 ulong uch    = (ulong) ch;
 ulong topbit = BITMASK(p_cm->cm_width-1);

 if (p_cm->cm_refin) uch = reflect(uch,8);
 p_cm->cm_reg ^= (uch << (p_cm->cm_width-8));
 for (i=0; i<8; i++)
   {
    if (p_cm->cm_reg & topbit)
       p_cm->cm_reg = (p_cm->cm_reg << 1) ^ p_cm->cm_poly;
    else
       p_cm->cm_reg <<= 1;
    p_cm->cm_reg &= widmask(p_cm);
   }
}
void cm_blk (p_cm,blk_adr,blk_len)
p_cm_t   p_cm;
p_ubyte_ blk_adr;
ulong    blk_len;
{
 while (blk_len--) cm_nxt(p_cm,*blk_adr++);
}
ulong cm_crc (p_cm)
p_cm_t p_cm;
{
 if (p_cm->cm_refot)
    return p_cm->cm_xorot ^ reflect(p_cm->cm_reg,p_cm->cm_width);
 else
    return p_cm->cm_xorot ^ p_cm->cm_reg;
}
ulong cm_tab (p_cm,index)
p_cm_t p_cm;
int    index;
{
 int   i;
 ulong r;
 ulong topbit = BITMASK(p_cm->cm_width-1);
 ulong inbyte = (ulong) index;

 if (p_cm->cm_refin) inbyte = reflect(inbyte,8);
 r = inbyte << (p_cm->cm_width-8);
 for (i=0; i<8; i++)
    if (r & topbit)
       r = (r << 1) ^ p_cm->cm_poly;
    else
       r <<= 1;
 if (p_cm->cm_refin) r = reflect(r,p_cm->cm_width);
 return r & widmask(p_cm);
}
/* End of crcmodel.c */
18. Roll Your Own Table-Driven Implementation
Despite all the fuss I've made about understanding and defining CRC
algorithms, the mechanics of their high-speed implementation remains
trivial. There are really only two forms: normal and reflected. Normal
shifts to the left and covers the case of algorithms with Refin=FALSE
and Refot=FALSE. Reflected shifts to the right and covers algorithms
with both those parameters true. (If you want one parameter true and
the other false, you'll have to figure it out for yourself!) The
polynomial is embedded in the lookup table (to be discussed). The
other parameters, Init and XorOt can be coded as macros. Here is the
32-bit normal form (the 16-bit form is similar).
unsigned long crc_normal ();
unsigned long crc_normal (blk_adr,blk_len)
unsigned char *blk_adr;
unsigned long  blk_len;
{
 unsigned long crc = INIT;

 while (blk_len--)
    crc = crctable[((crc>>24) ^ *blk_adr++) & 0xFFL] ^ (crc << 8);
 return crc ^ XOROT;
}
Here is the reflected form:
unsigned long crc_reflected ();
unsigned long crc_reflected (blk_adr,blk_len)
unsigned char *blk_adr;
unsigned long  blk_len;
{
 unsigned long crc = INIT_REFLECTED;

 while (blk_len--)
    crc = crctable[(crc ^ *blk_adr++) & 0xFFL] ^ (crc >> 8);
 return crc ^ XOROT;
}
Note: I have carefully checked the above two code fragments, but I
haven't actually compiled or tested them. This shouldn't matter to
you, as, no matter WHAT you code, you will always be able to tell if
you have got it right by running whatever you have created against the
reference model given earlier. The code fragments above are really
just a rough guide. The reference model is the definitive guide.
Note: If you don't care much about speed, just use the reference model
code given earlier.
19. Generating A Lookup Table
The only component missing from the normal and reversed code fragments
in the previous section is the lookup table. The lookup table can be
computed at run time using the cm_tab function of the model package
given earlier, or can be pre-computed and inserted into the C program.
In either case, it should be noted that the lookup table depends only
on the POLY and RefIn (and RefOt) parameters. Basically, the
polynomial determines the table, but you can generate a reflected
table too if you want to use the reflected form above.
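For the run-time option, here is a sketch (untested) that uses cm_tab
to fill a table suitable for the reflected code fragment of the
previous section, using the CRC-32 parameters as an example:

   #include "crcmodel.h"

   unsigned long crctable[256];

   void fill_crctable ()
   {
    int  i;
    cm_t cm;

    cm.cm_width = 32;
    cm.cm_poly  = 0x04C11DB7L;
    cm.cm_refin = TRUE;        /* TRUE yields a reflected table for */
                               /* use with crc_reflected above.     */
    for (i=0; i<256; i++)
       crctable[i] = cm_tab(&cm,i);
   }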
The following program generates any desired 16-bit or 32-bit lookup
table. Skip to the word "Summary" if you want to skip over this code.
/* Start of crctable.c */
/* */
/* Author : Ross Williams (ross@guest.adelaide.edu.au.). */
/* Date : 3 June 1993. */
/* Version : 1.0. */
/* Status : Public domain. */
/* */
/* Description : This program writes a CRC lookup table (suitable for */
/* inclusion in a C program) to a designated output file. The program can be */
/* statically configured to produce any table covered by the Rocksoft^tm */
/* Model CRC Algorithm. For more information on the Rocksoft^tm Model CRC */
/* Algorithm, see the document titled "A Painless Guide to CRC Error */
/* Detection Algorithms" by Ross Williams (ross@guest.adelaide.edu.au.). This */
/* document is likely to be in "ftp.adelaide.edu.au/pub/rocksoft". */
/* */
/* Note: Rocksoft is a trademark of Rocksoft Pty Ltd, Adelaide, Australia. */
/* */
#include <stdio.h>
#include <stdlib.h>
#include "crcmodel.h"
/* TABLE PARAMETERS */
/* ================ */
/* The following parameters entirely determine the table to be generated. You */
/* should need to modify only the definitions in this section before running */
/* this program. */
/* */
/* TB_FILE is the name of the output file. */
/* TB_WIDTH is the table width in bytes (either 2 or 4). */
/* TB_POLY is the "polynomial", which must be TB_WIDTH bytes wide. */
/* TB_REVER indicates whether the table is to be reversed (reflected). */
/* */
/* Example: */
/* */
/* #define TB_FILE "crctable.out" */
/* #define TB_WIDTH 2 */
/* #define TB_POLY 0x8005L */
/* #define TB_REVER TRUE */
#define TB_FILE "crctable.out"
#define TB_WIDTH 4
#define TB_POLY 0x04C11DB7L
#define TB_REVER TRUE
/* Miscellaneous definitions. */
#define LOCAL static
FILE *outfile;
#define WR(X) fprintf(outfile,(X))
#define WP(X,Y) fprintf(outfile,(X),(Y))
LOCAL void chk_err P_((char *));
LOCAL void chk_err (mess)
/* If mess is non-empty, write it out and abort. Otherwise, check the error */
/* status of outfile and abort if an error has occurred.                    */
char *mess;
{
 if (mess[0] != 0)    {printf("%s\n",mess); exit(EXIT_FAILURE);}
 if (ferror(outfile)) {perror("chk_err"); exit(EXIT_FAILURE);}
}
LOCAL void chkparam P_((void));
LOCAL void chkparam ()
{
 if ((TB_WIDTH != 2) && (TB_WIDTH != 4))
    chk_err("chkparam: Width parameter is illegal.");
 if ((TB_WIDTH == 2) && (TB_POLY & 0xFFFF0000L))
    chk_err("chkparam: Poly parameter is too wide.");
 if ((TB_REVER != FALSE) && (TB_REVER != TRUE))
    chk_err("chkparam: Reverse parameter is not boolean.");
}
LOCAL void gentable P_((void));
LOCAL void gentable ()
{
WR("/* */\n");
WR("/* CRC LOOKUP TABLE */\n");
WR("/* ================ */\n");
WR("/* The following CRC lookup table was generated automagically */\n");
WR("/* by the Rocksoft^tm Model CRC Algorithm Table Generation */\n");
WR("/* Program V1.0 using the following model parameters: */\n");
WR("/* */\n");
WP("/* Width : %1lu bytes. */\n",
(ulong) TB_WIDTH);
 if (TB_WIDTH == 2)
    WP("/* Poly : 0x%04lX */\n",
       (ulong) TB_POLY);
 else
    WP("/* Poly : 0x%08lXL */\n",
       (ulong) TB_POLY);
 if (TB_REVER)
    WR("/* Reverse : TRUE. */\n");
 else
    WR("/* Reverse : FALSE. */\n");
WR("/* */\n");
WR("/* For more information on the Rocksoft^tm Model CRC Algorithm, */\n");
WR("/* see the document titled \"A Painless Guide to CRC Error */\n");
WR("/* Detection Algorithms\" by Ross Williams */\n");
WR("/* (ross@guest.adelaide.edu.au.). This document is likely to be */\n");
WR("/* in the FTP archive \"ftp.adelaide.edu.au/pub/rocksoft\". */\n");
WR("/* */\n");
 switch (TB_WIDTH)
   {
    case 2: WR("unsigned short crctable[256] =\n{\n"); break;
    case 4: WR("unsigned long crctable[256] =\n{\n"); break;
    default: chk_err("gentable: TB_WIDTH is invalid.");
   }
 {
  int   i;
  cm_t  cm;
  char *form    = (TB_WIDTH==2) ? "0x%04lX" : "0x%08lXL";
  int   perline = (TB_WIDTH==2) ? 8 : 4;

  cm.cm_width = TB_WIDTH*8;
  cm.cm_poly  = TB_POLY;
  cm.cm_refin = TB_REVER;
  for (i=0; i<256; i++)
    {
     WR(" ");
     WP(form,(ulong) cm_tab(&cm,i));
     if (i != 255) WR(",");
     if (((i+1) % perline) == 0) WR("\n");
    }
  WR("};\n");
 }
 WR("/* End of CRC Lookup Table */\n");
}
main ()
{
 printf("Rocksoft^tm Model CRC Algorithm Table Generation Program V1.0\n");
 printf("Output file is \"%s\".\n",TB_FILE);
 outfile = fopen(TB_FILE,"w"); chk_err("");
 chkparam();
 gentable();
 if (fclose(outfile) != 0)
    chk_err("main: Couldn't close output file.");
 printf("\nSUCCESS: The table has been successfully written.\n");
}
/* End of crctable.c */
20. Summary
This document has provided a detailed explanation of CRC algorithms
explaining their theory and stepping through increasingly
sophisticated implementations ranging from simple bit shifting through
to byte-at-a-time table-driven implementations. The various
implementations of different CRC algorithms that make them confusing
to deal with have been explained. A parameterized model algorithm has
been described that can be used to precisely define a particular CRC
algorithm, and a reference implementation provided. Finally, a program
to generate CRC tables has been provided.
21. Corrections
If you think that any part of this document is unclear or incorrect,
or have any other information, or suggestions on how this document
could be improved, please contact the author. In particular, I would
like to hear from anyone who can provide Rocksoft^tm Model CRC
Algorithm parameters for standard algorithms out there.
A. Glossary
CHECKSUM - A number that has been calculated as a function of some
message. The literal interpretation of this word "Check-Sum" indicates
that the function should involve simply adding up the bytes in the
message. Perhaps this was what early checksums were. Today, however,
although more sophisticated formulae are used, the term "checksum" is
still used.
CRC - This stands for "Cyclic Redundancy Code". Whereas the term
"checksum" seems to be used to refer to any non-cryptographic checking
information unit, the term "CRC" seems to be reserved only for
algorithms that are based on the "polynomial" division idea.
G - This symbol is used in this document to represent the Poly.
MESSAGE - The input data being checksummed. This is usually structured
as a sequence of bytes. Whether the top bit or the bottom bit of each
byte is treated as the most significant or least significant is a
parameter of CRC algorithms.
POLY - This is my friendly term for the polynomial of a CRC.
POLYNOMIAL - The "polynomial" of a CRC algorithm is simply the divisor
in the division implementing the CRC algorithm.
REFLECT - A binary number is reflected by swapping all of its bits
around the central point. For example, 1101 is the reflection of 1011.
ROCKSOFT^TM MODEL CRC ALGORITHM - A parameterized algorithm whose
purpose is to act as a solid reference for describing CRC algorithms.
Typically CRC algorithms are specified by quoting a polynomial.
However, in order to construct a precise implementation, one also
needs to know initialization values and so on.
WIDTH - The width of a CRC algorithm is the width of its polynomial
minus one. For example, if the polynomial is 11010, the width would be
4 bits. The width is usually set to be a multiple of 8 bits.
B. References
[Griffiths87] Griffiths, G., Carlyle Stones, G., "The Tea-Leaf Reader
Algorithm: An Efficient Implementation of CRC-16 and CRC-32",
Communications of the ACM, 30(7), pp.617-620. Comment: This paper
describes a high-speed table-driven implementation of CRC algorithms.
The technique seems to be a touch messy, and is superseded by the
Sarwate algorithm.
[Knuth81] Knuth, D.E., "The Art of Computer Programming", Volume 2:
Seminumerical Algorithms, Section 4.6.
[Nelson 91] Nelson, M., "The Data Compression Book", M&T Books, (501
Galveston Drive, Redwood City, CA 94063), 1991, ISBN: 1-55851-214-4.
Comment: If you want to see a real implementation of a real 32-bit
checksum algorithm, look on pages 440, and 446-448.
[Sarwate88] Sarwate, D.V., "Computation of Cyclic Redundancy Checks
via Table Look-Up", Communications of the ACM, 31(8), pp.1008-1013.
Comment: This paper describes a high-speed table-driven implementation
for CRC algorithms that is superior to the tea-leaf algorithm.
Although this paper describes the technique used by most modern CRC
implementations, I found the appendix of this paper (where all the
good stuff is) difficult to understand.
[Tanenbaum81] Tanenbaum, A.S., "Computer Networks", Prentice Hall,
1981, ISBN: 0-13-164699-0. Comment: Section 3.5.3 on pages 128 to 132
provides a very clear description of CRC codes. However, it does not
describe table-driven implementation techniques.
C. References I Have Detected But Haven't Yet Sighted
Boudreau, Steen, "Cyclic Redundancy Checking by Program," AFIPS
Proceedings, Vol. 39, 1971.
Davies, Barber, "Computer Networks and Their Protocols," J. Wiley &
Sons, 1979.
Higginson, Kirstein, "On the Computation of Cyclic Redundancy Checks
by Program," The Computer Journal (British), Vol. 16, No. 1, Feb 1973.
McNamara, J. E., "Technical Aspects of Data Communication," 2nd
Edition, Digital Press, Bedford, Massachusetts, 1982.
Marton and Frambs, "A Cyclic Redundancy Checking (CRC) Algorithm,"
Honeywell Computer Journal, Vol. 5, No. 3, 1971.
Nelson M., "File verification using CRC", Dr Dobbs Journal, May 1992,
Ramabadran T.V., Gaitonde S.S., "A tutorial on CRC computations", IEEE
Micro, Aug 1988.
Schwaderer W.D., "CRC Calculation", April 85 PC Tech Journal,
Ward R.K, Tabandeh M., "Error Correction and Detection, A Geometric
Approach" The Computer Journal, Vol. 27, No. 3, 1984, pp.246-253.
Wecker, S., "A Table-Lookup Algorithm for Software Computation of
Cyclic Redundancy Check (CRC)," Digital Equipment Corporation
memorandum, 1974.
ORF523: Nesterov’s Accelerated Gradient Descent
In this lecture we consider the same setting as in the previous post (that is, we want to minimize a convex and $\beta$-smooth function $f$ over $\mathbb{R}^n$).
We now present a beautiful algorithm due to Nesterov, called Nesterov's Accelerated Gradient Descent, which attains a rate of order $1/t^2$. (Note that plain gradient descent only attains a rate of order $1/t$ in this setting.) In its standard form the method starts at an arbitrary point $x_1 = y_1$ and iterates, for $t \geq 1$,

$\lambda_0 = 0, \quad \lambda_t = \frac{1 + \sqrt{1 + 4 \lambda_{t-1}^2}}{2}, \quad \gamma_t = \frac{1 - \lambda_t}{\lambda_{t+1}},$

$y_{t+1} = x_t - \frac{1}{\beta} \nabla f(x_t), \qquad x_{t+1} = (1 - \gamma_t) y_{t+1} + \gamma_t y_t.$

In other words, Nesterov's Accelerated Gradient Descent performs a simple step of gradient descent to go from $x_t$ to $y_{t+1}$, and then it 'slides' a little bit further than $y_{t+1}$ in the direction given by the previous point $y_t$.
The intuition behind the algorithm is quite difficult to grasp, and unfortunately the analysis will not be very enlightening either. Nonetheless Nesterov’s Accelerated Gradient is an optimal method
(in terms of oracle complexity) for smooth convex optimization, as shown by the following theorem.
Theorem (Nesterov 1983) Let $f$ be a convex and $\beta$-smooth function. Then Nesterov's Accelerated Gradient Descent satisfies $f(y_t) - f(x^*) \leq \frac{2 \beta \|x_1 - x^*\|^2}{t^2}.$
We follow here the proof by Beck and Teboulle from the paper ‘A fast iterative shrinkage-thresholding algorithm for linear inverse problems‘.
Proof: We start with the following observation, that makes use of Lemma 1 and Lemma 2 from the previous lecture: let
Now let us apply this inequality to
Similarly we apply it to
Now multiplying (1) by
Multiplying this inequality by
Now one can verify that
Next remark that, by definition, one has
Putting together (3), (4) and (5) one gets with
Summing these inequalities from
By induction it is easy to see that
Q & A--THE PERFECT STATISTICS GIFT?
Michael R. Frey
Bucknell University
Newsletter for the Section on Statistical Education
Volume 2, Number 1 (Winter 1996)
We invite contributions to future Q & A articles. If you have a question about teaching statistics, send it to us and we can solicit answers. If you have both a question and answers, write a short
article and send it to us. Send correspondence to Tom Moore. (Eds.) In September of this year a man asked me to suggest a gift suitable for his son who was enthusiastically studying statistics at
college. I made a few suggestions but was satisfied with none of them. I teach statistics at Bucknell University, a small school in central Pennsylvania, and have few colleagues with whom to share
these sorts of questions. So I turned to the internet and sent a message to a distribution list of about fifty statisticians, each like myself, professionally isolated. This list, maintained by Jeff
Witmer at Oberlin College, can be reached at "isostat@oberlin.edu." Here is the message I sent to the group:
Hello fellow isolated statisticians! I was asked this morning by a student's father to recommend a statistics/ probability book to be given as a gift to his son. The son is a third-year student at
William and Mary, loves statistics (who doesn't?!), and plans to go on to graduate study in statistics. The father asked me to suggest a book that I find indispensable and that his son would
appreciate. OK everyone, help me out. I thought to suggest The Handbook of Small Data Sets by Hand et al. or maybe Counterexamples in Probability. Of course, a gift ASA membership might also work.
Does anybody have other suggestions? Thanks, Mike Frey.
This question evidently sparked some interest because of the number and rapidity of the responses. Included were several requests that I assemble the results and share them back with the group. After
brief editing this is what I sent back to the group:
Hello isolated statisticians! I recently asked for suggestions for a book gift that might be appreciated by an undergraduate student of statistics. Here's an edited, cut-and-paste summary of your
recommendations. Thanks to everyone - Mike Frey, Bucknell University.
1. Statistics for Experimenters by Box, Hunter & Hunter or The History of Statistics by Stigler. These books were the most often recommended.
2. Edward Tufte's two books on graphics make nice gifts, since they are so artistically put together--one is The Visual Display of Quantitative Information and the other is Envisioning Information.
These books are truly works of art and contain information that beginning statisticians should be required to learn. Moreover, they are not books that a student is likely to buy for a course.
These two books received several recommendations.
3. A useful reference set is the Johnson and Kotz series on distributions--four volumes. A couple of them are available in new editions; these would be the ones to purchase if one doesn't want to
lay out for all four. Two recommendations for this set.
4. ASA gift membership including subscriptions to The American Statistician , Stats, and Chance.
5. Statistics: A Guide to the Unknown, by Tanur et. al. --a largely nontechnical collection of applications of prob & stats to a wide variety of areas (predicting the chance of an earthquake in the
next few years, estimating whale population sizes, forecasting elections, looking at the bunt strategy in baseball). Put out by Wadsworth Brooks/Cole.
6. Fisher's Statistical Methods, Experimental Design and Scientific Inference or Student (a biography of Gossett), both published by Oxford University Press. R.A. Fisher: The Life of a Scientist
would be good too. Historical/biographical materials regarding statisticians are hard to come by and make valued gifts.
7. Thisted, The Elements of Statistical Computing; Kendall et. al., The Advanced Theory of Statistics; De Finetti, Theory of Probability; Kotz and Johnson, Breakthroughs in Statistics; Mosteller and
Wallace, Applied Bayesian and Classical Inference: The Case of the Federalist Papers; Tanner, Tools for Statistical Inference; and Feller, An Introduction to Probability Theory and Its
8. Problem Solving: A Statistician's Guide by C. Chatfield or Counting for Something: Statistical Principles and Personalities, by W.S. Peters (Springer-Verlag).
9. Don't underestimate C.R. Rao's Linear Statistical Inference and Its Applications or a matrix-theory- useful-for-statistics book like Graybill's or Searle's. Everybody takes a linear models course
but nobody has had all the linear algebra that graduate school assumes they've had.
10. Two recent books offer numerous datasets to supplement an introductory statistics course. The books differ in level and the extent to which sample analyses for the data are supplied. One is A
Casebook for a First Course in Statistics and Data Analysis by Chatterjee, Handcock and Simonoff (Wiley, 1995). The other is A Handbook of Small Data Sets by Hand et. al. (Routledge, 1994).
Robert W. Hayden of Plymouth State College has written a joint review of these books which will soon appear in The American Statistician.
11. Of course, a dream gift suitable for the newly-minted stats Ph.D. is the multi-volume Statistical Encyclopedia. But it is oh-so-expensive!
The father, when I gave him this list, was amazed and delighted. He's decided to give his son a student ASA membership. Oh, and now he wants me to suggest a gift for his wife ...
Michael Frey
Department of Mathematics
Bucknell University
Lewisburg, PA 17837
Phone: (717)524-1598
FAX: (717)524-3760
Return to V2N1 Contents
Return to Newsletter Home Page | {"url":"http://www.amstat.org/sections/educ/newsletter/v2n1/Frey.html","timestamp":"2014-04-19T19:46:20Z","content_type":null,"content_length":"6824","record_id":"<urn:uuid:12721435-3804-46e4-953b-e567e97ec2eb>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 265
Bubble wrap sells in sheets, and each sheet has 1 million individual spherical bubbles, each with a diameter of 0.01 m. What is the total volume of air contained in the bubbles of two sheets of
bubble wrap? Round to the nearest tenth.
Molality is a unit of concentration that measures the moles of solute per
Language Arts-check answer-
Language Arts-check answer-
Yay. :-)
Language Arts-check answer-
The article is at the bottom for this question: Skim the article Cruise Control to find the name of the bumper sticker service found in Arlington, Texas. a) 1-866-2TELLMOM b) 800-4-MYTEEN ********
c) Big Mother d) Dad s Eyes Anne Rekerdres likes to call "...
Language Arts ~CHECK MY ANSWERS~
Yay :-) Thanks!
Language Arts ~CHECK MY ANSWERS~
for question: ? ? ? Skim the article Cruise Control to find the name of the bumper sticker service found in Arlington, Texas. 1-866-2TELLMOM 800-4-MYTEEN <----- Big Mother Dad s Eyes
Language Arts ~CHECK MY ANSWERS~
oops the 2nd one. Skim the article Cruise Control to find the name of the bumper sticker service found in Arlington, Texas. 1-866-2TELLMOM 800-4-MYTEEN <----- Big Mother Dad s Eyes
Language Arts ~CHECK MY ANSWERS~
Here is the article for the 1st one. Anne Rekerdres likes to call "the everlasting punishment". When the 17-year-old North Dallas senior came home with a speeding ticket in March, her dad, Randy, 47
slapped the back bumper of her beloved '92 red Ford Explorer wit...
Language Arts ~CHECK MY ANSWERS~
Language Arts ~CHECK MY ANSWERS~
So..., the correct answer is Discuss? Which verb correctly completes the below sentence? The collection written by several of my favorite authors _____ research in the rain forest. Discuss <-----
Language Arts ~CHECK MY ANSWERS~
Which verb correctly completes the below sentence? The collection written by several of my favorite authors _____ research in the rain forest. Discuss <----- Discusses Skim the article Cruise
Control to find the name of the bumper sticker service found in Arling...
Ms.Sue- I’m so sorry, but this time is real, Can you check! Please!?
I m sorry, but this time is real, Can you check! Please!? 2. Why were Native Americans forced to leave their lands during the 1830s? d.) their new lands were better for farming. 4. How did President
Jackson respond to the Supreme Court s ruling in Worcester v. Georgi...
Ms. Sue (Check My Answers)
2. Why were Native Americans forced to leave their lands during the 1830s? a.) settlers wanted to settle the land<--- b.) U.S. citizens settled the land first c.) the Supreme Court ordered their
removal d.) their new lands were better for farming. 4. How did President Jack...
Social Studies--Check My Answers Please--
1. What act did Congress pass in order to relocate Native Americans? a.) Naturalization Act b.) Alien Act c.) Relocation Act d.) Indian Removal Act <--- 2. Why were Native Americans forced to leave
their lands during the 1830s? a.) settlers wanted to settle the land b.) U.S...
math fractions and decimals
the decimal has a digit greater than 4 but less than 8 in the hundreds place, what could the decimal be?
Is carbon phosphate an ionic, polar covalent, or non polar covalent bond?
What is y if 6y+1=21y+13
Marlon has 4 cards, jake has 4 cards, and Sam has 3 cards. Can u write a multiplication sentence to find how many cards they have in all? Explain
what is 30m=6000
if i bought a dress that cost 55.00 dollars and the sales tax was 8% exatcly how much will the dress cost?
How many grams of CaCl2 are required to prepare 2.00 liters of 7.00 M CaCl2?
OOPS, SORRY #9 IS C SORRY FOR THE MISTAKE.
I REALLY NEED HELP PLEASE PLEASE HELP ME WITH THIS JUST CHECK IF THERE RIGHT IF NOT THEN GIVE ME THE RIGHT ANSWER PLEASE PLEASE !!!!! :( :( 1) The length of a room is 5 . 048 ×102 cm. Which number is
equivalent to this length? A 0.005048 cm B 0.05048 cm C 504.8 cm D 504,...
Well, Can u give me the answer please it because i'm in a rush..... please and later i tell u the rest please please :( :( :( :( ??????? please please
:( :( :( :( :( :(
PLEASE HELP *I'M SORRY*
I'm not guessing YOU KNOW .!
1) Find the surface area for a sphere with a radius of 10 feet. Round to the nearest whole number. a. 1,256 ft2 b. 4,189 ft2 c. 1,089 ft2 d. 1,568 ft2 2) Find the volume of a sphere with a radius of
10 feet. Round to the nearest whole number. a. 1,257 ft3 b. 4,187 ft3 c. 1,089...
language arts
whats a sentence for lukewarm that includes a synonym for the word to help provide a contect clue?
Analyze the causes of anorexia in hospitalized patients. Provide two patient scenarios that commonly result in decreased food intake during prolonged hospitalizations. For each scenario: Name a
specific disease or condition. State specific medical interventions (...
8th grade math for Ms. Sue - last question please
How is an inequality different from an equation? Give a real-world scenario in which you would write an inequality rather than an equation. I really need to know how an inequality is different from
an equation. I know the signs used are different, but what else? I am very conf...
8th grade math for Ms. Sue please check
Thank you so much!
8th grade math for Ms. Sue please check
How is an inequality different from an equation? Give a real-world scenario in which you would write an inequality rather than an equation. Daphne has $25. She wants to buy two new paint brushes for
her art teacher. Daphne needs to know the average price so she doesn't spe...
8th grade math Ms. Sue
How is an inequality different from an equation? Give a real-world scenario in which you would write an inequality rather than an equation. I don't know how to solve this or what the answer is.
Please help! You said: Denise has $50 and wants to buy two new shirts. She need...
8th grade math for Ms. Sue
How is an inequality different from an equation? Give a real-world scenario in which you would write an inequality rather than an equation. I don't know how to solve this or what the answer is.
Please help!
Math emergancy
find the two ratios that are eequivalent to each given ratio 9/12 4/20 15/25 7/12 14/7 11/22 10/3 18/28 12/27 i dont no how to do this so yea please answer
Urgent Science!!!
Your right on all
8th grade Language Arts
I am writing a persuasive essay and I need some tips on making it persuasive. It is turning out to be just an essay, not a persuasive essay. Help!
what is 1/7 + 2/4 in simplest form
8th grade math
Answer the following question in your discussion group: Sometimes people say Give 110% of your effort. Is it possible to give 110% of your effort? Is there ever a situation where you might have a
percent over 100? If no, explain why. If yes, give a scenario where a...
7th grade math Ms. Sue please
Sorry! Meant "Delilah"!
7th grade math Ms. Sue please
OK thanks. I will see if its correct.
7th grade math Ms. Sue please
Find the percent of markup. Round to the nearest whole percent. 6. Store's Cost: $136.50 Selling Price: $184.70 (1 point) 48% 26% 14% 35%6. Store's Cost: $136.50 Selling Price: $184.70 (1 point) 48%
26% 14% 35% So you previously said 48% wasn't correct..... So is t...
8th grade math
Oh my gosh its getting sooooo annoying sorry Ms. Sue! I keep forgetting my sister's name is still on here (we share a computer)
8th grade math
The answer is $82.45 right?
8th grade math
Find the interest earned on each account. $970 at 4 1/2% simple interest for 2 years. I don't know if I solved the problem correct please help: I = p x r x t = 970 x 0.42 x 2 = 814.8 I don't know
what number to convert 1/4 to!
7th grade math Ms. Sue please
are all others but 6 correct?
7th grade social studies
7th grade social studies
7th grade social studies
In the Earth's Sourthern Hemisphere, what season begins in March? Spring Summer Fall Winter
write the number in standard notation 1.42 * 10 to the -2 power
simplify 10 to the -4 power
14 +(-2-(-2)to the 3rd) power
What are Two examples of Internet Vandalism
7th grade math Ms. Sue please
1. What are the equivalent fraction and decimal for thirty-three and one-third%? (1 point) one over thirty-three and 0.3 one over thirty-three and 3 3three and one-third and CE_point thirty three
repeating_large one-third and point thirty-three repeating 2. What are the equiva...
8th grade social studies
8th grade social studies
Hi, I am doing a paper for school and I need to know what are some Negative and Positive Impacts of the Telegraph? I know some positives but I really need help with the negative effect of the
telegraph. Can you please give me 2 or 3 positive and negative impacts/effects? Thanks!
7th grade social studies help please
I don't know the answer! I cannot find the answer so I can't post my answer. Can you help me FIND my answer please? Thanks!
7th grade math help Ms. Sue please
Sorry thanks
If a stone is tossed from the top of a 310 meter building, the height of the stone as a function of time is given by h(t)= 9.8t^2 -10t +310
i need help finding a map of the 7 natural wonders of the world.
art need help
The answer to 3 is content view and 4 is composition view. :) You must be in Connections Academy then :D LOL I guessed on all of them and got 'em all right :)
what are some of the natural resources found at the grand canyon
what are some of the natural resources found at the grand canyon?
what is 67 in m over n form where m and n are integers
what can happen if there is a large solar flare like there was in 1989 and 1997 ?
drwls thank you
what does an albedo of 1.0 mean ??
what type of radiation causes skin damage ??
where does the earth loose energy to ??
9.18/x for x =-1.2
10.8/x for x = 0.03
18/x for x = 0.12 answer ?
i mean in the middle of 5 and 8
- 5/8 divided by 3/4 the negative sighn is in the middle of the 1 and 5
1/5 divided by 3/10 ?? need asap thanks :)
-3/8 times -4/9
In order to travel from Valley town to mountain City, you have to travel 1 3/4 part way and 2 1/5 miles from partway to mountain city . what is the total distance that you travel? A. 3 4/9 B. 3 17/30
c. 3 19/20 d. 4 miles * need answer asap
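(Worked out: 1 3/4 + 2 1/5 = 35/20 + 44/20 = 79/20 = 3 19/20 miles, which is choice c.)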
Angela practices her long jump before a big track meet. At the beginning of the week ,she jumps 1.56 meters and at the end of the week she jumps 1.63 meters .How much farther did she jump at the end
of the week than at the beginning of the week answers A.0.03 meters B.0.07 met...
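(Worked out: 1.63 - 1.56 = 0.07, so she jumped 0.07 meters farther, choice B.)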
simplify the fraction - 104/320 the negative sighn is in the middle not on top or bottom what is the answer ?
thanks so much :)
what is 1/5 divided by 3/10 simplified ? i need this asap !!! by today
Pre-Algebra Adv.
Find the change in temperature or elevation. 1) From -16 degrees C to 23 degrees C 2) From -47 degrees C to -38 degrees C 3) From 9 degrees F to -12 degrees F 4) From -16 degrees F to -27 degrees F
1) x-4-9 2) 15-y-7 3) 31-35-y
I need to observe and identify variables and I'm in 6th grade!!! Help, this work is killing me....It's scenario 1-4 and I have Mr Jones plz help plz
World History
Thanks I was thinking 1. False 2. True 3. True 4. True 5. False 6. False 7. True 8. True 9. False 10. False
World History
All are True/False 1. After WWI, Ho Chi Minh tried to get the U.S. to support independence for Vietnam. 2. Ho Chi Minh's forces and the U.S. were allies during WWII. 3. America's First direct
involvement in Vietnam came during the Johnson administration. 4.The 1954 Gen...
Which shows the gymnastic scores in order from least to greatest? 9.72, 9.8, 9.78, 9.87 9.78, 9.72. 9.87, 9.8 9.78, 9.8, 9.72, 9.87 9.72, 9.78, 9.8, 9.87
Angie drew a number line and labeled 0 and 1. To show 5/12, how many parts should she divide the distance from 0 to 1?
Louise is creating a 1-foot long comic strip. If she has marked 0.5 on her paper, what should she do to find 1 foot? Subtract 0.5 Add 0.2 Multiply 0.5 Add 0.5
Which of the following has the least value? 5.45 8.02 4.99 13.2
Ms. Woods painted 174 watercolor pictures. Each painting took about 11 hours to finish. Which could be the total number of hours it took Ms. Woods to paint all 174 pictures?
ummmmmm wats the problem
5x-9y=11 3x+7y=19.......plz help serious..........
plz help with equation 5x-9y=11 3x+7y=19
Simplify 5m - 13n - n + 4m.
Pages: 1 | 2 | 3 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=destiny","timestamp":"2014-04-16T13:48:58Z","content_type":null,"content_length":"24658","record_id":"<urn:uuid:080d70da-ecf3-4d9e-b86b-b550d3ee0493>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebras with countable chains only
Is there an example of an uncountable Boolean algebra $B$ in which every chain is countable and such that $\ell_\infty$ embeds into the Banach space $C(\mbox{Stone }B)$? The latter requirement is not
very important, I just want to exclude some trivial cases, like the algebra of finite/cofinite subsets on some uncountable set.
boolean-algebras banach-spaces set-theory gn.general-topology
1 Answer
The second requirement is too strict: it makes the first one impossible.
The commutative von Neumann algebra $\ell^\infty(\mathbb{N})$ has as Gelfand spectrum the Stone-Cech compactification of $\mathbb{N}$ (with the discrete topology). This, in turn, is the Stone space of the Boolean algebra $\mathcal{P}(\mathbb{N})$, the powerset of the natural numbers. So $\ell^\infty(\mathbb{N}) \cong C(\mathop{Stone}(\mathcal{P}(\mathbb{N})))$ embeds in $C(\mathop{Stone}(B))$ if and only if $\mathcal{P}(\mathbb{N})$ embeds in $B$. But the former has uncountable chains. So if $B$ satisfies the second requirement, it has uncountable chains, and cannot satisfy the first requirement.
2 $P(\mathbb{N})$ has uncountable chains. Think of Dedekind cuts in $\mathbb{Q}$. – Joel David Hamkins Nov 30 '12 at 0:03
Every well-ordered chain is countable, but there are uncountable chains isomorphic to the real numbers. – Asaf Karagila Nov 30 '12 at 1:04
Good point, thanks! I'll edit the answer. – Chris Heunen Nov 30 '12 at 1:14
| {"url":"http://mathoverflow.net/questions/114938/algebras-with-countable-chains-only","timestamp":"2014-04-21T10:05:38Z","content_type":null,"content_length":"53461","record_id":"<urn:uuid:7324ba0c-b7c8-4f6b-be41-d7e59405b894>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
| {"url":"http://openstudy.com/users/lawls/asked","timestamp":"2014-04-18T10:37:33Z","content_type":null,"content_length":"104479","record_id":"<urn:uuid:96c26fb7-8fda-48db-a3b6-2e3a4c6e23ed>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
what happens if you take double dose of synthroid
Alternate wordings
what happens if you take two dises of synthroid? i took double dose of synthroid? what happens if you double the dose of levothyroxine? What happen if you gave synthroid twice? what happens if you
take double dosage of synthroid? what if you double dose synthroid? What if took Synthroid double dose? what happens if I made a mistake a triple dose of levothyroxine 50mcg? if u miss dose of
synthroid can u double up? what if I double dose on synthroid? what will happen if u take synthroid twice? what happens if you take double synthroid? what happens when you take a doible dose of
synthroid? what happens if I took double dose of syntroid? what willhappen if i take a double dose of syrthroid?
Asked: Dec 10 '12 at 10:15
Seen: 396 times
Last updated: Dec 10 '12 at 10:15 | {"url":"http://qnapal.com/questions/26148/what-happens-if-you-take-double-dose-of-synthroid","timestamp":"2014-04-18T03:29:35Z","content_type":null,"content_length":"42181","record_id":"<urn:uuid:ec4cf44f-54a8-4fe6-99d4-faa94da20adc>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
Test Your Intuition (1)
Question: Suppose that we sequentially place $n$ balls into $n$ boxes by putting each ball into a randomly chosen box. It is well known that when we are done, the fullest box has with high
probability $(1+o(1))\ln n/\ln \ln n$ balls in it. Suppose instead that for each ball we choose two boxes at random and place the ball into the one which is less full at the time of placement. What
will be the maximum occupancy?
Test your intuition before reading the rest of the entry.
Answer: A beautiful theorem of Yossi Azar, Andrei Broder, Anna Karlin, and Eli Upfal asserts that with high probability, the fullest box contains only $\ln \ln n/\ln 2+O(1)$ balls—exponentially less
than before. (The description follows Xueliang Li's mathscinet review.) And here is a link to the paper. Here is a related post "Balls and Bins on Graphs" on Windows on Theory.
11 Responses to Test Your Intuition (1)
1. Actually, I am not aware of a simple explanation (even a heuristic one) why we go down to log log n. I will be happy to hear such an explanation.
Also, this theorem is related to many developments in theoretical CS and probabilistic combinatorics and remarks on these are most welcomed.
2. There is in fact more to this! Suppose now you throw m balls into n bins, where m is much larger than n. For one choice, the imbalance is easily seen to grow as $\tilde{O}(\sqrt{m/n})$.
For the two choice process, the imbalance is in fact independent of m. It stays at O(log log n) w.h.p. See this paper:
Petra Berenbrink, Artur Czumaj, Angelika Steger, and Berthold Vöcking. Balanced allocations: The heavily loaded case. In Proceedings of the 32th ACM Symposium on Theory of Computing (STOC’00),
pages 745-754, 2000.
3. Dear Kunal, indeed this is part of a large story both in TCS and probabilistic combinatorics. I was motivated by a lecture of Michael Krivelevich on “Achlioptas processes”. In these processes you
build a random graph by choosing at each round one out of r random edges in order to postpone or embrace a certain graph property.
Another connection that comes to mind (which perhaps fits a series of posts under the name: “difficult proofs for easy theorems”) is to find a probabilistic proof that if m is not too large
compared to n, it is possible to put n balls into m boxes so that no box contains more than one ball. A simple union bound works for something like $m \ge n^2$ and using Lovasz local lemma you can
get it down to $m \ge 6n$ or so. I do not know what is the world record.
Anyway, what I am most curious about is a simple conceptual explanation in a few words for the loglogn in the theorem of Azar, Broder, Karlin, and Upfal.
4. Ori pointed your question out and we came up with some explanation.
Suppose you place the ball in the less populated set. At time t you have $\sim t$ occupied boxes, so the rate of introducing doubly occupied boxes is $(t/n)^2$. This shows that at time $t$ you
expect $\approx t^3/n^2$ boxes with two balls. The first appears at time $\approx n^{2/3}$
Repeating the argument, the rate at which boxes with three balls are created is $(t^3/n^3)^2$, so the number of such boxes is $\approx t^7/n^6$. In general, the first box with $k$ balls appears at time $n\cdot n^{-1/(2^k-1)}$. If $k=\log\log n/\log 2$ then this is of order $n$, giving the claimed asymptotics.
More formally, if $X_i(t)$ is the number of boxes with at least $i$ balls at time $t$ then in expectation, $X_i' = (X_{i-1}/n)^2$. If there are $M$ choices at each step the same scheme gives $\log\log n/\log M$.
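A minimal simulation sketch of the two processes (Python; the function and parameter names are illustrative, not from the post). For moderate $n$ the one-choice maximum load grows roughly like $\ln n/\ln\ln n$, while the two-choice maximum stays near $\log_2\log n$:

import random

def max_load(n, choices, rng=random.Random(0)):
    # throw n balls into n boxes; each ball goes into the least full
    # of `choices` uniformly random boxes
    boxes = [0] * n
    for _ in range(n):
        picks = [rng.randrange(n) for _ in range(choices)]
        best = min(picks, key=lambda i: boxes[i])
        boxes[best] += 1
    return max(boxes)

for n in (10**4, 10**5, 10**6):
    print(n, max_load(n, 1), max_load(n, 2))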
5. Many thanks, Omer!
6. Thanks for this post! So, as suggested by the informal explanation given above, is it really true that taking M = log n gives a constant number of balls in the fullest bin with high probability?
7. aranb: taking anything asymptotically less than $\sqrt{n}$ will give you only 1 ball in the fullest bin with high probability, even if you have no choice (bin selected randomly). In the case you have a choice between 2 bins, this goes up to $n^{1/3}$.
8. Gil,
I thought you might be interested in the following paper (http://arxiv.org/abs/0901.4056), by Noga Alon, Eyal Lubetzky and yours truly, in which we consider the above problem under memory
Many thanks, Ori
This entry was posted in Computer Science and Optimization, Probability, Test your intuition and tagged Test. Bookmark the permalink. | {"url":"http://gilkalai.wordpress.com/2008/12/07/test-your-intuition-1/?like=1&source=post_flair&_wpnonce=f9365a8c97","timestamp":"2014-04-20T00:41:21Z","content_type":null,"content_length":"118696","record_id":"<urn:uuid:d6be12fa-2206-41dd-8c6c-42ac25c1f50d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bases: sage.structure.sage_object.SageObject, list
A mutable list of elements with a common guaranteed universe, which can be set immutable.
A universe is either an object that supports coercion (e.g., a parent), or a category.
□ x - a list or tuple instance
□ universe - (default: None) the universe of elements; if None determined using canonical coercions and the entire list of elements. If list is empty, is category Objects() of all objects.
□ check – (default: True) whether to coerce the elements of x into the universe
□ immutable - (default: True) whether or not this sequence is immutable
□ cr - (default: False) if True, then print a carriage return after each comma when printing this sequence.
□ use_sage_types – (default: False) if True, coerce the built-in Python numerical types int, long, float, complex to the corresponding Sage types (this makes functions like vector() more flexible)
sage: v = Sequence(range(10))
sage: v.universe()
<type 'int'>
sage: v
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
We can request that the built-in Python numerical types be coerced to Sage objects:
sage: v = Sequence(range(10), use_sage_types=True)
sage: v.universe()
Integer Ring
sage: v
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
You can also use seq for “Sequence”, which is identical to using Sequence:
sage: v = seq([1,2,1/1]); v
[1, 2, 1]
sage: v.universe()
Rational Field
sage: v.parent()
Category of sequences in Rational Field
sage: v.parent()([3,4/3])
[3, 4/3]
Note that assignment coerces if possible,
sage: v = Sequence(range(10), ZZ)
sage: a = QQ(5)
sage: v[3] = a
sage: parent(v[3])
Integer Ring
sage: parent(a)
Rational Field
sage: v[3] = 2/3
Traceback (most recent call last):
...
TypeError: no conversion of this rational to integer
Sequences can be used absolutely anywhere lists or tuples can be used:
sage: isinstance(v, list)
True
Sequences can be immutable, so entries can't be changed:
sage: v = Sequence([1,2,3], immutable=True)
sage: v.is_immutable()
True
sage: v[0] = 5
Traceback (most recent call last):
...
ValueError: object is immutable; please change a copy instead.
Only immutable sequences are hashable (unlike Python lists), though the hashing is potentially slow, since it first involves conversion of the sequence to a tuple, and returning the hash of that.
sage: v = Sequence(range(10), ZZ, immutable=True)
sage: hash(v)
1591723448 # 32-bit
-4181190870548101704 # 64-bit
If you really know what you are doing, you can circumvent the type checking (for an efficiency gain):
sage: list.__setitem__(v, int(1), 2/3) # bad circumvention
sage: v
[0, 2/3, 2, 3, 4, 5, 6, 7, 8, 9]
sage: list.__setitem__(v, int(1), int(2)) # not so bad circumvention
You can make a sequence with a new universe from an old sequence.
sage: w = Sequence(v, QQ)
sage: w
[0, 2, 2, 3, 4, 5, 6, 7, 8, 9]
sage: w.universe()
Rational Field
sage: w[1] = 2/3
sage: w
[0, 2/3, 2, 3, 4, 5, 6, 7, 8, 9]
Sequences themselves live in a category, the category of all sequences in the given universe.
sage: w.category()
Category of sequences in Rational Field
This is also the parent of any sequence:
sage: w.parent()
Category of sequences in Rational Field
The default universe for any sequence, if no compatible parent structure can be found, is the universe of all Sage objects.
This example illustrates how every element of a list is taken into account when constructing a sequence.
sage: v = Sequence([1,7,6,GF(5)(3)]); v
[1, 2, 1, 3]
sage: v.universe()
Finite Field of size 5
sage: v.parent()
Category of sequences in Finite Field of size 5
sage: v.parent()([7,8,9])
[2, 3, 4] | {"url":"http://sagemath.org/doc/reference/structure/sage/structure/sequence.html","timestamp":"2014-04-21T04:37:23Z","content_type":null,"content_length":"70899","record_id":"<urn:uuid:26dae079-15fb-4b24-b2f1-a33e588ef599>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newport, DE SAT Math Tutor
Find a Newport, DE SAT Math Tutor
...The goal of Precalculus is to prepare students for Calculus by exposing them to a variety of graphs and functions which will be used or seen in higher level math. It is a great deal of fun,
especially if you like puzzles. SAT math is all about the logic, not just the problems.
30 Subjects: including SAT math, chemistry, statistics, ACT Reading
...I am highly committed to students' performances and to improve their comprehension of all areas of mathematics.I have excelled in courses in Ordinary Differential Equations in both
undergraduate and graduate school, as well as partial differential equations at the graduate level. Also, I have tu...
19 Subjects: including SAT math, calculus, econometrics, logic
...I've had the pleasure of working with students from pre-school through high school. Prior to that, I worked as an elementary school tutor and high school mentor throughout my years in college.
I enjoy making learning fun and relevant because that is when it is most interesting.
23 Subjects: including SAT math, English, reading, writing
...Regardless of the setting, the subject, or the level of instruction, my goal remains the same: I strive to motivate and inspire students to discover a learning style that will facilitate their
academic growth and success. I value and respect the incredible variety of personalities that I encount...
17 Subjects: including SAT math, English, grammar, literature
...My credentials include over 10 years tutoring experience and over 4 years professional teaching experience. I received 800/800 on the GRE math section and perfect marks on the Praxis I math
section, as well as the Award for Excellence on the Praxis II mathematics content test. I possess clean FBI/criminal history and Child Abuse clearances.
58 Subjects: including SAT math, chemistry, reading, biology
Related Newport, DE Tutors
Newport, DE Accounting Tutors
Newport, DE ACT Tutors
Newport, DE Algebra Tutors
Newport, DE Algebra 2 Tutors
Newport, DE Calculus Tutors
Newport, DE Geometry Tutors
Newport, DE Math Tutors
Newport, DE Prealgebra Tutors
Newport, DE Precalculus Tutors
Newport, DE SAT Tutors
Newport, DE SAT Math Tutors
Newport, DE Science Tutors
Newport, DE Statistics Tutors
Newport, DE Trigonometry Tutors | {"url":"http://www.purplemath.com/Newport_DE_SAT_Math_tutors.php","timestamp":"2014-04-20T23:33:16Z","content_type":null,"content_length":"24079","record_id":"<urn:uuid:765b1eee-b33a-4ff0-8a67-f7822539f3de>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need help finding formulas for f o g and g o f?
July 20th 2010, 12:36 PM #1
Jul 2010
Need help finding formulas for f o g and g o f?
I stumbled upon this question in my text book :
Let f and g be polynomials defined by f(x)=x-1 and g(x)=x^2 -1.
Find formulas for f o g and g o f.
I dont fully understand what they mean by finding a formula.
so we know f o g = (x^2-1)-1 = x^2-2
and gof =(x-1)^2 -1= x^2-2x
Soo how exactaly do we find a formula?
As far as I know, you've already found them.
composition of mappings $f : X \to Y$ and $g : Y \to Z$ is the mapping $h = f \circ g : X \to Z$ defined by:
so i think should be :
Ohh haha i thought the question would be harder. Thanks
hmmm... sorry but now i'm confused. First I thought that i forgot definition of composition of mappings, so I double check it in few books I jused to learn from... And I didn't forgot it. Maybe
my books are wrong ? I doubt that because authors are OK... not some "funny" people. Hehehehe
i forgot one line... but it's the same after defined by :
(for every $x \in X$): $h(x) = g(f(x))$
hmmm... sorry but now i'm confused. First I thought that i forgot definition of composition of mappings, so I double check it in few books I jused to learn from... And I didn't forgot it. Maybe
my books are wrong ? I doubt that because authors are OK... not some "funny" people. Hehehehe
You are probably wrong somewhere because indeed, composite functions are defined as $(f \circ g)(x) = f(g(x))$.
Last edited by mr fantastic; July 21st 2010 at 02:34 AM.
let's look at it this way...
By mapping (or function ) f , pile X to pile Y we imply every method (algorithm or procedure or . . . ) by which for every element $x \in X$ associates one and only one element $y \in Y$.
That's definition of mapping...
Let we say that X,Y,Z were not empty piles and that we have functions $f : X \to Y$ and $g: Y \to Z$. Now meaning that f will map X to Y (and f is function of X) $y = f(x)$ and g now will map Y
to Z (and g is function of y) meaning $g(y) =g(f(x))$
so when u have $(f \circ g)(x) = g(f(x))$
and that way u'll have maping $h=(f \circ g) : X \to Z$
defined for every $x \in X$
where do u think i'm wrong ?
P.S. sorry if I'm boring with this but I'm confused
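For what it is worth, the disagreement in this thread is purely notational: under the common convention $(f \circ g)(x) = f(g(x))$ the formulas in the original post are correct, while some textbooks (composing diagrammatically, left to right) define $f \circ g$ as $g(f(x))$. A quick numeric check in Python (a sketch, not from the thread):

f = lambda x: x - 1        # f(x) = x - 1
g = lambda x: x**2 - 1     # g(x) = x^2 - 1

fog = lambda x: f(g(x))    # common convention: apply g first, then f
gof = lambda x: g(f(x))

for x in range(-3, 4):
    assert fog(x) == x**2 - 2      # (f o g)(x) = (x^2 - 1) - 1
    assert gof(x) == x**2 - 2*x    # (g o f)(x) = (x - 1)^2 - 1
print("formulas verified on sample inputs")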
| {"url":"http://mathhelpforum.com/pre-calculus/151504-need-help-finding-formulas-f-o-g-g-o-f.html","timestamp":"2014-04-21T07:35:03Z","content_type":null,"content_length":"56380","record_id":"<urn:uuid:55e669f2-6e09-4a8a-8e3d-3b688ee0b2ea>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
c^2-2c+1-9f^2 - WyzAnt Answers
Ashley- what are you supposed to do with this equation? i.e. does it ask you to solve for f, factor, graph, etc. ?
I don't know where to start with this either. What was the prompt for all of the problems in the section this was under?
Tutors, please sign in to answer this question.
1 Answer
Since there is no equal sign in this polynomial, it must be treated as an expression, so I assume the objective in this exercise is to factor it.
If factoring is the objective in this exercise, consider there are two areas of interest in this expression:
=> First, c^2 - 2c + 1 is a perfect square trinomial, so it is factored this way:
c^2 - 2c + 1 = c^2 - c - c + 1
= c(c - 1) - 1(c - 1)
= (c - 1)(c - 1)
= (c - 1)^2
Note: The above trinomial was factored by grouping.
Second, the term 9f ^2 can be written as (3f)^2 knowing that both 9 and f^ 2 are squares.
With both areas of interest being covered, the original expression can be rewritten as (c - 1)^2 - (3f )^2.
Notice that the rewritten expression has the following notation: x^2 - y^2, which is a difference of squares. In this case, x = c - 1 and y = 3f.
Since, for any difference of squares, x^2 - y^2 = (x - y)(x + y), the expression
(c - 1)^2 - (3f)^2 = [(c - 1) - 3f] [(c - 1) + 3f].
The use of brackets was needed to distinguish one factor from the other. | {"url":"http://www.wyzant.com/resources/answers/1179/c_2_2c_1_9f_2","timestamp":"2014-04-19T06:53:34Z","content_type":null,"content_length":"45031","record_id":"<urn:uuid:7d82afbd-231a-4d7a-858b-153e9f78927b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
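A quick machine check of the factoring above, a sketch assuming SymPy is available (variable names mirror the problem):

from sympy import symbols, factor, expand

c, f = symbols('c f')
expr = c**2 - 2*c + 1 - 9*f**2

print(factor(expr))   # (c - 3*f - 1)*(c + 3*f - 1), up to ordering
assert expand((c - 1 - 3*f) * (c - 1 + 3*f)) == expand(expr)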
Puzzle 1
The Problem
Some time ago the following puzzle was posed in the newsgroup sci.math.symbolic.
Let E, F, H, N, O, R, T, U, and W be unique, positive, single-digit integers. They fulfill the following conditions:
ONE must be a square, and
TWO or THREE or FOUR must be a square, and
ONE+1 or TWO+1 or THREE+1 or FOUR+1 must be a square, and
ONE+2 or TWO+2 or THREE+2 or FOUR+2 must be a square.
In this puzzle the expression "THREE+1" is defined as the sum
Determine E, F, H, N, O, R, T, U, and W.
The Solution
A brute force solution would be to check for all possible values of E, F, H, N, O, R, T, U, and W (each ranging independently from 0 to 9) to fulfill the conditions. This would mean to check nearly
one billion cases. Although certainly doable on a 1999-vintage computer, it is not a solution that can be efficiently generalized to more variables. In the following, we will implement a solution
that is not specifically tailored to this problem, but which can be used to solve any problem of this kind.
Let us first calculate all possible sets of conditions arising from the puzzle. Every row in the following table is one possibility for the four conditions to be fulfilled.
These are 48 possible combinations.
The function allSquares generates all square numbers between imin and imax.
Here is an example of all nonnegative integer square numbers between 0 and 1234.
Given an expression of the form word + integer, the function allPossibleDigitRealizations generates all possible realizations for the digits, taking into account the condition that every letter
represents a different digit. The result of allPossibleDigitRealizations is a list of possible realizations in the form of replacement rules.
Here is an example.
allPossibleDigitRealizations[OON + 4]
Both realizations for the letters give a square.
The function compatibleDigitRealizations calculates the compatible letter realizations for two given lists of letter realizations. We use the HoldAll attribute here to keep the arguments from being evaluated in case one argument is empty, to avoid unnecessary computations.
Let us again look at an example. Here are the possible digit realizations for one+1 to be a square or two+3 to be a square.
The following 14-digit realizations are the compatible realizations.
Now we can take a set of conditions from allConditionCombinations and recursively determine the compatible digit realizations. Here, the 30th row of allConditionCombinations chooses
We get a nontrivial solution. Now let us check all other combinations of allConditionCombinations, too.
No other solutions have been found. (The last evaluation effectively searched the entire solution space of the puzzle in less than a second, on a 1999-vintage computer.) Let us finish by substituting
the calculated result into the original condition to check that the solution is OK.
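For readers without Mathematica, a hedged brute-force sketch in Python covers the same search (all names here are mine). Restricting the nine letters to a permutation of 1-9 collapses the naive billion-case search to 9! = 362,880 candidates, which runs in well under a second:

from itertools import permutations
from math import isqrt

def value(word, d):
    # numeric value of a letter word under the digit assignment d
    n = 0
    for ch in word:
        n = 10 * n + d[ch]
    return n

def is_square(n):
    r = isqrt(n)
    return r * r == n

LETTERS = "EFHNORTUW"
WORDS = ["ONE", "TWO", "THREE", "FOUR"]

solutions = []
for digits in permutations(range(1, 10)):        # unique positive digits
    d = dict(zip(LETTERS, digits))
    vals = [value(w, d) for w in WORDS]
    if (is_square(vals[0])                        # ONE is a square
            and any(is_square(v) for v in vals[1:])   # TWO/THREE/FOUR
            and any(is_square(v + 1) for v in vals)
            and any(is_square(v + 2) for v in vals)):
        solutions.append(d)

print(len(solutions), solutions)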
Converted by Mathematica October 5, 1999 [Next Page] | {"url":"http://www.mathematica-journal.com/issue/v7i3/columns/trott/contents/html/Links/index_lnk_1.html","timestamp":"2014-04-16T14:14:22Z","content_type":null,"content_length":"20837","record_id":"<urn:uuid:2864c33a-d9d2-488c-802a-7d4515b84fd2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Category Archives: Math
Via the Seriously, Science? blog comes what looks like a pretty bad paper: Is poker a game of skill or chance? A quasi-experimental study Gerhard Meyer, Marc von Meduna, Tim Brosowski, Tobias Hayer
Due to intensive marketing and the rapid … Continue reading
Chances are you’ve seen Conway’s Game of Life, the checkerboard cellular automaton featuring stable structures, replicators, and all sorts of cool designs. (Plenty of implementations available
online.) It’s called “life” because the processes of movement and evolution bear some tangential … Continue reading
A short two-person dance, with a twist. Or more accurately, a shear: time is remapped so that there is a delay that increases as you move from the top of the frame down to the bottom. Or in math:
(x’, … Continue reading
This is an actual TV show in the UK (based on a Japanese program), broadcast on a channel called Dave. In it, Dara O Briain and mathematician Marcus du Sautoy, along with special comedy guests, take
on math puzzles (and … Continue reading
From Barry Greenstein’s insightful poker book, Ace on the River: Someone shows you a coin with a head and a tail on it. You watch him flip it ten times and all ten times it comes up heads. What is …
Continue reading
Yes, I know, I’m not very good at this hiatus thing. But there is important news that needs to be promulgated widely — the news of calculus. No more will innocent citizens cower in fear at the
thought of derivatives … Continue reading
Ah, not this one again. The folks at Iglu Cruises have put together a helpful infographic to explain various features of the Gulf of Mexico oil spill (via Deep Sea News). Here’s the bit where they
compare the recent spill … Continue reading
Here’s a fun logic puzzle (see also here; originally found here). There’s a family resemblance to the Monty Hall problem, but the basic ideas are pretty distinct. An eccentric benefactor holds two
envelopes, and explains to you that they each … Continue reading
Physicists and mathematicians who think about higher-dimensional spaces are, if they allow their interest to somehow become public knowledge, inevitably asked: “How can you visualize more than three
dimensions of space?” There are at least three correct answers: (1) You … Continue reading
I sometimes forget that we don’t all read the same blogs, and that it’s good to recommend some of the fun stuff out there on the internets. So let me give a shout-out to Matt Springer at Built on
Facts, … Continue reading | {"url":"http://www.preposterousuniverse.com/blog/category/math/","timestamp":"2014-04-18T15:52:33Z","content_type":null,"content_length":"67860","record_id":"<urn:uuid:d618dd41-2942-47b2-bb92-631efd60e9c2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
San Gregorio Algebra 2 Tutor
Find a San Gregorio Algebra 2 Tutor
...My AP students solve practice questions from previous exams created by CollegeBoard on each topic. AP chemistry is usually an easier course to approach due to the plethora of information
available online and the types of questions on the test administered by CollegeBoard. The SAT chemistry subj...
18 Subjects: including algebra 2, chemistry, physics, calculus
...Although I am studying to be a doctor of chiropractic, I used to study quite a different field in undergrad. For my undergraduate degree I studied at the University of California, Berkeley
where I majored in Physical Sciences field major. It included everything from astrophysics, physical chemistry, and mathematics.
17 Subjects: including algebra 2, chemistry, calculus, physics
...My credential is in mathematics, science and English, so I have a broad base of knowledge in most academic subjects. I have been tutoring students to prepare to take other tests, such as the
SATs, the GRE and the GMAT. I administer the ISEE for students wishing to enter private school in the Bay Area, so I am very familiar with the test.
35 Subjects: including algebra 2, English, reading, physics
...Tutorial Sessions are offered in two hour blocks. Early morning sessions between 5:00 AM & 7:00 AM and afternoon sessions between 4:00PM & 8:00PM are available.I have been teaching physics at
Los Altos High School for over 15 years. I have a BS in Physics from UCSC.
4 Subjects: including algebra 2, physics, algebra 1, precalculus
...I also had other philosophy classes that taught logic and feel comfortable teaching the subject. I tutored for symbolic logic, via the introduction to advanced math class that was a requirement
for the math major. I've written many proofs both in math and philosophy and taught how to do so.
35 Subjects: including algebra 2, reading, calculus, geometry | {"url":"http://www.purplemath.com/san_gregorio_algebra_2_tutors.php","timestamp":"2014-04-19T17:41:00Z","content_type":null,"content_length":"24114","record_id":"<urn:uuid:27370fdb-05cf-4e13-a51b-9ea64ed027f6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of guide way
A mechanical linkage is a series of rigid links connected with joints to form a closed chain, or a series of closed chains. Each link has two or more joints, and the joints have various degrees of
freedom to allow motion between the links. A linkage is called a mechanism if two or more links are movable with respect to a fixed link. Mechanical linkages are usually designed to take an input and
produce a different output, altering the motion, velocity, acceleration, and applying mechanical advantage.
A linkage designed to be stationary is called a structure.
Mechanical linkages are a fundamental part of machine design, and yet many simple linkages were not well understood nor invented until the 19^th century. Consider a stick: it has six degrees of
freedom, three of which are the coordinates of its centre in space, the other three describing its rotation. Once nudged between a boulder and fulcrum it is constrained to a particular motion, to act
as a lever to move the boulder. When more links are added and joined in various ways their collective motion can be further defined. Very complicated and precise motions can be designed into a
linkage with only a few parts.
The Industrial Revolution was the golden age of mechanical linkages. Mathematical, engineering and manufacturing advances provided both the need and the ability to create new mechanisms. Many simple
mechanisms that seem obvious today required some of the greatest minds of the era to create. Leonhard Euler was one of the first mathematicians to study linkage synthesis, and James Watt worked very
hard to invent the Watt linkage to support his steam engine's piston. Chebyshev worked on mechanical linkage design for over thirty years, which led to his work on polynomials [2]. New linkage
inventions, designed by need, were instrumental in cloth making, power conversion and speed regulation. Even the ability of a mechanism to produce accurate linear motion, without a reference guide
way, took years to solve.
Scientists, mostly German, Russian and English, have researched this domain over the last 200 years, so that today most traditional analysis or synthesis problems (e.g. planar movement) have been
solved (see online libraries under External links). Recently, compliant structures have come to the fore.
Electronic technology has replaced many linkage applications taken for granted today, such as mechanical computation, typewriting and machining. However, modern linkage design continues to advance,
and designs that used to occupy an engineer for days are now optimized with a computer in seconds.
Even though servomechanisms with digital control are common, and at first glance easy to use, some motion problems (especially for quick and accurate movements) are still only soluble using linkages
and cams.
The most common linkages have one degree of freedom, meaning that there is one input motion that produces one output motion. Most linkages are also planar, meaning all the motion takes place in one
plane. Spatial linkages (non-planar) are more difficult to design and therefore not as common.
Kutzbach-Gruebler's equation is used to calculate the degrees of freedom of linkages. The number of degrees of freedom of a linkage is also called its mobility.
A simplified version of the Kutzbach-Gruebler's equation for planar linkages :
$m = 3(n-1) - 2j$
$m$ = mobility = degrees of freedom
$n$ = number of links (including a single ground link)
$j$ = number of one-degree-of-freedom kinematic pairs (pin or slider joints)
A more general form of the Kutzbach-Gruebler equation for planar linkages involving more complex joints:
$m = 3(n-j-1) + \sum_{i=1}^{j} f_i$
Or, for spatial linkages (linkages involving 3D motion):
$m = 6(n-j-1) + \sum_{i=1}^{j} f_i$
$m$ = mobility (degrees of freedom)
$n$ = number of links (including a single ground link)
$j$ = number of total joints, regardless of connectivity or degree-of-freedom
$\sum_{i=1}^{j} f_i$ = sum of each joint's individual degrees of freedom
The mobility of hydraulic machinery can easily be identified by counting the number of independently controlled hydraulic cylinders.
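The two counting formulas translate directly into code; a small illustrative sketch (Python, names are mine), with the classic sanity check that a four-bar linkage (four links including ground, four pin joints) has one degree of freedom:

def planar_mobility(n_links, one_dof_joints):
    # simplified Kutzbach-Gruebler for planar linkages: m = 3(n - 1) - 2j
    return 3 * (n_links - 1) - 2 * one_dof_joints

def spatial_mobility(n_links, joint_freedoms):
    # general spatial form: m = 6(n - j - 1) + sum of each joint's DOF
    j = len(joint_freedoms)
    return 6 * (n_links - j - 1) + sum(joint_freedoms)

print(planar_mobility(4, 4))   # four-bar linkage -> 1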
Types of common joints:
• Revolute or pin, one DOF rotation. Examples are; bushings, bearings, bolted joints, rivets and hinges.
• Prismatic or slider, one or two DOF linear motion. Examples are; linear bearings, hydraulic cylinders, rollers and pistons.
• Spherical or ball and socket, three DOF rotation, usually restricted to one DOF by other joints in the mechanism.
Designers will synthesize a linkage by starting with the required output motion, mechanical advantage, velocity and acceleration. A type of linkage is chosen and modified to deliver the required
Each link is treated as a vector and the vectors can be combined into a system of equations because they form a loop. The matrix is solved to create a closed form equation that relates input motion
to output motion. The same is done for mechanical advantage, or any other important quantity. The equations of motion are differentiated with respect to time to find velocity and acceleration of the
mechanism parts.
Types of linkages
Four bar linkages are the simplest closed loop kinematic linkage. They perform a wide variety of motions with a few simple parts. They were also popular in the past due to the ease of calculations, prior to computers,
compared to more complicated mechanisms.
Other notable types of linkages;
• Pantograph (four-bar, two DOF)
• Crank-slider, (four-bar, one DOF)
• Grashof, (four-bar, one DOF) At least one link can rotate 360°
• Five bar linkages often have meshing gears for two of the links, creating a one DOF linkage. They can provide greater power transmission with more design flexibility than four bar linkages.
• Six bar, single DOF linkages offer greater design flexibility than four bar linkages, but require more parts and are more difficult to design [3]:
□ Watt kinematic chain
□ Watt I, II
□ Stephenson kinematic chain
□ Stephenson I, II, III
• Parallel and Straight line mechanisms:
Linkages are primarily used as machine components and tools. Typical examples are automotive suspensions and bolt cutters. The internal combustion engine's piston/rod/crank is a classic four-bar
linkage with one degree of freedom. Linkages are often the simplest, least expensive and most efficient mechanism to perform complicated motions.
One highly visible application is the windshield wiper: a four bar linkage changes the motor's rotary motion to oscillation. Some wipers also have a second set of four bar linkages to keep the wiper
blades oriented correctly as they sweep. Another visible application is heavy equipment which makes extensive use of four and six bar linkages.
Spatial linkages are becoming more common due to computer aided design.
"The 4-Bar Linkage" is an adapted mechanical linkage used on bicycles. With a normal full-suspension bike the back wheel moves in a very tight arc shape. This means that more power is lost when going
uphill. With a bike fitted with a 4-Bar Linkage, the wheel moves in such a large arc that it is moving almost vertically. This way the power loss is reduced by up to 30%.
1. Erdman, Arthur G.; Sandor, George N. (1984). Mechanism Design: Analysis and Synthesis. Prentice-Hall. ISBN 0-13-572396-5.
| {"url":"http://www.reference.com/browse/guide+way","timestamp":"2014-04-20T21:11:33Z","content_type":null,"content_length":"88113","record_id":"<urn:uuid:39cdeaae-2094-4e91-b5af-6c8003a6c5ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stochastic discounting in repeated games: Awaiting the almost inevitable
Barlo, Mehmet and Urgun, Can (2011): Stochastic discounting in repeated games: Awaiting the almost inevitable.
Download (244Kb) | Preview
This paper studies repeated games with pure strategies and stochastic discounting under perfect information. We consider infinite repetitions of any finite normal form game possessing at least one
pure Nash action profile. The period interaction realizes a shock in each period, and the cumulative shocks while not affecting period returns, determine the probability of the continuation of the
game. We require cumulative shocks to satisfy the following: (1) Markov property; (2) to have a non-negative (across time) covariance matrix; (3) to have bounded increments (across time) and possess
a denumerable state space with a rich ergodic subset; (4) there are states of the stochastic process with the resulting stochastic discount factor arbitrarily close to 0, and such states can be
reached with positive (yet possibly arbitrarily small) probability in the long run. In our study, a player's discount factor is a mapping from the state space to (0,1) satisfying the martingale property.
In this setting, we not only establish the (subgame perfect) folk theorem, but also prove the main result of this study: In any equilibrium path, the occurrence of any finite number of consecutive repetitions of the period Nash action profile must almost surely happen within a finite time window. That is, any equilibrium strategy almost surely contains arbitrarily long realizations of consecutive period Nash action profiles.
Item Type: MPRA Paper
Original Title: Stochastic discounting in repeated games: Awaiting the almost inevitable
English Title: Stochastic discounting in repeated games: Awaiting the almost inevitable
Language: English
Keywords: Repeated Games; Stochastic Discounting; Stochastic Games; Folk Theorem; Stopping Time
Subjects: C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C72 - Noncooperative Games
C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C73 - Stochastic and Dynamic Games; Evolutionary Games; Repeated Games
C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C79 - Other
Item ID: 28537
Depositing User: Mehmet Barlo
Date Deposited: 02. Feb 2011 02:20
Last Modified: 19. Feb 2013 07:47
References:
Abreu, D. (1988): "On the Theory of Infinitely Repeated Games with Discounting," Econometrica, 56, 383–396.
Abreu, D., D. Pearce, and E. Stachetti (1990): “Toward a Theory of Discounted Repeated Games with Imperfect Monitoring,” Econometrica, 58(5), 1041–1063
Aumann, R., and L. Shapley (1994): “Long-Term Competition – A Game-Theoretic Analysis,” in Essays in Game Theory in Honor of Michael Maschler, ed. by N. Megiddo. Springer-Verlag, New
Barlo, M., G. Carmona, and H. Sabourian (2007): “Bounded Memory with Finite Action Spaces,” Sabancı University, Universidade Nova de Lisboa and University of Cambridge.
Barlo, M., G. Carmona, and H. Sabourian (2009): “Repeated Games with One – Memory,” Journal of Economic Theory, 144, 312–336.
Baye, M., and D. W. Jansen (1996): “Repeated Games with Stochastic Discounting,” Economica, 63(252), 531–541.
Dutta, P. (1995): “A Folk Theorem for Stochastic Games,” Journal of Economic Theory, 66, 1–32.
Feller, W. (1950): An Introduction to Probability Theory and Its Applications Volume I. John Wiley and Sons, 3rd edn.
Fudenberg, D., D. Levine, and E. Maskin (1994): “The Folk Theorem with Imperfect Public Information,” Econometrica, 62(5), 997–1039.
Fudenberg, D., and E. Maskin (1986): “The Folk Theorem in Repeated Games with Discounting or with Incomplete Information,” Econometrica, 54, 533–554.
Fudenberg, D., and E. Maskin (1991): “On the Dispensability of Public Randomization in Discounted Repeated Games,” Journal of Economic Theory, 53, 428–438.
Fudenberg, D., and Y. Yamamato (2010): “The Folk Theorem for Irreducible Stochastic Games with Imperfect Public Monitoring,” Harvard University.
Hansen, L., and S. Richard (1987): "The Role of Conditioning Information in Deducing Testable Restrictions Implied by Dynamic Asset Pricing Models," Econometrica, 55, 587–614.
Harrison, J. M., and D. Kreps (1979): “Martingale and Arbitrage in Multi-Period Securities Markets,” Journal of Economic Theory, 20, 381–408.
Horner, J., and W. Olszewski (2006): “The Folk Theorem for Games with Private Almost-Perfect Monitoring,” Econometrica, 74, 1499–1544.
Horner, J., T. Sugaya, S. Takahashi, and N. Vieille (2010): “Recursive Methods in Discounted Stochastic Games: An Algorithm for ! 1 and a Folk Theorem,” Yale University, Princeton
University, Princeton University, and HEC.
Kalai, E., and W. Stanford (1988): “Finite Rationality and Interpersonal Complexity in Repeated Games,” Econometrica, 56, 397–410.
Karlin, S., and H. M. Taylor (1975): A First Course in Stochastic Processes. Academic Press, 2 edn.
Mailath, G., and W. Olszewski (2008): “Folk Theorems with Bounded Recall under (Almost) Perfect Monitoring,” University of Pennsylvania and Northwestern University.
Osborne, M., and A. Rubinstein (1994): A Course in Game Theory. MIT Press, Cambridge.
Ross, S. A. (1976): “The Arbitrage Theory of Capital Asset Pricing,” Journal of Economic Theory, 13, 341–360.
Rubinstein, A. (1982): “Perfect Equilibrium in a Bargaining Model,” Econometrica, 50(1), 97–109.
Sabourian, H. (1998): “Repeated Games with M-period Bounded Memory (Pure Strategies),” Journal of Mathematical Economics, 30, 1–35.
Sugaya, T. (2010): “Characterizing the Limit Set of PPE Payoffs with Unequal Discounting,” Princeton University.
Willams, D. (1991): Probability with Martingales. Cambridge University Press.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/28537
Available Versions of this Item
• Stochastic discounting in repeated games: Awaiting the almost inevitable. (deposited 02. Feb 2011 02:20) [Currently Displayed] | {"url":"http://mpra.ub.uni-muenchen.de/28537/","timestamp":"2014-04-18T06:28:06Z","content_type":null,"content_length":"28286","record_id":"<urn:uuid:27f78fe8-cee3-475c-b3cf-6a113746f0fe>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
What are the formulas for Dinitrogen pentoxide, Alluminum sulfide, and iron 2 nitride
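(A hedged answer: dinitrogen pentoxide is N2O5, straight from the prefixes; aluminum sulfide balances Al3+ against S2- to give Al2S3; and iron(II) nitride balances Fe2+ against N3- to give Fe3N2.)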
| {"url":"http://openstudy.com/updates/50a4012de4b0f1696c136c43","timestamp":"2014-04-17T18:27:51Z","content_type":null,"content_length":"89506","record_id":"<urn:uuid:de550e8e-ab0d-4c63-9fc6-91b4b6345635>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
Difficult Differential Equation System
August 8th 2009, 05:17 PM #1
Junior Member
Sep 2008
Difficult Differential Equation System
The following is a model for HIV.
Infected cells = T
Concentration of viral particles = V
$\dot{T} = 0.06V - \frac{T}{2} , \dot{V} = 100T - cV$
where $c > 0$ is the rate constant for viral clearance.
1) Show that there are 2 possible types of critical points at the origin and one dividing case, and state the values of c which correspond to each case. In each case clearly state the stability
characteristics of the critical point at the origin.
2) If c = 5.5 what will the phase portrait of the system look like. (teacher said to look at the phase portrait non-uniformly, say take (-0.01, 0.01) * (-1,-1)).
3) Under treatment, the model changes to
$\dot{T} = 0.06V - \frac{T}{3}, \dot{V} = -cV$
What are the possible types of critical point at the origin in this model?
4) Introducing a new variable, $V_N$ which is the non-infected particles. The new system is
$\dot{T} = 0.06V - \frac{T}{3}, \dot{V} = -cV , \dot{V_N} = 100T - cV_N$
Prove that the eigenvalues of the linearized matrix are all negative in this case and find out what they are.
Why are the solutions to this not simply exponentials?
I know this is a massive question, so I am going to show my working in a separate post shortly.
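For part 1, a quick numeric sketch (Python/NumPy; my code, not from the thread). The Jacobian at the origin is constant, its determinant is c/2 - 6, and the sign flip at c = 12 is the dividing case: a saddle for c < 12 and a stable node for c > 12 (the discriminant stays positive, so the eigenvalues are always real). At c = 5.5 the eigenvalues come out to 0.5 and -6.5, which is the saddle behind the phase portrait in part 2.

import numpy as np

def origin_eigenvalues(c):
    # untreated model: T' = 0.06*V - T/2,  V' = 100*T - c*V
    J = np.array([[-0.5, 0.06],
                  [100.0, -c]])
    return np.linalg.eigvals(J), np.linalg.det(J)

for c in (5.5, 12.0, 20.0):
    eig, det = origin_eigenvalues(c)
    kind = "saddle" if det < 0 else "stable node (or borderline at det = 0)"
    print(c, eig, kind)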
| {"url":"http://mathhelpforum.com/differential-equations/97387-difficult-differential-equation-system.html","timestamp":"2014-04-19T05:42:40Z","content_type":null,"content_length":"31405","record_id":"<urn:uuid:80274c21-dd9d-476a-8422-b788370bebe9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability of a Straight Flush
Date: 5/15/96 at 14:44:22
From: Art Mabbott
Subject: Probability Problem
My high school math topics class is trying to work through a
probability problem involving counting and trees. The question
involves finding the probability of drawing 5 spades, that is
P(a flush) = (13*12*11*10*9)/(52*51*50*49*48)
P(flush of any suit) = 4 P(a flush)
The question is, what is the P(Straight flush) = ? (knowing that
there are only 10 ways in each suit to draw a straight flush).
Art Mabbott
Date: 5/17/96 at 19:0:51
From: Doctor Ken
Subject: Re: Probability Problem
Something that will help in these problems is the "choose" formula.
If you want to know the number of ways you can choose 5 cards from a
deck of 52 cards, when the order of the cards doesn't matter, then the
answer is 52 choose 5, which is
52!/(5!(52-5)!) = (52*51*50*49*48)/(5*4*3*2*1).
So that's the total number of different hands you could be dealt.
Since you've already figured out how many different ways you can get a
straight flush (actually, are you sure it's 10 in each suit? I get 9,
because A2345 doesn't count, right?), you can just divide the number
of straight flushes by the number of possible hands to get the
probability you're dealt a straight flush.
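A quick check of this computation (Python; whether each suit has 9 or 10 straight flushes depends on whether the ace may play low and whether royal flushes are counted separately; the sketch uses 10):

from math import comb

hands = comb(52, 5)          # 2,598,960 possible five-card hands
straight_flushes = 4 * 10    # 10 per suit, counting A-2-3-4-5 and the royal
p = straight_flushes / hands
print(hands, p)              # about 1.54e-05, roughly 1 in 64,974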
-Doctor Ken, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/56525.html","timestamp":"2014-04-20T03:22:03Z","content_type":null,"content_length":"6423","record_id":"<urn:uuid:eba37389-497e-4103-a80b-c07b71be471d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is Everyday Math The Worst Math Program Ever?
If you’re the parent of an elementary school-age child, chances are good (OK, about one in ten, although actual numbers are hard to pin down) that you’ve encountered Everyday Math. Chances are
reasonably good that you hate it. The math program, developed by educators at the University of Chicago, strikes some parents so very much the wrong way that a Facebook page exists for them.
Mathematicians are not very fond of it either, with one rather famously (in Everyday Math hater circles) saying that the program “fails to develop the standard algorithms of arithmetic to support
California’s requirements for student proficiency in later grades.” When the program entered the national scene as National Science Foundation-supported curriculum, the government got an earful on
that from mathematicians and scientists who strongly disapproved.
Some readers might remember whole language, adopted as a reform movement for teaching reading and writing and a counterpoint to phonics-based methods of teaching. Phonics is boring, the complaints
went, and ruins learning for children. As a teacher who has seen a degradation of student classroom writing from the 1990s to today, I’ll say that my anecdotal observations suggest that sometimes,
wading through boring means reaching the deep waters of mastery. Schools now seem to be shifting back to phonics-based learning to the point that my seven-year-old now comes home excited about new
phonemes he’s learned. He’s less excited about his math program, though.
Everyday Math is another form of rebellion, in this case against what people who seem to dislike numbers think of as “boring”: drills, algorithms, practice … all the things that work memory and
executive function mental muscle while a child learns math. But they’re borrrrrring.
Enter the hodgepodge of confusion known as Everyday Math with its fact triangles and function machines and other vague jargon that no parent understands. Ask me what my second grader or my fifth
grader is learning in math right now. Go ahead.
I can’t answer you because it’s Tuesday, and they’re using Everyday Math at their school. So yesterday, one of them was learning percents and the other one cut out some triangles for reasons that
escape me, but today, it could be anything from estimation to telling time. That’s because the “everyday” in Everyday Math seems to mean “learn something new every day before having a chance to lock
down the new thing you learned yesterday.”
Everyday Math is drill free. It’s jargon full. Complaints are widespread that it is confusing for parents and children. And it doesn’t build on concepts or scaffold understanding. It has children
learn 2 plus 2 in 500 different ways, many of which involve answering questions like, “How did Tanya add two plus two?” Um, with her brain? The program refers to the path the students trace through
this maze as “spiraling.” But to where does the spiral lead?
How effective or ineffective is this new math, this famously fuzzy math? I’m glad you asked.
Andy Isaacs made the case for Everyday Math here [updated], although some of the assertions are outdated; for example, he wrote in 2009 that Everyday Math is the only elementary school program with a
“potentially positive effect” by the What Works Clearinghouse, but Saxon math currently also has that rating, with better evidence. Isaacs, who leads development of the program, is also cited in a
2012 Chicago Tribune story on Everyday Math:
The traditional way of learning math follows a formal sequence of learning that began with addition, followed by subtraction, and stresses mastery of the traditional math algorithms over their
meaning. In contrast, Everyday Math teaches children that there are many ways to get to the same answer, Isaacs said.
Two plus two is four. How many ways are there to get to that answer? Evidently, expending household goods and using cubes is one way, but what about that giant calculator that’s always in our heads,
the one that we’ll have to rely on when there aren’t any M&Ms around to help us solve 2+2?
This program comes across like a curriculum written and adopted by people who don’t like math, who think that an understanding of processes for the sake of grasping how to understand processes isn’t
a worthy endeavor on its own. Who think that drills and algorithms aren’t math for the new century, never mind the fact that success in this new century requires more agility with rapid, structured
mental calculation and algorithms — not necessarily using numbers — than ever before.
What is Everyday Math like on the ground? One elementary school teacher’s experience reflects that of many educators with whom I’ve spoken, although not all of them have viscerally detested the
program as much; indeed, teachers who don’t seem very math oriented have liked it a lot. As elementary math instructor Matthew Clavel writes
The curriculum’s failure was undeniable: not one of my students knew his or her times tables, and few had mastered even the most basic operations; knowledge of multiplication and division was
abysmal. Perhaps you think I shouldn’t have rejected a course of learning without giving it a full year (my school had only recently hired me as a 23-year-old Teach for America corps member). But
what would you do, if you discovered that none of your fourth graders could correctly tell you the answer to four times eight?
Clavel notes that the curriculum might pass muster in a “wealthy suburban classroom,” but that its demands—from burning through expendable goods to intense parental involvement—“obscure the realities
of inner-city life.”
Anecdotes aside, how do students who’ve been saddled with curriculum end up performing? California adopted the program in the early ‘90s and watched math scores bottom out in the ensuing years,
Clavel says. The US, where Everyday Math has been taught to millions of public school children for years, continues to lag behind its developed world counterparts in math achievement, although its
placement has improved compared to 2003 stats. Maybe Everyday Math isn’t the evening-wrecking demon that many parents consider it, but it’s clearly not lifting up the nation, either. Indeed, the most
comprehensive review I could find on follow-up studies of the effectiveness of this curriculum concluded that it seemed to make no real difference (PDF). These studies did not include surveys of
parent attitudes. [ETA] I think that the authors’ conclusions are worth quoting here, and I agree with them:
The findings of this review suggest that educators as well as researchers might do well to focus more on how mathematics is taught, rather than expecting that choosing one or another textbook by
itself will move their students forward.
At its core, the Everyday Math curriculum fails in three critical ways not directly related to math. First, it buys into this notion that all learning must be fun and engaging. How valid is that idea? Is it not just possible that some learning can be perhaps less than scintillating but still useful, like PBS News Hour? Is it not possible that the very work of working at learning is, in itself,
an important lesson about living in the real world? Little humans need to learn what responsibility is as much as they need to understand 2+2. Part of that lesson is understanding that responsibility
means working at something, for something, as a means to an end because it’s necessary, because we have to, because it is important. Even if sometimes, it can be a slog.
Second, the program completely overlooks the need for the human mind to systematize and to learn to systematize. This innate requirement is one reason my youngest son likes phonemes. It’s the reason
we want labels and categories for everything and why taxonomy and the DSM-5 exist. It’s why I’ve numbered this list of reasons. I’d argue that little minds need frameworks—yea, verily, algorithms—as
starting points for logical reasoning. When they’ve built those frameworks solidly enough, then they can fill them in and expand them outward in any directions human creativity can take them.
Meanwhile, they have a frame of reference to generalize to new scenarios.
But the basic structure must be in place, and Everyday Math deprives learners of that, giving them instead a spiral that never forms lateral connections to solidify the structure. This fuzzy approach
to math can be spectacularly bad for children like my oldest, who is on the autism spectrum. He needs repetition and reinforcement to address his executive function deficits, not a dizzying spiral
from one imprecise estimation to another. And that takes me to my third critique of the program: For other learners, such as my very concrete-thinking middle and youngest sons, Everyday Math is an
enormous failure. If its “real world” approach had anything to do with their real world–like, say, creatively incorporating Minecraft–they’d love it. But they detest its demands for estimation and
ballparking and fooling around with cubes when a simple calculation is so much more obvious, accurate, and precise. My children like math and play math games at home for entertainment. But they hate
Everyday Math, every day.
• Wonderful Article!! Thank you so much for such an excellent, well thought, overview of the problems with Everyday Math. I appreciate you getting the word out there!
• I think one of the biggest reasons math is being changed is to make it visible in everyday life rather than the abstract ideas this math inherently is. When I was growing up, and when I do math homework with my cousin, we use uncooked macaroni noodles to show how 2+2 always equals 4. (We used to use M&Ms but all the cousin could do was add them to his mouth.)
Once I got into high school it was always easier to learn a concept once I knew how it would be used (compound interest is an example), but regardless of what it was, it ALWAYS came down to the rote memorization acquired in my early years. Like it or not, the basics of math are both boring and relatively abstract. I applaud their attempts at making it real and applicable, but they are
failing to teach that 2+2=4 every time.
• As a parent with a particular interest in math and science education, I am horrified with the current trend against teaching and establishing basic skills. Today’s curriculum has plenty of
language on problem solving and patterns but none of the basic necessities, including times tables and understanding how a calculator actually gets you an answer (long division and multiplication
of large numbers). The ability to recognize sight words is one of the building blocks of reading. Well, times tables are the sight words of the math world. If you need a calculator to do simple math then you will simply not be in a position to recognize patterns and have a sense of numbers. How is a child supposed to figure out the factors of 144 if the child has to randomly choose numbers to see where the number comes from? I taught university-level chemistry and tutored Grade 11 and 12 students for their chemistry provincials, and I can assure you that innumerate children cannot survive in the world of advanced science. From my experience helping students who have been taught under this curriculum, I can state that, without the basic skills, our children are approaching a level of innumeracy that frightens me.
To put it a bit differently, the current curriculum fails to recognize that mathematics is not only about understanding; it is also about skills. Like any skill, mathematics has to be practiced. We
do not expect a professional athlete to win a medal solely by visualizing himself/herself winning. We expect them to put in hours of training to develop the skills and competency. Similarly,
children need to practice their math. That means doing math, without calculators. Learning the methods to come to an answer and then practicing so they won’t forget two weeks later and be unable
to repeat the outcome.
• Everyday Math is more "math appreciation" than anything else. The "fun methods" EM is taught with are something that would be great for an after-school Math Club but are not good for the basics. The
basics should be taught and mastered IN class as they are the foundation for all other math. Both of my sons had EM taught to them in elementary school and every teacher they had supplemented
with basic math. They still used the spiral method, though, so by the time the boys got to middle school, they weren’t quite ready for harder math. Our school system finally dumped EM, but it’s
too late for my boys.
• The “spiral method” was not invented by EM, and in fact is a feature of a program that many of the critics of EM adore: Saxon Math. But it turns out that spiraling is only bad when a progressive
program uses it. When Saxon Math uses it, it’s, well, something else, or it’s okay, by magic. The EM haters are some of the most dishonest people I’ve ever encountered in the world of education.
• Emily Willingham, Contributor
Those are strong words and accusations to make regarding opinions about a math curriculum.
• Michael is absolutely right. They might be “strong words” but they are absolutely accurate. There are people who absolutely despise EM, but much of it has little to do with actual content. For
example, Mr. Clavel in your column papers over his own failures by blaming the program. The fact is, that teachers like Mr. Clavel sabotage EM by completely ignoring the pedagogical specifics of
the program that are integral to its success. The usual excuse for this is rarely philosophical–it’s not that EM pedagogical precepts are considered foreign, it’s that teachers like Mr. Clavel
find them to be “too much work”. It’s like turning on a modern kitchen dishwasher without hooking it up for water and drainage and without putting in detergent. When the machine starts smoking
and the dishes come out still dirty, one could explain the failure by appealing to the tradition, “You just can’t clean dishes like you would by hand–there’s no substitute!” Or, in a simpler
analogy, Mr. Clavel is trying to drive a bus after learning how to get around in a horse buggy. In his mind, it’s the bus’s fault.
• Emily Willingham, Contributor
I take it you have personally sat in Mr. Clavel’s classroom and have thus acquired this information? And I don’t care about how strong the words are. It just makes me wonder what’s left for the
urgent problems in life.
• In reply to Buck
I certainly agree that many people find new pathways of learning and doing confronting, and the stress that this produces often results in a typical psychological response: fight or flight.
Educators (and by that I mean people ranging from doctors, mechanics, engineers and artists to sportspeople who are the genuine masters of their craft) have differing journeys. Most of them, though, have gone beyond that simplistic linear learning.
There’s a lot more to art, as there is to education than paint by the numbers.
• Well, given that you’ve written a transparently-biased critique about EM, you should be familiar and comfortable with strong words. And expect that there will be some strong words offered in
response. As long as words are all that's being exchanged, no one gets hurt. But in my experience, some folks in the Mathematically Correct/NYC-HOLD world aren't satisfied until someone loses a
Today I decided to open a new chapter entitled "Mathematics". This new chapter will be mainly about Algebraic Coding Theory and Algebraic Geometry. If you are a mathematician, then this is not what you are looking for, because I will try to avoid all unnecessary mathematical notation for the sake of a basic understanding. However, if you are just interested in learning some curious facts about mathematics, then this is probably the right place for you.
We all know that $(-2)^2 = 4$ and $2^2 = 4$, but what happens when we want to take the square root of a negative number? Until now, we simply left it as "undefined", since we had no numbers whose squares were negative. Therefore, we couldn't "go backwards" by taking the square root.
Now, you can take the square root of a negative number, but it involves using a new number to do it. This new number was invented around the time of the Reformation. Don’t be surprised! If you think
about it, aren't all numbers inventions? It's not like numbers grow on trees! They live in our heads. Why not invent a new one, as long as it works with what we already have?
Anyway, this new number is called $i$ (stands for "imaginary") and is defined to be:
$i = \sqrt{-1}$
Now, we have a number $i$ that has the property that $i^2 = -1$. Using this, we can now find the square roots of negative numbers in terms of real numbers and $i$. Therefore:
$i^2 = (\sqrt{-1})^2 = -1$
Now, you may think you can do this:
$i^2 = (\sqrt{-1})^2 = \sqrt{(-1)^2} = \sqrt{1} =1$
But you can't! You already have two numbers that square to 1; namely -1 and +1. And $i$ already squares to -1. So it's not reasonable that $i$ would also square to 1. This points out an important
detail: When dealing with imaginaries, you gain something (the ability to deal with negatives inside square roots), but you also lose something (some of the flexibility and convenient rules you used
to have when dealing with square roots). In particular, you must always do the imaginary part first.
• Simplify $\sqrt{-18}$
$\sqrt{-18} = \sqrt{9\cdot 2 \cdot (-1)} = \sqrt{9}\;\sqrt{2}\;\sqrt{-1} = 3\sqrt{2}i$
• Simplify $-\sqrt{-6}$
$-\sqrt{-6} = -\sqrt{6 \cdot (-1)} = -\sqrt{6}\;\sqrt{-1} = -\sqrt{6}i$
• Simplify $i^9$
$i^9 = i^2\;i^2\;i^2\;i^2\;i = (-1)(-1)(-1)(-1)i = i$
Note that we can't simplify more than this.
• Simplify $\sqrt{-49}$
$\sqrt{-49} = \sqrt{-1}\,\sqrt{49} = 7i$
Now notice that the pattern of powers of $i$ forms a cycle: $i^1 = i$, $i^2 = -1$, $i^3 = -i$, $i^4 = 1$, and then it repeats with $i^5 = i$, and so on.
In other words, to calculate any high power of $i$, you can convert it to a lower power by taking the closest multiple of 4 that's no bigger than the exponent and subtracting this multiple from the
exponent. For example, a common trick question on tests is something along the lines of "Simplify $i^{99}$", the idea being that you'll try to multiply $i$ ninety-nine times and you'll run out of
time, and the teachers will get a good giggle at your expense in the faculty lounge. Here's how the shortcut works:
$i^{99} = i^{96+3} = i^{(4\cdot 24)+3} = i^3 = -i$
That is, $i^{99} = i^3$, because you can just lop off the $i^{96}$. (Ninety-six is a multiple of four, so $i^{96}$ is just 1, which you can ignore.) In other words, you can divide the exponent by 4 (using long division), discard the answer, and use only the remainder. This will give you the part of the exponent that you care about. Here are a few more examples:
• Simplify $i^{120}$
$i^{120} = i^{4 \cdot 30} = i^{4\cdot 30 + 0} = i^0 = 1$
• Simplify $i^{64,002}$
$i^{64,002} = i^{64,000 + 2} = i^{4 \cdot 16,000 + 2} = i^2 = -1$
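To see the remainder trick in code, here is a quick sketch in PHP (the function name is made up for illustration; it assumes a non-negative whole-number exponent):
// Reduce i^n using n mod 4; the cycle of values is 1, i, -1, -i.
function powerOfI( $n )
{
    $cycle = array( '1', 'i', '-1', '-i' );
    return $cycle[ $n % 4 ];
}
echo powerOfI( 99 );    // -i
echo powerOfI( 120 );   // 1
echo powerOfI( 64002 ); // -1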
Now you've seen how imaginaries work; it's time to move on to complex numbers. "Complex" numbers have two parts, a "real" part (being any "real" number that you're used to dealing with) and an
"imaginary" part (being any number with an $i$ in it). The "standard" format for complex numbers is $a + bi$; that is, real-part first and $i$-part last.
Operations on Complex Numbers
Unlike real numbers, complex numbers can produce negative numbers when squared; because of this, every nonconstant polynomial has complex roots, even though some lack real roots.
Complex numbers can always be reduced to the form $a + bi$. If there are any terms with higher powers of $i$, you can factor out $i^2$ as many times as you need.
Furthermore, arithmetic on complex numbers obeys the same laws of algebra real numbers do.
$(a + bi) + (c + di) = (a + c) + (b + d)i$
$(a + bi) - (c + di) = (a - c) + (b - d)i$
$(a + bi)(c + di) = ac + adi + bci + bdi^2 = (ac - bd) + (ad + bc)i$
• Simplify $(2 + 3i)(1 - i)$
$(2 + 3i)(1 - i) = 2+3i-2i-3i^2 = 2+i-3(-1) = 5+i$
• Simplify $2i + 3i$
$2i + 3i = (2 + 3)i = 5i$
• Simplify $16i - 5i$
$16i - 5i = (16 - 5)i = 11i$
• Multiply and simplify $(3i)(4i)$
$(3i)(4i) = (3\cdot 4)(i\cdot i) = (12)(i^2) = (12)(-1) = -12$
• Multiply and simplify $(i)(2i)(-3i)$
$(i)(2i)(-3i) = (2 \cdot -3)(i \cdot i \cdot i) = (-6)(i^2 \cdot i) = (-6)(-1 \cdot i) = (-6)(-i) = 6i$
Note this last problem. Within it, you can see that $i^3 = -i$, because $i^2 = -1$.
• Simplify $(2 + 3i) + (1 - 6i)$
$(2 + 3i) + (1 - 6i) = (2 + 1) + (3i - 6i) = 3 + (-3i) = 3 - 3i$
• Simplify $(5 - 2i) - (-4 - i)$
$(5 - 2i) - (-4 - i) = (5 - 2i) - 1(-4 - i) = 5 - 2i - 1(-4) - 1(-i) = 5 - 2i + 4 + i = (5 + 4) + (-2i + i) = 9 - i$
You may find it helpful to insert the "1" in front of the second set of parentheses so you can better keep track of the "minus" being multiplied through the parentheses.
Adding and multiplying complexes isn't too bad. It's when you work with fractions that things turn ugly. Most of the reason for this ugliness is actually arbitrary. Remember back in elementary
school, when you first learned fractions? Your teacher would get her panties in a wad if you used "improper" fractions. For instance, you couldn't say "$\frac{3}{2}$"; you had to convert it to "$1 +
\frac{1}{2}$". But now that you're in algebra, nobody cares, and you've probably noticed that "improper" fractions are often more useful than "mixed" numbers. The issue with complex numbers is that
your professor will get his boxers in a bunch if you leave imaginaries in the denominator. So how do you handle this?
Suppose you have the following exercises:
• Simplify $\frac{3}{2i}$
This is pretty "simple", but they want you to get rid of that $i$ underneath, in the denominator. To do this, you will use the fact that $i^2 = -1$. If you multiply the fraction, top and bottom, by $i$, then the $i$ underneath will vanish in a puff of negativity:
$\frac{3}{2i} = \frac{3}{2i} \cdot \frac{i}{i} = \frac{3i}{2i^2} = \frac{3i}{2\cdot (-1)} = \frac{3i}{-2} = -\frac{3i}{2} = -\frac{3}{2}i$
So the answer is $-\frac{3}{2}i$.
• Simplify $\frac{3}{2+i}$
If you multiply this fraction, top and bottom, by $i$, you'll get:
$\frac{3}{2+i} = \frac{3}{2+i} \cdot \frac{i}{i} = \frac{3i}{2i+i^2} = \frac{3i}{2i-1} = \frac{3i}{-1 + 2i}$
Since you still have an $i$ underneath, this didn't help much. So how do you handle this simplification? You use something called "conjugates". The conjugate of a complex number $z=a + bi$ is the
same number, but with the opposite sign in the middle: $\bar{z}=a - bi$. When you multiply conjugates, you are, in effect, multiplying to create something in the pattern of a difference of squares:
$(a+bi)(a-bi) = a^2 - abi + abi - (bi)^2 = a^2 - b^2(i^2) = a^2 - b^2(-1) = a^2 + b^2$
Note that the $i$'s disappeared, and the final result was a sum of squares. This is what the conjugate is for, and here's how it is used:
$\frac{3}{2+i} = \frac{3}{2+i}\cdot \frac{2-i}{2-i} = \frac{3(2-i)}{(2+i)(2-i)} = \frac{6-3i}{4+1} = \frac{6-3i}{5} = \frac{6}{5}-\frac{3}{5}i$
So the answer is $\frac{6}{5}-\frac{3}{5}i$.
In the last step, note how the fraction was split into two pieces. This is because, technically speaking, a complex number is in two parts, the real part and the $i$ part. They aren't supposed to
"share" the denominator. To be sure your answer is completely correct, split the complex-valued fraction into its two separate terms.
As I explained before, the complex conjugate of the complex number $z = x + yi$ is defined to be $x - yi$, written as $\bar{z}$ or $z^*$. You can imagine $\bar{z}$ to be the "reflection" of $z$ about the
real axis. Therefore, both $z+\bar{z}$ and $z\cdot\bar{z}$ are real numbers.
We also have the square of the absolute value obtained by multiplying a complex number by its conjugate:
• $|z|^2 = z\cdot\bar{z}$
• $|z|=|\bar{z}|$
• $z^{-1} = \frac{\bar{z}}{|z|^{2}}$ if $z$ is non-zero.
The real and imaginary parts of a complex number can also be extracted using the conjugate:
• $\bar{z}=z$ if and only if $z$ is real
• $\bar{z}=-z$ if and only if $z$ is purely imaginary
• Re$\{z\} = \frac{1}{2}(z+\bar{z})$
• Im$\{z\} = \frac{1}{2i}(z-\bar{z})$
Complex Numbers and The Quadratic Formula
You'll probably only use complexes in the context of solving quadratics for their zeroes. There are many other practical uses for complexes, but for those you'll have to wait for my next post on this topic.
Remember that the Quadratic Formula solves $ax^2 + bx + c = 0$ for the values of $x$. Also remember that this means that you are trying to find the x-intercepts of the graph. When the Formula gives
you a negative inside the square root, you can now simplify that zero by using complex numbers. The answer you come up with is a valid "zero" or "root" or "solution" for $ax^2 + bx + c = 0$, because,
if you plug it back into the quadratic, you'll get zero after you simplify. But you cannot graph a complex number on the x,y-plane. So this "solution to the equation" is not an x-intercept.
As an aside, you can graph complexes, but not in the x,y-plane. You need the "complex" plane. For the complex plane, the x-axis is where you plot the real part, and the y-axis is where you graph the imaginary part. For instance, you would plot the complex number $3 - 2i$ at the position $(x, y) = (3, -2)$.
This leads to an interesting fact: When you learned about regular ("real") numbers, you also learned about their order (this is what you show on the number line). But x,y-points don't come in any
particular order. You can't say that one point "comes after" another point in the same way that you can say that one number comes after another number. For instance, you can't say that (4, 5) "comes
after" (4, 3) in the way that you can say that 5 comes after 3. Pretty much all you can do is compare "size", and, for complex numbers, "size" means "how far from the origin". To do this, you use the
Distance Formula, and compare which complexes are closer to or further from the origin. This "size" concept is called "the modulus". For instance, looking at our complex number plotted above, its
modulus is computed by using the Distance Formula:
$|3-2i| = \sqrt{3^2+2^2} = \sqrt{9+4} = \sqrt{13} \approx 3.61$
Note that all points at this distance from the origin have the same modulus. All the points on the circle with radius $\sqrt{13}$ are viewed as being complex numbers having the same "size" as $3 - 2i$.
Complex Number in Phasor Form
A complex number $z=x+iy$ can be written in "phasor" form: $z=|z|(\cos\theta + i\sin\theta)=|z|e^{i\theta}$
Here, $|z|$ is known as the complex modulus (or sometimes the complex norm) and $\theta$ is known as the complex argument or phase. An Argand diagram plots the point $z$ in the plane, with a circle of radius $|z|$ about the origin representing the complex modulus of $z$ and the angle $\theta$, measured from the positive real axis, representing its complex argument. Historically, the geometric representation of a complex number as simply a point in the plane was important because it made the whole idea of a complex number more acceptable. In particular, "imaginary" numbers became accepted partly through their visualization.
Matrix representation of complex numbers
While usually not useful, alternative representations of the complex field can give some insight into its nature. One particularly elegant representation interprets each complex number as a $2\times
2$ matrix with real entries which stretches and rotates the points of the plane. Every such matrix has the form
$\begin{pmatrix}a & -b\\b & a\end{pmatrix}$
where a and b are real numbers. The sum and product of two such matrices is again of this form, and the product operation on matrices of this form is commutative. Every non-zero matrix of this form
is invertible, and its inverse is again of this form.
Therefore, the matrices of this form are a field, isomorphic to the field of complex numbers. Every such matrix can be written as
$\begin{pmatrix}a & -b\\b & a\end{pmatrix} =a\begin{pmatrix}1 & 0\\0 & 1\end{pmatrix}+b\begin{pmatrix}0 & -1\\1 & 0\end{pmatrix}$
which suggests that we should identify the real number 1 with the identity matrix
$\begin{pmatrix}1 & 0\\0 & 1\end{pmatrix}$
and the imaginary unit $i$ with
$\begin{pmatrix}0 & -1\\1 & 0\end{pmatrix}$
a counter-clockwise rotation by 90 degrees. Note that the square of this latter matrix is indeed equal to the 2x2 matrix that represents -1.
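Writing the product out explicitly confirms this:
$\begin{pmatrix}0 & -1\\1 & 0\end{pmatrix}\begin{pmatrix}0 & -1\\1 & 0\end{pmatrix} = \begin{pmatrix}-1 & 0\\0 & -1\end{pmatrix} = -\begin{pmatrix}1 & 0\\0 & 1\end{pmatrix}$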
The square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix.
$|z|^2 = \det\begin{pmatrix}a & -b\\b & a\end{pmatrix} = (a^2) - ((-b)(b)) = a^2 + b^2$
Stapel, Elizabeth. "Complex Numbers & The Quadratic Formula." Purplemath. Available from http://www.purplemath.com/modules/complex3.htm. Accessed 28 April 2010
Complex Numbers. Wikipedia. Available from http://en.wikipedia.org/wiki/Complex_number. Accessed 28 April 2010
There are a number of tricks that can make your life easier and help you to squeeze the last bit of performance from your scripts. These tricks won't make your web applications much faster, but they can give you that little edge in performance you may be looking for. More importantly, they may give you insight into how PHP's internals work, allowing you to write code that the Zend Engine can execute in a more optimal fashion.
1. Static methods
If a method can be declared static, declare it static. Speed improvement is by a factor of 4.
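A minimal sketch (the class and method names are made up for illustration):
class MathHelper
{
    // No $this is used, so the method can be declared static
    // and called without instantiating the class.
    public static function square( $x )
    {
        return $x * $x;
    }
}
echo MathHelper::square( 4 ); // 16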
2. echo() vs. print()
Even though both of these output mechanisms are language constructs, if you benchmark the two you will quickly discover that print() is slower than echo(). The reason for that is quite simple: print() returns a value indicating whether it was successful or not (note: it does not return the size of the string), while echo simply prints the text and nothing more. Since in most cases this return value is not necessary and is almost never used, it is pointless and simply adds unnecessary overhead.
echo( 'Hello World' );
// is better than
print( 'Hello World' );
3. echo's multiple parameters
Use echo's multiple parameters instead of string concatenation. It's faster.
echo 'Hello', ' ', 'World';
// is better than
echo 'Hello' . ' ' . 'World';
Read more...
4. Avoid the use of printf
Using printf() is slow for a multitude of reasons and I would strongly discourage its usage unless you absolutely need the functionality this function offers. Unlike print and echo, printf() is a function, with the associated function-execution overhead. Moreover, printf() is designed to support various formatting schemes that for the most part are not needed in a language that is typeless and will automatically do the necessary type conversions. To handle formatting, printf() needs to scan the specified string for special formatting codes that are to be replaced with variables. As you can probably imagine, that is quite slow and rather inefficient.
echo 'Result:', $result;
// is better than
printf( "Result: %s", $result );
5. Single quotes vs. double quotes
In PHP there is a difference between using single and double quotes, ' or ". If you use double quotes ", you are telling PHP to scan the string for variables to interpolate. If you use single quotes ', you are telling it to print whatever is between them, literally. This might seem a bit trivial; if you use double quotes instead of single quotes it will still output correctly, but you will be wasting processing time.
echo 'Result: ' . $var;
// is better than
echo "Result: $var";
Even using sprintf() instead of embedding variables in double quotes is about 10x faster.
Read more...
6. Methods in derived classes vs. base classes
Methods in derived classes run faster than ones defined in the base class.
7. Accessing arrays
e.g. $row['id'] is 7 times faster than $row[id]
8. Do not implement every data structure as a class
Not everything has to be OOP; often it is too much overhead, and each method and object call consumes a lot of memory. For this reason, do not implement every data structure as a class; arrays are useful, too.
9. Avoid functions inside loops
Try to use functions outside loops. Otherwise the function may get called each time.
// e.g. in PHP, a for loop with a count() inside the control
// block will call count() on EVERY loop iteration.
$max = count( $array );
for( $i = 0; $i < $max; $i++ )
{
    // do something
}
// is better than
for( $i = 0; $i < count( $array ); $i++ )
{
    // do something
}
It's even faster if you eliminate the call to count() AND the explicit use of the counter by using a foreach loop in place of the for loop.
foreach ($array as $i) {
    // do something
}
Read more...
Note: A function call with one parameter and an empty function body takes about the same time as doing 7-8 $localvar++ operations. A similar method call is of course about 15 $localvar++ operations.
10. ?> <?
When you need to output a large or even a medium-sized static bit of text, it is faster and simpler to put it outside of the PHP tags. This will make PHP's parser effectively skip over this bit of text and output it as-is without any overhead. You should be careful, however, and not use this for many small strings in between PHP code, as multiple context switches between PHP and plain text will ebb away at the performance gained by not having PHP print the text via one of its functions or constructs.
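For example (a sketch; the static HTML is just filler):
<?php
$title = 'Hello';
?>
<h1><?php echo $title; ?></h1>
<p>A large block of static HTML goes here; the parser
skips straight over it with no output overhead.</p>
<?php
// ... more dynamic code ...
?>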
11. isset instead of strlen
When working with strings, if you need to check that a string is of a certain length you'd understandably want to use the strlen() function. This function is pretty quick, since its operation does not perform any calculation but merely returns the already-known length of the string available in the zval structure (the internal C struct used to store variables in PHP). However, because strlen() is a function it is still somewhat slow, because the function call requires several operations such as lowercasing and a hashtable lookup, followed by the execution of the function itself. In some instances you can improve the speed of your code by using an isset() trick.
if (!isset($foo{4})) { echo "Foo is too short"; }
// is better than
if (strlen($foo) < 5) { echo "Foo is too short"; }
Calling isset() happens to be faster than strlen() because, unlike strlen(), isset() is a language construct and not a function, meaning that its execution does not require function lookups and lowercasing. This means you have virtually no overhead on top of the actual code that determines the string's length.
12. true is faster than TRUE
This is because when looking up constants, PHP does a hash lookup for the name as-is. And since the names are stored lowercased, using them in lowercase avoids 2 hash lookups. Furthermore, using 1 and 0 instead of TRUE and FALSE can be considerably faster.
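For example (a sketch of the idea):
$flag = true; // lowercase: found with a single hash lookup
$flag = TRUE; // uppercase: extra lookups before the constant is found
$flag = 1;    // an integer avoids the constant lookup entirely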
13. Incrementing or decrementing the value of the variable
When incrementing or decrementing the value of a variable, $i++ happens to be a tad slower than ++$i. This is something PHP-specific and does not apply to other languages. ++$i happens to be faster in PHP because instead of the 4 opcodes used for $i++ you only need 3. Post-incrementation actually causes the creation of a temporary variable that is then incremented, while pre-incrementation increases the original value directly. This is one of the optimizations that opcode optimizers, like Zend's PHP optimizer, perform. It is still a good idea to keep in mind, since not all opcode optimizers perform this optimization and there are plenty of ISPs and servers running without an opcode optimizer.
1. Incrementing a local variable in a method is the fastest. Nearly the same as calling a local variable in a function.
2. Incrementing a global variable is 2 times slower than a local variable.
3. Incrementing an object property (eg. $this->prop++) is 3 times slower than a local variable.
4. Incrementing an undefined local variable is 9-10 times slower than a pre-initialized one.
Read more ... and more...
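For example:
for ( $i = 0; $i < 1000; ++$i )
{
    // pre-increment: no temporary value is created per iteration
}
// is marginally faster than
for ( $i = 0; $i < 1000; $i++ )
{
    // post-increment: creates and discards a temporary value
}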
14. Replace regex calls with ctype extension, if possible
Many scripts tend to rely on regular expressions to validate the input specified by the user. While validating input is a superb idea, doing so via regular expressions can be quite slow. In many cases the process of validation merely involves checking the source string against a certain character list such as A-Z or 0-9, etc. Instead of using regex, in many instances you can use the ctype extension (enabled by default since PHP 4.2.0) to do the same. The ctype extension offers a series of function wrappers around C's is*() functions that check whether a particular character is within a certain range. Unlike the C functions, which can only work on a character at a time, the PHP functions operate on entire strings and are far faster than equivalent regular expressions.
ctype_digit($foo);
// is better than
preg_match("![0-9]+!", $foo);
15. isset vs. in_array and array_key_exists
Another common operation in PHP scripts is array searching. This process can be quite slow, as regular search mechanisms such as in_array() or a manual implementation work by iterating through the entire array. This can be quite a performance hit if you are searching through a large array or need to perform the searches frequently. So what can you do? Well, you can use a trick that relies upon the way the Zend Engine stores array data. Internally, arrays are stored inside hashtables where the array element (key) is the key of the hashtable used to find the data, and the result is the value associated with that key. Since hashtable lookups are quite fast, you can simplify array searching by making the data you intend to search through the key of the array; searching for the data is then as simple as $value = isset($foo[$bar]) ? $foo[$bar] : NULL;. This searching mechanism is way faster than manual array iteration, even though having string keys may be more memory intensive than using simple numeric keys.
$keys = array("apples"=>1, "oranges"=>1, "mangoes"=>1);
if (isset($keys['mangoes'])) { ... }
// is roughly 3 times faster than
$keys = array("apples", "oranges", "mangoes");
if (in_array('mangoes', $keys)) { ... }
// isset is also faster than
if(array_key_exists('mangoes', $keys)) { ... }
Note: However, the lookup times don't diverge until you've got a considerable amount of data in your array. E.g., if you have just 2-3 entries in the array, it can take more time to hash the value and perform the lookup than to do a simple linear search (the O(1) hash lookup has a larger constant factor than an O(n) scan over a tiny array).
16. Free unnecessary memory
Unset your variables to free memory, especially large arrays.
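For example:
$rows = range( 1, 1000000 ); // a large temporary array
// ... work with $rows ...
unset( $rows ); // release the memory before moving on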
17. Specify full paths
Use full paths in includes and requires, less time spent on resolving the OS paths.
include( '/var/www/html/your_app/database.php' );
//is better than
include( 'database.php' );
Read more...
18. regex vs. strncasecmp, strpbrk and stripos
See if you can use strncasecmp, strpbrk and stripos instead of regex, since regex is usually slower.
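For example, to test a case-insensitive prefix (a sketch; the header string is just for illustration):
if ( strncasecmp( $header, 'Content-Type', 12 ) === 0 ) { ... }
// is usually faster than
if ( preg_match( '/^Content-Type/i', $header ) ) { ... }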
19. str_replace vs. preg_replace vs. strtr
str_replace() is faster than preg_replace(), but strtr() is faster than str_replace() by a factor of 4.
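For example:
$text = strtr( $text, array( 'foo' => 'bar', 'baz' => 'qux' ) );
// does roughly the same job as, and is faster than,
$text = str_replace( array( 'foo', 'baz' ), array( 'bar', 'qux' ), $text );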
20. switch vs. multiple if/else if statements
It's better to use switch statements than chains of if/else if statements.
switch( $name )
{
    case 'aaa':
        // do something
        break;
    case 'bbb':
        // do something
        break;
    case 'ccc':
        // do something
        break;
    default:
        // do something
}
// is better than
if( $name == 'aaa' )
{
    // do something
}
else if( $name == 'bbb' )
{
    // do something
}
else if( $name == 'ccc' )
{
    // do something
}
else
{
    // do something
}
Read more...
21. Error suppression with @ is very slow
$name = isset( $id ) ? $id : NULL;
//is better than
$name = @$id;
Read more...
22. Boolean Inversion
Most of the time, inverting a boolean value is as simple as using the logical ‘not’ operator e.g. false = !true. That’s easy enough, but occasionally you might find yourself working with integer-type
booleans instead, with 1s and 0s in the place of true and false; in that case, here’s a short PHP snippet that does the same thing:
$true = 1;
$false = 1 - $true;
$true = 1 - $false;
The same principle can be used any time you want to toggle an integer between two values e.g. between 2 and 5:
$val = 5;
$val = 7 - $val; // now it's 2...
$val = 7 - $val; // and now it's 5 again
23. isset($var, $var, …)
Useful little thing, this - you can check the state of multiple variables within a single PHP isset() construct, like so:
$foo = $bar = 'are set';
isset($foo, $bar); // true
isset($foo, $bar, $baz); // false
isset($baz, $foo, $bar); // false
On a related note, in case you’re not already aware of this, isset actually sees null as being not set:
$list = array('foo' => 'set', 'bar' => null);
isset($list['foo']); // true, as expected
isset($list['bar']); // false!
In situations like the above, it’s more reliable to use array_key_exists().
24. Modulus Operator
During a loop, it’s a fairly common need to perform a specific routine every n-th iteration. The modulus operator is extremely helpful here - it’ll divide the first operand by the second and return
the remainder, to create a useful, cyclic sequence:
for ($i = 0; $i < 10; ++$i) {
    echo $i % 4, ' ';
}
// outputs 0 1 2 3 0 1 2 3 0 1
25. http_build_query
This function turns a native array into a nicely-encoded query string. Furthermore, this native function is configurable and fully supports nested arrays.
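For example:
$params = array( 'name' => 'foo', 'tags' => array( 'a', 'b' ) );
echo http_build_query( $params );
// name=foo&tags%5B0%5D=a&tags%5B1%5D=b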
26. <input name="foo[bar]" />
HTML + PHP are quite capable of handling form fields as arrays. This one’s particularly helpful when dealing with multiple checkboxes since the selected values can be automatically pushed into an
indexed (or associative) array, rather than having to capture them yourself.
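For example:
<input type="checkbox" name="fruit[]" value="apple" />
<input type="checkbox" name="fruit[]" value="pear" />
<?php
// after submission, $_POST['fruit'] is an array of the checked values
?>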
27. get_browser()
Easily get your hands on the user's browser type. Some leave this process to the browser end, but it can be useful to get this info server-side.
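For example (note that get_browser() requires a browscap.ini file to be configured in php.ini, so treat this as a sketch):
$info = get_browser( null, true ); // true returns an array instead of an object
echo $info['browser'], ' ', $info['version'];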
28. debug_print_backtrace()
I use this one a lot: print a debug-style list of what was called to get to the point where this function is called.
29. Automatic optimization for your database
Just like you need to defrag and check your file system, it’s important to do the same thing with SQL tables. If you don’t, you might end up with slow and corrupted database tables.
Furthermore, you will probably add and delete tables from time to time. Therefore, you want a solution that works no matter what your database looks like. For this, you can use this PHP script, which finds all your tables and then performs OPTIMIZE on every single one. A good idea is then to do this every night (or whenever your server is least accessed) with cron, because you don't want to delay your surfers too much.
$tables = mysql_query("SHOW TABLES");
while ($table = mysql_fetch_assoc($tables))
{
    foreach ($table as $db => $tablename)
    {
        mysql_query("OPTIMIZE TABLE `".$tablename."`")
            or die(mysql_error());
    }
}
30. require() vs. require_once()
Use require() instead of require_once() where possible.
Read more...
31. Check System Calls
A common mistake with Apache: run it in single-process mode and trace the system calls it makes while serving a page.
/usr/sbin/apache2 -X &
strace -p 16367 -o sys1.txt
grep stat sys1.txt | grep -v fstat | wc -l
Among the failed calls you will see Apache probing each candidate index file in turn:
index.html (No such file or directory)
index.cgi (No such file or directory)
index.pl (No such file or directory)
index.php ...
Fix DirectoryIndex so the file that actually exists is tried first:
<Directory /var/www>
DirectoryIndex index.php
</Directory>
32. Secure HTTP connections
You can force a secure HTTP connection using the following code,
if (!($HTTPS == "on")) {
header ("Location: https://$SERVER_NAME$php_SELF");
33. Avoid magic like __get, __set, __autoload
__get() and __set() will provide a convenient mechanism for us to access the individual entry properties, and proxy to the other getters and setters. They will also help ensure that only properties we whitelist will be available in the object. This obviously carries a small cost. Also, __autoload will affect your code in more or less the same way as require_once.
[1] Here it is. 8 randomly useful PHP tricks. http://theresmystuff.com, 2010.
[2] TJS. 5 useful (PHP) tricks and functions you may not know. http://www.tjs.co.uk, 2010.
[3] Checkmate - Play with problems. Optimize your PHP Code - Tips, Tricks and Techniques. http://abcphp.com, 2008.
[4] iBlog - Ilia Alshanetsky. PHP Optimization Tricks. http://ilia.ws, 2004.
Ever since Windows 2000, the NTFS file system in Windows has supported Alternate Data Streams, which allow you to store data “behind” a filename with the use of a stream name.
This isn't a well known feature and was included, primarily, to provide compatibility with files in the Macintosh file system. Alternate data streams allow files to contain more than one stream of
data. Every file has at least one data stream. In Windows, this default data stream is called :$DATA.
Windows Explorer doesn't provide a way of seeing what alternate data streams are in a file but they can be created and accessed easily. Because they are difficult to find they are often used by
hackers to hide files on machines that they've compromised (perhaps files for a root-kit). Executables in alternate data streams can be executed from the command line and they will not show up in
Windows Explorer (or the Console).
How to write data to hidden streams
You can add data to a hidden stream by using any command that can pipe input or output and accept the standard FileName:StreamName syntax. You may also use some text editors such as Notepad.
Here, the StreamName can be seen as a secret word. If you plan to use notepad remember that StreamName must have the extension on the end, e.g. secretword.txt, secretword.exe.
For instance, let's use the echo command and use the data stream name todo.txt for compatibility with notepad.
echo Important - Kiss the girl next door tomorrow > library.txt:todo.txt
You can add whatever other information to this file that you’d like.
If you prefer to use notepad you can use the following command:
notepad.exe library.txt:todo.txt
Remember that if you didn’t specify the extension on the end, Notepad will automatically add it, and ask if you want to create a new file, even if library.txt already existed, this because todo.txt
doesn’t exist.
You can use the command line again to add a second hidden “compartment” with a different name:
echo think-techie.com is really cool > library.txt:secrets.txt
If you check your filesystem you will find only one file, called library.txt, with zero bytes (the reported file size always corresponds to the default data stream, :$DATA, which here is empty).
You can even open up the file by double-clicking on it, and add whatever data you want to make the file look normal.
You can even do something cooler, like this:
C:\> type C:\windows\system32\notepad.exe > c:\windows\system32\calc.exe:notepad.exe
C:\> start c:\windows\system32\calc.exe:notepad.exe
With similar commands you can hide also applications.
1. None of these hidden files will affect the other, or change the main file.
2. You have to use the command line to access the hidden data.
3. You can’t copy your file to another location and access the streams over there.
Reading a Stream
You can read data from the stream by piping data into the more command or by using notepad, with the following syntax:
more < FileName:StreamName
notepad.exe FileName:StreamName
For instance,
notepad.exe library.txt:todo.txt
Of course these files aren't completely hidden, because you can use a small command-line application called Streams.exe to detect files that have streams, including their names and sizes. As an alternative you can use the DIR command, if you are using Vista or later.
For instance, in my scenario we’d use the following syntax:
streams.exe library.txt
and the result would be:
C:\>streams.exe library.txt
Streams v1.56 - Enumerate alternate NTFS data streams
Copyright (C) 1999-2007 Mark Russinovich
Sysinternals - www.sysinternals.com
:secrets.txt:$DATA 34
:todo.txt:$DATA 46
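On Vista and later you can get similar information without any extra tools:
dir /R library.txt
Each alternate stream is then listed on its own line under the file itself.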
This isn’t a secure way to hide data. It’s just one of those things that can be used for fun or be handy here or there. | {"url":"http://www.think-techie.com/2010_04_01_archive.html","timestamp":"2014-04-16T18:57:23Z","content_type":null,"content_length":"101136","record_id":"<urn:uuid:8d2329ef-a687-4ce1-990d-a6ab7f67f44c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bethesda, MD Algebra 1 Tutor
Find a Bethesda, MD Algebra 1 Tutor
...I have attended many math trainings and conferences which have provided me with a wealth of teaching strategies and techniques to be able to reach all types of learners. As a teacher, I use
many hands-on manipulatives to help build a conceptual understanding of the concepts my students are learning. I also enjoy incorporating technology and math games into my lessons.
4 Subjects: including algebra 1, SAT math, elementary math, prealgebra
...I also know that exploring many methods is the best way to build conceptual understanding of math. In many math classrooms today, teachers show their students one way to solve a problem, and
then the students simply mimic a series of steps. This approach does not promote conceptual understanding!
16 Subjects: including algebra 1, English, writing, calculus
...I've taught all types of students from students very far behind to Honors students as well as summer school courses. I am an alumna of Teach for America and have high scores (greater than 90th
percentile) on the SAT, ACT, GMAT and GRE. Geometry is my favorite math subject!
12 Subjects: including algebra 1, geometry, algebra 2, ASVAB
...I'm currently running two algebra classes, and several tutoring classes for 9th grade chemistry, Algebra II, physics, and pre-calculus. Thank you, Yu. I'm a Chessmaster. I've been tutoring chess for the last 10 years outside of WyzAnt.
24 Subjects: including algebra 1, reading, calculus, chemistry
...Building a strong rapport with my students, working with the teachers when asked, and sharing my passion for math are hallmarks of my practice. I am an experienced middle school math teacher
with extensive experience teaching algebra. I hold an MEd in secondary math education and am pursuing a PhD in math education at George Mason University.
19 Subjects: including algebra 1, calculus, geometry, statistics
Statistical test of 9ET in gamelan pelog
Gamelan pelog shows a statistically significant bias toward the interval steps of a 9-tone equal temperament (9ET)
Investigated question: Can the bias of the pelog scale toward 9ET, which seems obvious in the data, be a result of chance distribution (null hypothesis)?
Method: Considering the 22 interval types, the largest difference between a mean interval and a 9ET interval is 36 Cent (see diagram for the interval nem-barang: mean interval = 169 Cent, 9ET
interval = 133 Cent). This maximum-difference range (0-36 Cent) was divided into a close-range quarter (0-8 Cent) and a distant-range three-quarter (9-36 Cent).
Expected chance distribution of the 22 interval types on the two ranges: 5.4 / 16.6
Measured distribution of the 22 interval types on the two ranges: 10 / 12
The deviation of the measured from the expected data was tested by a chi-square goodness-of-fit test. The result was P = 0.02.
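For reference, the statistic itself works out as chi-square = (10 - 5.4)^2/5.4 + (12 - 16.6)^2/16.6 ≈ 3.92 + 1.27 ≈ 5.19, which with one degree of freedom (two ranges minus one) corresponds to P ≈ 0.02.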
Conclusion: The null hypothesis can be rejected at the 2 % significance level, supporting the claim that the pelog tunings of Central Java use intervals that are based on the 9ET scale.
Arithmetic sums
March 29th 2008, 05:09 AM
Arithmetic sums
Dear forum members,
please can someone help me to understand how to do this problem
Determine the sum of all quotients $\frac{m}{n}$ where m and n are whole numbers and $0<m<n<100.$
I don't understand how to do this.
Any help is appreciated.
Thank you in advance!
March 29th 2008, 05:25 AM
mr fantastic
$\left( \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + ..... + \frac{1}{99}\right)$
$+ \left( \frac{2}{3} + \frac{2}{4} + \frac{2}{5} + ..... + \frac{2}{99}\right)$
$+ \left( \frac{3}{4} + \frac{3}{5} + \frac{3}{6} + ..... + \frac{3}{99}\right)$
$+ .....$
$+ \left( \frac{97}{98} + \frac{97}{99}\right)$
$+ \left( \frac{98}{99}\right)$.
There is now a clever grouping you can do:
$\frac{1}{2} + \left( \frac{1}{3} + \frac{2}{3}\right) + \left( \frac{1}{4} + \frac{2}{4} + \frac{3}{4}\right) + ....... + \left( \frac{1}{99} + \frac{2}{99} + \frac{3}{99} + ..... + \frac{98}
{99} \right )$
$= \frac{1}{2} + \frac{1}{3} (1 + 2) + \frac{1}{4} (1 + 2 + 3) + ....... + \frac{1}{99} (1 + 2 + 3 + .... + 97 + 98)$
and each bracket contains a simple arithmetic series ...... Note: The sum of the first n natural numbers is given by $\frac{n(n+1)}{2}$.
March 29th 2008, 06:24 AM
Thank you so much!
Now I get the method. But won't it be quite a long calculation, even if using the arithmetic formula? My teacher wrote a "short cut" on a sample answer sheet as follows:
$\sum_{i=2}^{99} \frac{1}{i}\sum_{k=1}^{i-1} k$
(the i-1 should be above the sigma sign, and the k=1 below, and there should be no equal sign before the k), but I just can't get it to work like that. Hope it is still clear enough.
Can you please help me figure out what he is trying to tell me?
I noticed that the number of terms in brackets after the fraction multiplier is always one less than the denominator of the fraction, but I don't know how to make use of that when creating a shortcut to calculate the sum.
March 29th 2008, 07:20 AM
Did you understand why he gave this formula? I'll try to explain a part of it...
This is because, first, you sum the fractions while varying the numerator, and then while varying the denominator.
The numerator is, as Mr F mentioned, the sum of the terms of an arithmetic sequence.
The thing is, normally, it's n(n+1)/2, with n the number above the sum sign (I can't find the word for it).
So here, it's (i-1)(i-1+1)/2 according to the formula.
But i-1+1=i
So you can simplify the general term of the sequence.
And you now have to calculate $\sum_{i=2}^{99} \frac{i-1}{2} = \frac{1}{2} (\sum_{i=2}^{99} i) - \frac{98}2$
March 29th 2008, 09:00 AM
We all have different ways. This is close to your instructor's.
$\sum\limits_{n = 2}^{99} \frac{\sum\limits_{m = 1}^{n - 1} m}{n} = \sum\limits_{n = 2}^{99} \frac{\frac{(n-1)n}{2}}{n} = \frac{1}{2}\sum\limits_{n = 2}^{99} (n - 1) = \frac{1}{2}\sum\limits_{n = 1}^{98} n = \frac{(98)(99)}{4} = 2425.5$
[FOM] Nonempty Finite Interval Mereology
A.P. Hazen a.hazen at philosophy.unimelb.edu.au
Sat Nov 19 01:46:16 EST 2005
Harvey Friedman writes--
>The kind of mereology being discussed currently on the FOM can be reasonably
>identified, model theoretically, with the partial order of all nonempty
>finite open intervals in the real line under inclusion.
----Mereology usually considers a bit more than the intervals: basic
axiom of mereology says that for any non-empty set of "individuals"
there is an individual which is their fusion. So, taking nonempty
finite open intervals as "individuals" we ought to have also unions
of individuals (or rather interiors of unions of individuals: regular
open sets). The axiomatics of this structure have been studied at
least since Tarski (whose "Foundations of the Geometry of Solids,"
ch. 2 of his "Logic, Semantics, Metamathematics," takes the set of
regular open sets in a Euclidean space as a model for "solids,"
interpreting the mereological notion "part of" by containment). I
could be wrong, but my impression is that the real line is a rich
enough structure to provide counterexamples to any sentence of
first-order mereology (= first-order language with variables
construed as ranging over "individuals" and "part of" or something
interdefinable with it as vocabulary) that fails in ANY atomless
mereology. (So the theory is the theory of a complete Boolean
Algebra with thebottom element left off.)
Cf. also Tarski's 1935 "Foundations of Boolean Algebra," (= ch 11
of "LSM") and the finalfootnote added to it in the book.
Allen Hazen
Philosophy Department
University of Melbourne
Penn Valley, PA Geometry Tutor
Find a Penn Valley, PA Geometry Tutor
...I am very up to date on the new Common Core Standards at all levels and was a member of the curriculum revision team in the school district where I am employed. I am also familiar with MAPS
tests and ASK (soon to be PARCC) testing. Depending on the level of math and the personality of each stud...
12 Subjects: including geometry, algebra 1, trigonometry, algebra 2
...I have experience in tutoring all subject fields that are included on the ACT math test. As an undergraduate student at Jacksonville University, I studied both ordinary differential equations
and partial differential equations obtaining A's in both courses. I have also been tutoring these courses while a tutor at Jacksonville University.
13 Subjects: including geometry, calculus, GRE, algebra 1
...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and
non-Euclidean geometry. I taught Trigonometry with a national tutoring chain for five years. I have taught Trigonometry as a private tutor since 2001.
12 Subjects: including geometry, calculus, algebra 1, writing
...I specialize in tutoring high school math and science as well as test preparation for the SAT and ACT with years of experience in the US and abroad. My approach to tutoring is to first
identify the area(s) where the student is struggling and then develop a personalized learning program based on ...
21 Subjects: including geometry, reading, writing, chemistry
I have been teaching math for over 20 years now and was awarded educator of the year four times. I was also mentor of the year twice. I have a variety of experience teaching not only in different countries, but also here in public school, private school, charter school, and adult continuing education school.
15 Subjects: including geometry, algebra 1, algebra 2, GED
Research: Computational Geometry
The Computational Geometry Group works on fonts, graph drawing and motion planning, geometric problems in manufacturing, probabilistic analysis, and vertex enumeration for convex polytopes. Click here to go to the CGM group webpages.
Fonts
In font design, we study the geometric, mathematical and methodological problems associated with the simulation of handwriting, random typefaces, brushed characters, running ink
and linked letters. The character "6" on this page is from a typeface designed from a sample written on a magnetic pad. Our software identifies the important points of a stroke,
creates smooth Bézier curves, defines glyphs based on various pen nibs, and outputs a PostScript font.
Contact Luc Devroye for more info on font research.
Geometric Problems in Manufacturing
There are many interesting geometric problems that arise in manufacturing. A typical example of the problems we study is the problem of finding a stable grip on an object with a
robotic hand consisting of two parallel rectangular plates. The diagram on the left illustrates a particularly challenging case.
Contact Godfried Toussaint for more info on this research topic.
Vertex Enumeration
The theory of convex polytopes has its origins in the study of systems of linear inequalities. A (convex) polyhedron is the set of solutions to a system of linear inequalities. A
bounded polyhedron is called a polytope. The vertices of a polytope are those feasible points that do not lie in the interior of a line segment between two other feasible points.
Converting from the halfspaces to the vertices is called vertex enumeration. An important open problem is the existence of a vertex enumeration algorithm polynomial in the number
of halfspaces plus the number of vertices.
Talk to David Avis for more info on this research topic.
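To make the halfspaces-to-vertices direction concrete, here is a brute-force 2-D sketch in Python (illustrative only, not the group's software; practical vertex enumeration codes such as David Avis's lrs use pivoting methods that work in any dimension): intersect every pair of boundary lines and keep the feasible intersection points.

# Brute-force 2-D vertex enumeration: halfspaces are the rows of A x <= b.
import itertools
import numpy as np

A = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)  # unit square
b = np.array([1, 1, 0, 0], dtype=float)

vertices = []
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                        # the two boundary lines are parallel
    x = np.linalg.solve(M, b[[i, j]])   # intersection of the two boundaries
    if np.all(A @ x <= b + 1e-9):       # keep it only if it satisfies all halfspaces
        vertices.append(x)
print(vertices)                          # the four corners of the unit square

This examines O(m^2) candidate points for m halfspaces, which is exactly why the open question above about a polynomial-time algorithm is interesting.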
Graph Drawing and Motion Planning
We study the layout of graphs and diagrams. For example, the picture here illustrates a 3-dimensional orthogonal grid layout of a graph. For more on the subject of graph drawing,
go here. Another area of research is the design of algorithms for moving linkages and point robots.
Contact Sue Whitesides for more info on this research topic.
Probabilistic Analysis
We study the expected behavior of algorithms and data structures under random input or artificial randomization. Our work builds on elementary probability theory rather than
combinatorial analysis. Subtopics of particular interest include random number generation and random trees. For example, we create models of random trees for simulating virtual
forests. The tree on this page is a binary search tree built from a Weyl sequence (e), (2e), (3e), etc., where (·) denotes the fractional part, i.e., reduction modulo 1.
Talk to Luc Devroye for more info on this research topic.
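As a toy illustration of the Weyl-sequence tree described above (our sketch, not the group's code), one can insert the fractional parts of e, 2e, 3e, ... into a plain binary search tree:

# Build an (unbalanced) binary search tree from the Weyl sequence (ke) mod 1.
import math

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for k in range(1, 32):
    root = insert(root, (k * math.e) % 1.0)  # fractional part of k*e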
Associated Faculty | {"url":"http://www.cs.mcgill.ca/research/profile?id=34","timestamp":"2014-04-16T10:12:34Z","content_type":null,"content_length":"28714","record_id":"<urn:uuid:a8dbe1c9-62d9-4989-a728-d93367284530>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need to catch up on some algebra... don't know where to start...
September 18th 2007, 09:53 PM #1
Need to catch up on some algebra... don't know where to start...
I think this problem is algebra... it says I need to know it to move on to the next chapter... but I'm in Geometry... I'm supposed to not do algebra anymore! *sigh*
Evaluate each expression for the given value of n.
3n - 2;n = 4
(n+1) + n;n = 6
There are several problems like these in different forms, but first could someone walk me through the steps of how to find the answer to these, and please, for the love of pizza, could someone
tell me what the ";" does?? It must do something simple right? The ";" is what I don't know what to do with I guess... I've never seen problems like this...
i found that statement somewhat funny
Evaluate each expression for the given value of n.
3n - 2;n = 4
(n+1) + n;n = 6
no big deal here. just plug in the value for n in the expressions on the left and calculate their answer. so, for the first, you would replace n with 4 in 3n - 2 and calculate that
no big deal here. just plug in the value for n in the expressions on the left and calculate their answer. so, for the first, you would replace n with 4 in 3n - 2 and calculate that
Bu, but, um... *confused*... there are so many n's... um...
So I take the 4 on the other side of the equal sign and put it on the 3(n)? Which makes it 12? So then I have 12 - 2;n= 0... or = 4?
What is the semi colon for though? *cries*
I'm confused Mr. All Knowing
Evaluate $\bold {3n - 2}$for $\bold {n = 4}$
when $n = 4$:
$3n - 2 = 3(4) - 2$
......... $= 12 - 2$
......... $= 10$
Oh! I see, you thought they were in one equation. no, my dear Melancholy. (read the instructions ... and i'm not all knowing)
So the ";" wasn't actually part of the problem. It was just to separate the two thingamabobs. I GET IT! See, I get frightened easily I swear!
So then...
(n+1) + n ; n = 6
(6 + 1) + 6
7 + 6
Big deal. lol!
So that was right, right?
exactly ...the thingamabobs... well, at least you're getting your math jargon down
I GET IT! See, I get frightened easily I swear!
So then...
(n+1) + n ; n = 6
(6 + 1) + 6
7 + 6
Big deal. lol!
So that was right, right?
yes, it is a big deal, isn't it? anyway, that is correct. now i'm going to bed, i have to wake up in like 4 hours
Thank you so muchly! And I have to wake up in 5 hours too, uhg. Have a good sleep! I'm hitting the sack as well...
*terminator voice*
I'll be back!
September 18th 2007, 09:56 PM #2
September 18th 2007, 10:05 PM #3
September 18th 2007, 10:11 PM #4
September 18th 2007, 10:16 PM #5
September 18th 2007, 10:18 PM #6
September 18th 2007, 10:20 PM #7
September 18th 2007, 10:21 PM #8
September 18th 2007, 10:47 PM #9 | {"url":"http://mathhelpforum.com/algebra/19183-need-catch-up-some-algebra-don-t-know-where-start.html","timestamp":"2014-04-18T04:21:01Z","content_type":null,"content_length":"62045","record_id":"<urn:uuid:0e112eab-bbcf-41d0-a4ee-9e042dca244b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
A certain radioactive element has a half-life of one hour. If you start with 1.0 gram of the element at noon, how much of the radioactive element will be left at 2 p.m.? 0.50 grams 0.25 grams 0.125
grams 0.0 grams
what you think :)
i think its 0.25g im not to sure
yaaa ur right :) see, at 12 PM (noon) there is 1 gram; since 1 hr is the half-life, the amount halves each hour: at 1 PM, 1/2 = 0.5 gram, then at 2 PM, 0.5/2 = 0.25 gram. ur right :)
ohh hmm thought so, thank you!!(:
it is 0.25: as the time is 2 hours, there are two half-lives. 1/2 = 0.5, then 0.5/2 = 0.25
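For readers who want to check this mechanically, here is a small sketch (not part of the original thread) of the same arithmetic:

# Mass remaining after t hours, given a 1-hour half-life.
def remaining(initial_grams, hours, half_life_hours=1.0):
    return initial_grams * 0.5 ** (hours / half_life_hours)

print(remaining(1.0, 2))  # 0.25 grams left at 2 p.m.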
| {"url":"http://openstudy.com/updates/51cd9768e4b011c79f637cdc","timestamp":"2014-04-17T06:49:48Z","content_type":null,"content_length":"39796","record_id":"<urn:uuid:89327385-08c4-4160-a9b1-8d26c3f1a153>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berwyn Heights, MD Calculus Tutor
Find a Berwyn Heights, MD Calculus Tutor
...I have very flexible hours and am happy to come work with you wherever is most convenient. I can meet you at a library, a coffee shop, or even your house, whatever is most comfortable for you! I
truly believe that math can be fun and easy if it's broken down for you in a way that you can comprehend it.
22 Subjects: including calculus, geometry, algebra 1, GRE
...These courses involved learning the techniques, both analytical and numerical, for solving ordinary, partial, and non-linear differential equations including Green's function techniques. I
have also taught Physics and Electrical Engineering courses for both undergraduate and graduate students. ...
16 Subjects: including calculus, physics, statistics, geometry
...I completed a B.S. degree in Applied Mathematics from GWU, graduating summa cum laude, and also received the Ruggles Prize, an award given annually since 1866 for excellence in mathematics. I
minored in economics and went on to study it further in graduate school. My graduate work was completed...
16 Subjects: including calculus, geometry, statistics, ACT Math
...I strive to establish an intellectual connection with the student, regardless of age or background. Each student is an individual who learns most effectively at his/her own pace. I am very
sensitive to finding that pace for each student and working at the pace.
10 Subjects: including calculus, Spanish, geometry, algebra 2
...I have been tutoring since the 11th grade, and have tutored throughout college. I have worked with elementary school children on Math and English. I have also worked with high school students
on Math and Science.
40 Subjects: including calculus, English, reading, chemistry
Related Berwyn Heights, MD Tutors
Berwyn Heights, MD Accounting Tutors
Berwyn Heights, MD ACT Tutors
Berwyn Heights, MD Algebra Tutors
Berwyn Heights, MD Algebra 2 Tutors
Berwyn Heights, MD Calculus Tutors
Berwyn Heights, MD Geometry Tutors
Berwyn Heights, MD Math Tutors
Berwyn Heights, MD Prealgebra Tutors
Berwyn Heights, MD Precalculus Tutors
Berwyn Heights, MD SAT Tutors
Berwyn Heights, MD SAT Math Tutors
Berwyn Heights, MD Science Tutors
Berwyn Heights, MD Statistics Tutors
Berwyn Heights, MD Trigonometry Tutors
Nearby Cities With calculus Tutor
Berwyn, MD calculus Tutors
Brentwood, MD calculus Tutors
College Park calculus Tutors
Colmar Manor, MD calculus Tutors
Cottage City, MD calculus Tutors
Edmonston, MD calculus Tutors
Greenbelt calculus Tutors
Landover Hills, MD calculus Tutors
Mount Rainier calculus Tutors
North Brentwood, MD calculus Tutors
North College Park, MD calculus Tutors
Riverdale Park, MD calculus Tutors
Riverdale Pk, MD calculus Tutors
Riverdale, MD calculus Tutors
University Park, MD calculus Tutors | {"url":"http://www.purplemath.com/Berwyn_Heights_MD_calculus_tutors.php","timestamp":"2014-04-21T11:04:52Z","content_type":null,"content_length":"24436","record_id":"<urn:uuid:141b4064-20ee-4e2f-8215-73a576996487>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00105-ip-10-147-4-33.ec2.internal.warc.gz"} |
Catholic Encyclopedia (1913)/Augustin-Louis Cauchy
From Wikisource
Catholic Encyclopedia (1913), Volume 3
French mathematician, b. at Paris, 21 August, 1789; d. at Sceaux, 23 May, 1857. He owed his early training to his father, a man of much learning and literary taste, and, at the suggestion of La
Grange, who early detected his talents and took a lively interest in him, he received a good classical education at the Ecole Centrale du Panthéon in Paris. In 1805 he entered the Ecole
Polytechnique, where he distinguished himself in mathematics. Two years later he entered the Ecole des Ponts et Chaussées and, after a brilliant course of study, he was appointed one of the engineers
in charge of the extensive public works inaugurated by Napoleon at Cherbourg. While here he devoted his leisure moments to mathematics. Several important memoirs from his pen, among them those
relating to the theory of polyhedra, symmetrical functions, and particularly his proof of a theorem of Fermat which had baffled mathematicians like Gauss and Euler, made him known to the scientific
world and won him admittance into the Academy of Sciences. At about the same time the Grand Prix offered by the Academy was bestowed on him for his essays on the propagation of waves. After a sojourn
of three years at Cherbourg his health began to fail, and he resigned his post to begin at the age of twenty-two his career of professor at the Ecole Polytechnique. In 1818 he married Mlle. de Bure,
who, with two daughters, survived him.
Cauchy was a stanch adherent of the Bourbons and after the Revolution of 1830 followed Charles X into exile. After a brief stay at Turin, where he occupied the chair of mathematical physics created
for him at the university, he was invited to become one of the tutors of the young Duc de Bordeaux, grandson of Charles, at Prague. The old monarch conferred the title of baron upon him in
recognition of his services. He returned to France in 1838, and was proposed by the Academy for a vacant chair at the Collège de France. His conscientious refusal to take the requisite oath on
account of his devotion to the prince prevented his appointment. His nomination to the Bureau des Longitudes was declared void for the same reason. After the Revolution of 1848, however, he received
a professorship at the Sorbonne. Upon the establishment of the Second Empire the oath was reinstated, but an exception was made by Napoleon III in the cases of Cauchy and Arago, and he was thus free
to continue his lectures. He spent the last years of his life at Sceaux, outside of Paris, devoting himself to his mathematical researches until the end.
Cauchy was an admirable type of the true Catholic savant. A great and indefatigable mathematician, he was at the same time a loyal and devoted son of the Church. He made public profession of his
faith and found his greatest pleasure and recreation in works of zeal and charity. He was an active member of the Society of St. Vincent de Paul, and took a leading part in founding the "Ecoles
d'Orient" in 1856, and the "Association pour la liberté du dimanche". During the famine of 1846 in Ireland Cauchy made an appeal to the pope on behalf of the stricken people. He was on terms of
intimate friendship with Père de Ravignan, S. J., the well-known preacher, and when, during the reign of Louis-Philippe, the colleges of the Society of Jesus were attacked he wrote two memoirs in
their defence. Cauchy is best known for his achievements in the domain of mathematics, to almost every branch of which he made numerous and important contributions. He was a prolific writer and,
besides his larger works, he was the author of over seven hundred memoirs, papers, etc., published chiefly in the "Comptes Rendus". A complete edition of his works has been issued by the French
Government under the auspices of the Academy of Sciences. Among his researches may be mentioned his development of the theory of series in which he established rules for investigating their
convergency. To him is due the demonstration of the existence and number of real and imaginary roots of any equation, and he did much to bring determinants into general use. In connexion with his
work on definite integrals, his treatment of imaginary limits deserves special mention. He was the first to give a rigid proof of Taylor's theorem. The "Calculus of Residues" was his invention, and
he made important researches in the theory of functions. By his theory of the continuity of functions and the method of limits he placed the differential calculus on a logical basis. Cauchy was also
a pioneer in extending the applications of mathematics to physical science, especially to molecular mechanics, optics, and astronomy. In the theory of dispersion we have his well-known formula giving
the refractive index in terms of the wave length and three constants. Besides his numerous memoirs, he was the author of "Cours d'analyse de l'Ecole royale polytechnique" (1821); "Résumé des leçons
données à l'Ecole royale polytechnique sur les applications du calcul infinitésimal" (1823); "Leçons sur les applications du calcul infinitésimal à la géométrie" (1826, 1828); "Leçons sur le calcul
différentiel" (1829); "Anciens exercices de mathématiques" (1826-1830); "Résumés analytiques" (1833); "Nouveaux exercices de mathématiques" (1835-1836); "Nouveaux exercices d'analyse et de physique
mathématique" (1840-47).
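For reference (this is the standard modern statement, not a quotation from the encyclopedia), the dispersion formula mentioned above is commonly written
$n(\lambda) = A + \frac{B}{\lambda^2} + \frac{C}{\lambda^4}$,
where $n$ is the refractive index, $\lambda$ the wavelength, and $A$, $B$, $C$ the three material constants.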
VALSON, La vie et les travaux du baron Cauchy (Paris, 1868); MARIE, Hist. des sciences math. et phys. (1888), XII; BALL, Hist. of Math. (London, 1893); KNELLER, Das Christentum, u. die Vertreter der
neueren Naturwissenschaft (Freiburg, 1904); IDEM in Stimmen aus Maria-Laach (Freiburg, 1903), LXIV; The Month, No. 516 (New Series, 126), June, 1907.
HENRY M. BROCK | {"url":"https://en.wikisource.org/wiki/Catholic_Encyclopedia_(1913)/Augustin-Louis_Cauchy","timestamp":"2014-04-20T22:01:40Z","content_type":null,"content_length":"28928","record_id":"<urn:uuid:c8e2ad11-acc0-4648-ab43-c2c60461e4b2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
Does playing baseball shorten your lifespan? (Answer: No.)
August 24, 2012
By David Smith
A National Institute for Occupational Safety and Health study, published in March, found that professional American football (NFL) players lived longer, on average, than similar "mere mortals" in the
general population. Football is a dangerous sport, so that might seem surprising at first, until you consider the fact that NFL players are elite sportsmen: only the strongest, fastest and most
healthy members of the population get a chance to play. On top of that, they have access to excellent healthcare, even after they retire.
But it's important to note what this study does NOT say: it does not claim that playing professional football will make you live longer. (In fact, throwing a couple of dozen men selected at random
into a scrimmage with the Dallas Cowboys is most likely to shorten their lifespan, given the inevitable injuries that would entail.) All it says that the population of men selected to play in the NFL
tend to live longer than similar counterparts in the general population.
Sports Journalist Bill Barnwell attempted to answer the question by using a different method: why not compare NFL players to baseball players, who are also elite athletes. But as R user and
biostatistician Gregory Matthews reports, Barnwell's "Mere Mortals" article makes some profound statistical errors in making the claim that MLB players live shorter lives than NFL players.
The basic error is that the populations of MLB players and NFL players are not directly comparable, mainly because baseball players tend to be older than football players. Matthews created the age
distribution chart below to illustrate the difference (blue is baseball, red is football):
If you compare the fraction of baseball players who have died to the fraction of football players who have died, you'll find a higher mortality rate among baseball players:
Baseball Football
Qualifying Players 1,494 3,088
Alive 1,256 2,694
Deceased 238 394
Mortality Rate 15.9 percent 12.8 percent
Even though that difference is statistically significant (using a Fisher Test) this is hardly surprising: because the average baseballer in the sample was older than the average footballer, you'd
expect to find more deaths in the interim. But if you include player age in a logistic regression model, as Gregory Matthews used R to do, the effect of the sport (baseball vs football) disappears.
It's only the relative age differences between the sports that causes the discrepancy in the mortality rates above.
There are two lessons to learn from this tale:
• When you read reports in the newspaper about how 'X causes Y', always ask yourself whether the direction of causality is correct. While researchers and journals are (for the most part) careful to
merely report that there's an association between X and Y, journalists often upgrade that to direct causality. Sometimes it's just the case that people who are afflicted by Y tend to be people
who are likely to have consumed/performed/been exposed to X.
• Unless a formal random trial has been conducted, whenever two (or more) groups A and B are compared, it's almost never appropriate to directly compare averages or other statistics between the two
groups. The analysis must control for any factors (age, sex, location, lifestyle, genetics, ...) that might influence the outcome, using regression or other appropriate statistical techniques.
And even then, you can never be certain that all possible variables have been controlled for: it can strengthen the confidence of the association, but never prove causality outright.
Thanks go to Gregory Matthews for illustrating these important lessons in his critique of Bill Barnwell's "Mere Mortals" article, linked below.
Stats in the Wild: Mere Mortals: Retract This Article
| {"url":"http://www.r-bloggers.com/does-playing-baseball-shorten-your-lifespan-answer-no/","timestamp":"2014-04-18T13:26:08Z","content_type":null,"content_length":"40688","record_id":"<urn:uuid:78a38d1a-c0c7-49ca-909f-9c1d7bc65f73>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00150-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ask a Physicist Answers
When carrying out the double slit experiment using electrons or buckyball molecules, do the particles have to be traveling at near light speed velocities to produce an interference pattern?
Results of a double-slit-experiment performed by Dr. Tonomura showing the build-up of an interference pattern of single electrons. Numbers of electrons are 11 (a), 200 (b), 6000 (c), 40000 (d),
140000 (e).
One of the most amazing things about quantum mechanics, in my opinion, is that we're beginning to see that it applies, not just to microscopic particles moving near light speed, but also to larger
objects moving more slowly. Being able to perform the famous double slit experiment with buckyballs is indicative of the power of quantum mechanics as a scientific theory.
To get a little technical, there's really one equation which dictates how a double slit experiment should be performed, and it's this:
λ = h / p
This equation relates three things: p is the momentum of a particle, which depends on the particle's speed and mass, h is a constant (called Planck's Constant, with a horrendous value of 6.63 x 10^
-34 Joule-seconds!), and λ is what's known as the de Broglie wavelength. Louis de Broglie was a French physicist who, in his PhD thesis in 1924, argued what we now know to be true - that both light
and particles can behave like waves. The connection between particles and waves that he came up with is the equation I've just mentioned. That's why λ is called the de Broglie wavelength (when you
discover something, you get to have it named after you!). The de Broglie wavelength describes the effective wavelength that a particle would have when it was behaving as a wave.
So it's obvious that, in order for the double slit experiment to work, the particles in question (be they photons, electrons, buckyballs or airplanes) have to be behaving like waves, and that's
determined by the de Broglie wavelength. Step two of setting up our experiment has to do with the dimensions of the double slit experiment. The interference pattern is dictated by the distance from
one bright line (coherence) to the next:
where D is the distance from the slit to the screen (or detector), little d is the spacing between the slits, and λ is going to be our de Broglie wavelength.
Let's assume we want to use electrons for our experiment. We build a setup with the screen placed 1 meter from the slits, and the two slits 1 millimeter apart (maybe we found this equipment in a
storage closet in the physics department...). This setup will make the distance between the bright spots on our screen 1000 times what the de Broglie wavelength of our incoming electron is. We want
to be able to actually see the interference pattern in our detectors, so perhaps we should request that the spacing of the bright spots be about 1 millimeter (this would depend on the detectors, of
course). Since the fringe spacing in this setup is 1000 times the de Broglie wavelength, the wavelength of our electron has to be about one micrometer (10^-6 m). Now we go back to the equation for
de Broglie wavelength, and see that we know h and we now know λ, so we can calculate what p should be. Since we know the mass of the electron, calculating the momentum is essentially the same as
calculating the speed; for our experiment, we find the electron needs to be going at roughly 730 m/s. That's far slower than the speed of light!
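Here is a quick numeric check of those numbers (a sketch using the same hypothetical setup):

# de Broglie setup: D = 1 m screen distance, d = 1 mm slit spacing.
h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
D, d = 1.0, 1e-3   # metres

spacing = 1e-3                 # desired fringe spacing, m
lam = spacing * d / D          # required de Broglie wavelength
v = h / (m_e * lam)            # required electron speed
print(lam, v)                  # ~1e-6 m and ~730 m/s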
Such a "tabletop" experiment perhaps isn't a good setup to measure electrons (they actually do it using diffraction gratings instead of slits, with spacing much smaller than 1 mm), but the basic
point is that the de Broglie wavelength gets smaller with increasing speed, and that makes a double slit experiment harder and harder to do, because you have to adjust your setup to match the
changing wavelength. For the buckyball experiment (see http://physicsworld.com/cws/article/news/2952), the researchers used slits about 100 nanometers apart (a nanometer is one millionth of a
millimeter), and shot the buckyballs through the slits at about 200 meters per second (roughly 450 mph), much slower than the speed of light. The de Broglie wavelength is what dictates whether a
particle is behaving like a wave - and thus whether or not you see an interference pattern.
Answered by:
Kelly Chipps (AKA nuclear.kelly)
Postdoctoral Fellow
Department of Physics
Colorado School of Mines
Submitted by:
Mog from Rainford, UK | {"url":"http://physicscentral.com/experiment/askaphysicist/physics-answer.cfm?uid=20111017091810","timestamp":"2014-04-17T12:36:16Z","content_type":null,"content_length":"21004","record_id":"<urn:uuid:84146cdb-95b2-47e1-a153-2bcd663d3aee>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
In mathematics, orthogonal is synonymous with perpendicular when used as a simple adjective that is not part of any longer phrase with a standard definition. It means at right angles. It comes from
the Greek ὀρθός orthos, meaning "straight", used by Euclid to mean right; and γωνία gonia, meaning angle. Two streets that cross each other at a right angle are orthogonal to one another.
Formally, two vectors $x$ and $y$ in an inner product space $V$ are orthogonal if their inner product $\langle x, y \rangle$ is zero. This situation is denoted $x \perp y$.
Two vector subspaces $A$ and $B$ of vector space $V$ are called orthogonal subspaces if each vector in $A$ is orthogonal to each vector in $B$. The largest subspace that is orthogonal to a given
subspace is its orthogonal complement.
A linear transformation $T : V \rightarrow V$ is called an orthogonal linear transformation if it preserves the inner product. That is, for all pairs of vectors $x$ and $y$ in the inner product space
$\langle Tx, Ty \rangle = \langle x, y \rangle.$
This means that $T$ preserves the angle between $x$ and $y$, and that the lengths of $Tx$ and $x$ are equal.
A term rewriting system is said to be orthogonal if it is left-linear and is non-ambiguous. Orthogonal term rewriting systems are confluent.
The word normal is sometimes also used in place of orthogonal. However, normal can also refer to unit vectors. In particular, orthonormal refers to a collection of vectors that are both orthogonal
and normal (of unit length). So, using the term normal to mean "orthogonal" is often avoided.
In some contexts, two things are said to be orthogonal if they are mutually exclusive.
In Euclidean vector spaces
In 2- or 3-dimensional Euclidean space, two vectors are orthogonal if their dot product is zero, i.e. they make an angle of 90° or π/2 radians. Hence orthogonality of vectors is a generalization of
the concept of perpendicular. In terms of vector subspaces, the orthogonal complement of a line is the plane perpendicular to it, and vice versa. Note however that there is no correspondence with
regards to perpendicular planes, because vectors in subspaces start from the origin.
In 4-dimensional Euclidean space, the orthogonal complement of a line is a hyperplane and vice versa, and that of a plane is a plane.
Several vectors are called pairwise orthogonal if any two of them are orthogonal, and a set of such vectors is called an orthogonal set. An orthogonal set is an orthonormal set if all its vectors are
unit vectors. Non-zero pairwise orthogonal vectors are always linearly independent.
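A quick numeric sanity check (our sketch, using the first set of vectors from the Examples section below): every pairwise dot product should come out to zero.

import numpy as np

vs = [np.array([1, 3, 2]),
      np.array([3, -1, 0]),
      np.array([1/3, 1, -5/3])]
for i in range(len(vs)):
    for j in range(i + 1, len(vs)):
        print(i, j, np.dot(vs[i], vs[j]))  # each is 0 (up to rounding)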
Orthogonal functions
It is common to use the following inner product for two functions f and g:
$\langle f, g\rangle_w = \int_a^b f(x)g(x)w(x)\,dx.$
Here we introduce a nonnegative weight function $w(x)$ in the definition of this inner product.
We say that those functions are orthogonal if that inner product is zero:
$\int_a^b f(x)g(x)w(x)\,dx = 0.$
We write the norms with respect to this inner product and the weight function as
$||f||_w = \sqrt{\langle f, f\rangle_w}$
The members of a sequence { f[i] : i = 1, 2, 3, ... } are:
$\langle f_i, f_j \rangle=\int_{-\infty}^\infty f_i(x) f_j(x) w(x)\,dx=||f_i||^2\delta_{i,j}=||f_j||^2\delta_{i,j}$
$\langle f_i, f_j \rangle=\int_{-\infty}^\infty f_i(x) f_j(x) w(x)\,dx=\delta_{i,j}$
$\delta_{i,j}=\left\{\begin{matrix}1 & \mathrm{if}\ i=j \\ 0 & \mathrm{if}\ i \neq j\end{matrix}\right\}$
is Kronecker's delta. In other words, any two of them are orthogonal, and the norm of each is 1 in the case of the orthonormal sequence. See in particular orthogonal polynomials.
• The vectors (1, 3, 2), (3, −1, 0), (1/3, 1, −5/3) are orthogonal to each other, since (1)(3) + (3)(−1) + (2)(0) = 0, (3)(1/3) + (−1)(1) + (0)(−5/3) = 0, (1)(1/3) + (3)(1) − (2)(5/3) = 0. Observe
also that the dot product of the vectors with themselves are the norms of those vectors, so to check for orthogonality, we need only check the dot product with every other vector.
• The vectors (1, 0, 1, 0, ...)^T and (0, 1, 0, 1, ...)^T are orthogonal to each other. Clearly the dot product of these vectors is 0. We can then make the obvious generalization to consider the
vectors in $\mathbb{Z}_2^n$:
$\mathbf{v}_k = \sum_{\begin{matrix}i=0\\ai+k < n\end{matrix}}^{n/a} \mathbf{e}_{ai+k}$
for some positive integer a, and for 1 ≤ k ≤ a − 1, these vectors are orthogonal, for example (1, 0, 0, 1, 0, 0, 1, 0)^T, (0, 1, 0, 0, 1, 0, 0, 1)^T, (0, 0, 1, 0, 0, 1, 0, 0)^T are orthogonal.
• Take the functions 2t + 3 and 5t^2 + t − 17/9 (one linear, one quadratic). These functions are orthogonal with respect to a unit weight function on the interval from −1 to 1; a numeric check
appears after this list. The product of these two functions is 10t^3 + 17t^2 − 7/9 t − 17/3, and now,
$\int_{-1}^{1} \left(10t^3+17t^2-{7\over 9}t-{17\over 3}\right)\,dt = \left[{5\over 2}t^4+{17\over 3}t^3-{7\over 18}t^2-{17\over 3}t\right]_{-1}^{1}$
$=\left({5\over 2}(1)^4+{17\over 3}(1)^3-{7\over 18}(1)^2-{17\over 3}(1)\right)-\left({5\over 2}(-1)^4+{17\over 3}(-1)^3-{7\over 18}(-1)^2-{17\over 3}(-1)\right)$
$={19\over 9}-{19\over 9}=0.$
• The functions 1, sin(nx), cos(nx) : n = 1, 2, 3, ... are orthogonal with respect to Lebesgue measure on the interval from 0 to 2π. This fact is basic in the theory of Fourier series.
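Here is the numeric check of the polynomial example promised above (a sketch using scipy's quadrature):

from scipy.integrate import quad

f = lambda t: (2*t + 3) * (5*t**2 + t - 17/9)
value, abs_err = quad(f, -1, 1)
print(value)  # ~0, confirming the functions are orthogonal on [-1, 1]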
Derived meanings
Other meanings of the word orthogonal evolved from its earlier use in mathematics.
In art the perspective imagined lines pointing to the vanishing point are referred to as 'orthogonal lines'.
Computer science
Orthogonality is a system design property which facilitates the making of complex designs feasible and compact. Orthogonality guarantees that modifying the technical effect produced by a component of
a system neither creates nor propagates side effects to other components of the system. The emergent behaviour of a system consisting of components should be controlled strictly by formal definitions
of its logic and not by side effects resulting from poor integration, i.e. non-orthogonal design of modules and interfaces. Orthogonality reduces testing and development time because it is easier to
verify designs that neither cause side effects nor depend on them.
For example, a car has orthogonal components and controls, e.g. accelerating the vehicle does not influence anything else but the components involved in the acceleration. On the other hand, a car
with non-orthogonal design might have, for example, its steering influence its braking (e.g. Electronic Stability Control), or its speed influence its suspension.^[1] Consequently, this usage is seen
to be derived from the use of orthogonal in mathematics: One may project a vector onto a subspace by projecting it onto each member of a set of basis vectors separately and adding the projections if
and only if the basis vectors are mutually orthogonal.
An instruction set is said to be orthogonal if any instruction can use any register in any addressing mode. This terminology results from considering an instruction as a vector whose components are
the instruction fields. One field identifies the registers to be operated upon, and another specifies the addressing mode. An orthogonal instruction set uniquely encodes all combinations of registers
and addressing modes.
Radio communications
In radio communications, multiple access schemes are orthogonal when a receiver can (theoretically) completely reject an arbitrarily strong unwanted signal. Examples of orthogonal schemes are TDMA
and FDMA. An example of a non-orthogonal scheme is asynchronous Code Division Multiple Access, CDMA.
Social sciences/Statistics/Econometrics
In the social sciences, variables that affect a particular result are said to be orthogonal if they are independent. That is to say that by varying each separately, one can predict the combined
effect of varying them jointly. If synergistic effects are present, the factors are not orthogonal. This meaning derives from the mathematical one, because orthogonal vectors are linearly
In taxonomy, an orthogonal classification is one in which no item is a member of more than one group, that is, the classifications are mutually exclusive.
In combinatorics, two n×n Latin squares are said to be orthogonal if their superimposition yields all possible n^2 combinations of entries. One can also have a more general definition of
combinatorial orthogonality.
Quantum mechanics
In quantum mechanics, two eigenstates of a Hermitian operator, $\psi_m$ and $\psi_n$, are orthogonal unless they are identical (i.e. m=n). This means, in Dirac notation, that $< \psi_m | \psi_n > = 0$
unless m=n, in which case $< \psi_m | \psi_n > = 1$. The fact that $< \psi_m | \psi_n > = 1$ holds because wavefunctions are normalized.
See also
References and external links
1. ↑ Lincoln Mark VIII speed-sensitive suspension (MPEG video). URL accessed on 2006-09-15. | {"url":"http://psychology.wikia.com/wiki/Orthogonality","timestamp":"2014-04-24T12:13:33Z","content_type":null,"content_length":"83803","record_id":"<urn:uuid:c139ba96-3592-47e2-b5d6-1e87788c9107>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: HEIGHT REDUCING PROBLEM ON ALGEBRAIC INTEGERS
Abstract. Let α be an algebraic integer and assume that it is expanding,
i.e., all of its conjugates lie outside the unit circle. We show several results
of the form Z[α] = B[α] with a certain finite set B ⊂ Z. This property is
called the height reducing property, which has attracted special interest in the study of self-
affine tilings. In particular we show that if the minimal polynomial of α is a quadratic or cubic trinomial,
then one can choose B = {0, ±1, . . . , ±(|N(α)| − 1)}, where N(α) stands
for the absolute norm of α over Q.
1. Introduction
Let α be an algebraic integer with conjugates α_1 = α, α_2, . . . , α_d lying
outside the unit circle (α itself included). Such numbers are called expanding
algebraic numbers. We are interested in the height reducing property of α,
that is
Z[α] = B[α]
for a certain finite set B ⊂ Z.
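A minimal illustration (our example, not taken from the paper): $\alpha = 2$ is expanding, and with $B = \{0, \pm 1\}$, which is exactly $\{0, \pm 1, \ldots, \pm(|N(2)| - 1)\}$, every integer in $\mathbb{Z}[2] = \mathbb{Z}$ admits a signed binary expansion
$n = \sum_{i \ge 0} b_i\, 2^i, \qquad b_i \in \{0, \pm 1\},$
so $\mathbb{Z}[2] = B[2]$ and the height reducing property holds.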
Lemma 1. If an algebraic integer α, |α| > 1, has the height reducing property,
then α is expanding.
Proof. Suppose α has the height reducing property with a finite set B ⊂ Z. First | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/023/4267904.html","timestamp":"2014-04-21T05:33:12Z","content_type":null,"content_length":"8184","record_id":"<urn:uuid:4acd241a-6abe-4da5-9581-1ec757583816>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mechanics question
May 21st 2009, 07:19 AM #1
Junior Member
Dec 2008
England - Warwickshire
Mechanics question
Hey i am stuck on this question i have found in a past paper can someone please help me solve this asap as i have a mechanics exam tomorrow and i need to be able to do all topics as well as
A steel girder AB has a weight 210N. It is held in equilibrium in a horizontal position by two vertical cables. One cable is attached to the end A. The other cable is attached to the point C on
the girder, where AC = 90cm. The girder is modeled as a uniform rod, and the cables as light inextensible strings.
Given that the tension in the cable at C is twice the tension in the cable at A, show that AB = 120cm.
Thanks to anyone who helps.
Hey i am stuck on this question i have found in a past paper can someone please help me solve this asap as i have a mechanics exam tomorrow and i need to be able to do all topics as well as
A steel girder AB has a weight 210N. It is held in equilibrium in a horizontal position by two vertical cables. One cable is attached to the end A. The other cable is attached to the point C on
the girder, where AC = 90cm. The girder is modeled as a uniform rod, and the cables as light inextensible strings.
Given that the tension in the cable at C is twice the tension in the cable at A, show that AB = 120cm.
Thanks to anyone who helps.
Did you draw a diagram?
From mine you can see where A, B and C are in relation to $T_1$ (at A) and $T_2$ (at C). From the question you can deduce $T_2 = 2T_1$
Resolving vertically: $T_1 + 2T_1 = 210$, so $T_1 = 70$ N and $T_2 = 140$ N.
Now take moments about A. The girder is uniform, so its weight acts at the midpoint, a distance $\frac{AB}{2}$ from A:
$140 \times 90 = 210 \times \frac{AB}{2} \rightarrow \frac{AB}{2} = 60$
$AB = 120$ cm. From the diagram you can check this is consistent: AB = AC + CB = 90 + 30 = 120cm
Edit1: oops my diagram is wrong - I'll upload the correct one soon
Edit 2: done
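A quick symbolic check of the equilibrium and moment equations above (an editor's sketch, not part of the original thread):

import sympy as sp

T, L = sp.symbols('T L', positive=True)  # tension at A (N), length AB (cm)
eqs = [sp.Eq(3*T, 210),                  # vertical equilibrium: T + 2T = 210
       sp.Eq(2*T*90, 210*L/2)]           # moments about A, weight at the midpoint
print(sp.solve(eqs, [T, L]))             # {T: 70, L: 120}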
May 21st 2009, 08:23 AM #2 | {"url":"http://mathhelpforum.com/math-topics/89934-mechanics-question.html","timestamp":"2014-04-16T17:54:06Z","content_type":null,"content_length":"34967","record_id":"<urn:uuid:26b6425e-a1a5-4eb3-b943-4701d97e5dca>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Saxon Math, Singapore or Math U See
May 18, 2013 at 11:48 PM
Feedback? Which one is the best? Trying to decide what to begin with for my 4.5 year old.
Oh that's a hard one. I don't know that there is a "best." I think that each individual child will have one that works better for them. So it would probably be best to decide which one you think
is best for your child.
My sons are using Math U See this upcoming school year. Basically I think that this curriculum will be great for them. I have also heard raving reviews about it. The other reason I chose this one
is because the subject matters are taught thoroughly before moving on to the next material.
At that age, my DS did the best with Liberty Math K from CLP. DD is that age and after a couple of misses, we settled on Lifepac because she needs mastery. Every kiddo is different.....
We love Saxon Math! I am using it for my two 6th graders and will be using it again next year as well as for my kindergartener. I love that it doesn't just jump to the next unit, it always keeps
a few of the previous concepts so that they don't forget them.
I have only used Singapore math K and we liked it, but I do not know how it compares to the other two
My 4 year old loves Miquon. We also use Singapore. We didn't like MUS.
We use Saxon. I've come to the conclusion that if you want your child to hate math then choose Saxon. It is a very good curriculum, but it is dry and the kids hate it after a while. But we are
sticking with it because we want their math skills to be excellent.
I have heard that Singapore is even better, in terms of student achievement, but I have not taken the time to look at it.
It really depends upon the child. We've used Saxon -- although not for a 4 yr old, to the extent there has been an overall plan, we start with Abeka and then move to Saxon for upper grades. I've
never used Math U See, but with our youngest who turned 6 in December, I wasn't completely satisfied with Abeka -- it seemed a bit too easy for him, so we moved to Singapore (Level 2 A) in the
last couple of months. The jury is still out, but it seems to be a bit more challenging.
The differences that I see: Abeka is big on review and repetition at the younger levels. You do need to have the "math races," as we call them, books, to work on addition and subtraction
combinations (for first grade). I suppose you could also do flash cards or get the combination pages off the internet, but it's very convenient to have the combinations that he's been studying
available without having to think about it.
Based on my experience with Saxon at the higher grades, its approach with respect to daily repetition of what has already been learned is very similar to Abeka, but it seems to me to jump around
a bit more with respect to what is introduced and when. In other words, you start one concept, and then you'll move to several other concept entirely, before returning for "part two" of the first
concept. This isn't a bad way of doing things for some kids. My daughter really does well with it. Our middle son, on the other hand, found that he did better if he FULLY understood a concept
before moving on to something else, and so we moved him to a more conventional math book (this is for algebra, though).
Singapore takes a more conventional approach. Thus, our youngest son's math book (Level 2A) starts with addition and subtraction, then moves on to multiplication and division, etc. There is still
review, but it's not daily. They do have an approach of "showing physically" before moving on to the concept, but it's not as strong as I understand Math U See to be -- although, again, I have not
used "Math U See."
If I were trying to choose between them, I'd focus on how your child learns. If he or she tends toward the physical and doesn't like to do memorization or "following the steps," I think strongly
about Math U See. If they tend to like to try to get the "big picture" and like to do a lot of different things all at once (i.e., they get bored doing 20 addition problems, but are happy if they
can do 3 addition, 3 subtraction, a few fractions, etc.) I'd think about Saxon. If they need to really focus on something for a while and "dig deep" in order to feel comfortable, but like to
"think it out," I'd go with Singapore.
And remember, if one doesn't work well, you can always switch. Singapore and Abeka, at least, are not super expensive.
Of those 3 I've only used Math U See and we love it. I tried a few others before settling with Math U See. I've heard Saxon is dry and boring, and I've heard good things about Singapore, but that
it's also very advanced.
I've only used Math U See from your list. My 1st grader only needa help reading the word problems but is able to do her books herself with minimal help. My 4yo ds loves the program as well. He
only really plays with the blocks BUT it is setting the stage for him :) I love how MUS is more self paced. I can add supplements as needed or we can speed through anything that is too
im a saxon math fan, used it for 20 something years now, i dont just do it straight out of the book, i make it more hands on for the kids also, cuz i know i would get bored just doing math
problems day after day!hahah | {"url":"http://mobile.cafemom.com/group/114079/forums/read/18532393/Saxon_Math_Singapore_or_Math_U_See?use_mobile=1","timestamp":"2014-04-21T02:03:43Z","content_type":null,"content_length":"33411","record_id":"<urn:uuid:77e6fe09-6237-45ff-9882-792faa580c98>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
OpenMx - Advanced Structural Equation Modeling
Hi metaSEM users,
I'm currently trying to use metaSEM to implement a network meta-analysis (i.e., a meta-analysis of different treatment comparisons; see Salanti, 2012). One important aspect of multivariate
implementations of network meta-analysis is that the between-studies takes on a particular structure (see Lu & Ades, 2004). For example, given AB, AC, and AD, one common way to constrain the matrix
is to do the following (basically, set the correlation between any two treatment comparisons equal to .5):
       AB          AC          AD
AB     s1^2        .5*s1*s2    .5*s1*s3
AC     .5*s1*s2    s2^2        .5*s2*s3
AD     .5*s1*s3    .5*s2*s3    s3^2
It looks like the way to impose this structure is through the RE.constraints argument. However, I'm a little in the dark about how to force the covariances to essentially be a function of the
variances. Can anyone help me?
Code sample pasted below for reference.
### Begin example ###
AB <- c(.5, .3, .2, .2, NA, NA, .1)
AC <- c(NA, .3, .4, NA, .2, .1, .5)
ys <- data.frame(AB, AC)
n1 <- 50
n2 <- 50
nT <- 150 # Assuming equal N per study arm; N = 50
# Sampling (co)variances: columns are var(AB), cov(AB, AC), var(AC)
vars <- matrix(c(1/n1 + 1/n2 + .5^2/(2*(n1 + n2)), NA, NA,
1/n1 + 1/n2 + .3^2/(2*nT), 1/n1 + .3 * .3 / (2*nT), 1/n1 + 1/n2 + .3^2/(2*nT),
1/n1 + 1/n2 + .2^2/(2*nT), 1/n1 + .2 * .4 / (2*nT), 1/n1 + 1/n2 + .4^2/(2*nT),
1/n1 + 1/n2 + .2^2/(2*(n1 + n2)), NA, NA,
NA, NA, 1/n1 + 1/n2 + .2^2/(2*(n1 + n2)),
NA, NA, 1/n1 + 1/n2 + .1^2/(2*(n1 + n2)),
1/n1 + 1/n2 + .1^2/(2*nT), 1/n1 + .1 * .5 / (2*nT), 1/n1 + 1/n2 + .5^2/(2*nT)),
nrow = 7, ncol = 3, byrow = T)
con <- matrix(0, ncol = 2, nrow = 2)
diag(con) <- "2*a"
# What to do with the covariances?
mod <- meta(y = ys, v = vars, RE.constraints = con)
Wed, 03/06/2013 - 00:27
The current version of
The current version of metaSEM (0.8-2) is not flexible enough to handle the covariance structure list above. I will try to implement it in the future releases. For the time being, you may need to
code it directly in OpenMx.
Multivariate meta-analysis can be formulated as a SEM, see Figure 5 in the following paper.
Suppose the the known sampling variance covariance matrix is V, R is the correlation matrix of the random effects, and SD is the diagonal matrix of the standard deviations of the random effects, the
model implied covariance matrix is
SD %&% R + V
In your case, R is a known correlation matrix of .5.
Hope it helps.
Mon, 03/18/2013 - 18:39
Some sample code
Mike, thanks so much for replying. I've read through the paper that you linked to, which was also very helpful.
Per your suggestion, I've attempted to fit a relatively simple network meta-analysis model using OpenMx (and a few functions from your metaSEM package). It's been a little tough for me to figure out
exactly the proper steps, so I proceeded by trying to reverse-engineer the code from your meta() function (I hope you don't mind).
I've fit a relatively simple model using some data that I fabricated. The data consist of studies comparing three conditions, A, B, and C. For simplicity, I'm assuming that only AB and BC comparisons
are observed (there are procedures for representing AC comparisons, but I'd rather make sure that I'm imposing my desired constraints before I tackle that part of the code). I am attempting to fit a
model that estimates two intercepts, one representing the estimated AB effect size and another representing the estimated BC effect size. I am attempting to impose the constraint that the
between-studies correlation between the AB and BC effect sizes is .5, and I'm attempting to allow separate estimates of the variance for AB and BC.
I wonder if I might impose upon you to glance through my code to make sure that I have successfully implemented my desired constraints. I'm only moderately familiar with OpenMx (and with the SEM
approach to multivariate meta-analysis in general), so I'm feeling paranoid that I may have made a mistake somewhere. My code is attached. I've made copious comments to myself about what each line is
doing to make sure that I understand what's going on in the code.
Attachment Size
SEM meta code.R 8.23 KB
Wed, 03/20/2013 - 11:16
I have never conducted a
I have never conducted a network meta-analysis before. From the code, it appears to be correct.
One (minor) issue is that the estimates of the SD are negative. Since they are squared in calculating the variance-covariance matrix, this may not be a critical issue. | {"url":"http://openmx.psyc.virginia.edu/thread/1986","timestamp":"2014-04-20T18:31:00Z","content_type":null,"content_length":"33142","record_id":"<urn:uuid:dae4f08c-9539-44b2-b312-463e6f4b14a6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
Acute triangulation
Assume that $S$ is a finite 2-dimensional simplicial complex equipped with a metric $d$ such that each triangle is isometric to a plane triangle (so $(S,d)$ is a polyhedral space).
Is it possible to subdivide the triangulation of $S$ so that it will have acute triangles only?
• It is well known that any triangle admits a triangulation with only acute triangles; one can see the proof in the following picture. But I do not see a way to fit such triangulations together.
• Such a triangulation would be useful in the proof of Zalgaller's theorem: any n-dimensional polyhedral space admits a length-preserving piecewise linear map to $\mathbb R^n$. Well, it would help
only if $n=2$, for larger $n$, we should say that a simplex is acute if it contains its circumcenter.
discrete-geometry mg.metric-geometry polyhedra
Related, though you are probably aware of it: It is shown in [Kopczynski, Pak, Przytycki, Acute triangulations of polyhedra and $\mathbb{R}^n$](http://mimuw.edu.pl/~pprzytyc/combinatorica.pdf) that the
4-cube and $\mathbb{R}^n$ for $n\ge 5$ do not admit acute triangulations, where acute means in their sense that all dihedral angles are smaller than 90°. – HenrikRüping May 15 '12 at 14:11
2 Answers
There is a theorem in Y. D. Burago, V. A. Zalgaller, "Polyhedral embedding of a net" (Russian), Vestnik Leningrad. Univ., 15 (1960), 66–80, which says that any 2-dimensional
polyhedral surface has an acute triangulation.
Another proof for this theorem can be found in "Acute and nonobtuse triangulations of polyhedral surfaces" by S. Saraf. Moreover they show that any polyhedral subdivision can be
further refined into a non-obtuse sub-triangulation.
Thank you so much. I was surprised that acute triangulations are so popular. Is anything known in the higher dimensional case? (acute simplex = simplex which contains its
circumcenter.) – Anton Petrunin May 15 '12 at 14:16
I have no idea. I think people in discrete geometry call this a "well centered mesh", but I haven't been able to find anything relevant. I know of the reference above from Igor
Pak's book and as it was mentioned in the comments, Pak has investigated acute triangulations in the sense of acute dihedral angles. – Gjergji Zaimi May 15 '12 at 15:14
Saraf's paper for free: web.mit.edu/shibs/www/acute.pdf – Anton Petrunin Aug 5 '12 at 14:10
Just to add to @Gjergji's answer: there are also algorithms to compute such things. See, for example, Erten and Ungor, CCCG 2007 and references therein. | {"url":"http://mathoverflow.net/questions/96988/acute-triangulation/96998","timestamp":"2014-04-21T15:16:15Z","content_type":null,"content_length":"58219","record_id":"<urn:uuid:00922a0a-9eee-49a6-a574-28a1856970f1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
You are looking at historical revision 29421 of this page. It may differ significantly from its current revision.
This page is maintained in the package's github repository.
This is a Chicken Scheme egg which solves constraint satisfaction problems.
A CSP is composed of a number of domain-variables that have a domain, a set of bindings they can assume, along with a number of constraints. This egg supports 5 kinds of constraints: efd (early
failure detection), fc (forward checking), vp (value propagation), gfc (generalized forward checking) and ac (arc consistency).
[procedure] (csp-solution domain-variables select)
Given a list of domain variables and a function to select which one to try to bind next (one can simply pass in first), this produces a solution to the CSP.
[procedure] (create-domain-variable domain)
Create a domain variable whose domain is domain.
[parameter] *csp-strategy*
[procedure] (assert-constraint! constraint domain-variables)
Assert a constraint, a function that returns a boolean, between a number of domain variables. The kind of constraint is determined by inspecting *csp-strategy*. Valid types are: efd (early failure
detection), fc (forward checking), vp (value propagation), gfc (generalized forward checking) and ac (arc consistency). The default is ac. For constraints of low arity, a small number of domain
variables, it will use an optimized version of the constraint propagation code.
[procedure] (bound? domain-variable)
[procedure] (binding domain-variable)
Determine if the domain variable is bound and what its binding is.
[procedure] (assert-unary-constraint-efd! constraint x)
[procedure] (assert-binary-constraint-efd! constraint x y)
[procedure] (assert-ternary-constraint-efd! constraint x y z)
[procedure] (assert-unary-constraint-fc! constraint x)
[procedure] (assert-binary-constraint-fc! constraint x y)
[procedure] (assert-ternary-constraint-fc! constraint x y z)
[procedure] (assert-unary-constraint-vp! constraint x)
[procedure] (assert-binary-constraint-vp! constraint x y)
[procedure] (assert-ternary-constraint-vp! constraint x y z)
[procedure] (assert-unary-constraint-gfc! constraint x)
[procedure] (assert-binary-constraint-gfc! constraint x y)
[procedure] (assert-ternary-constraint-gfc! constraint x y z)
[procedure] (assert-unary-constraint-ac! constraint x)
[procedure] (assert-binary-constraint-ac! constraint x y)
[procedure] (assert-ternary-constraint-ac! constraint x y z)
Assert each of the 5 kinds of constraints between domain-variables x, y and z. You can always use assert-constraint! instead and it will default to these functions if your constraint is of low arity.
[procedure] (assert-constraint-efd! constraint ds)
[procedure] (assert-constraint-fc! constraint ds)
[procedure] (assert-constraint-vp! constraint ds)
[procedure] (assert-constraint-gfc! constraint ds)
[procedure] (assert-constraint-ac! constraint ds)
Assert each of the 5 kinds of constraints between the list of domain-variables ds. You can always use assert-constraint! instead as it will pick an optimized version for each of the above if the
arity of the constraint is low.
[procedure] (attach-before-demon! demon x)
[procedure] (attach-after-demon! demon x)
[procedure] (restrict-domain! x domain)
Only of interest to implementers.
This solves Project Euler problem 43.
;; Each of the ten digits is a domain variable over 0..9.
(use traversal nondeterminism csp)
(let* ((ds (map-n (lambda _ (create-domain-variable (map-n identity 10))) 10))
       ;; d3 turns a list of three digits into the 3-digit number they spell.
       (d3 (lambda (ns) (+ (* (first ns) 100) (* (second ns) 10) (third ns))))
       ;; (div n) is a constraint: the 3-digit number is divisible by n.
       (div (lambda (n) (lambda ns (= (modulo (d3 ns) n) 0))))
       ;; (nthd a b) selects the digits in positions a through b (1-indexed).
       (nthd (lambda (a b) (sublist ds (- a 1) b))))
  (assert-constraint! (div 17) (nthd 8 10))
  (assert-constraint! (div 13) (nthd 7 9))
  (assert-constraint! (div 11) (nthd 6 8))
  (assert-constraint! (div 7) (nthd 5 7))
  (assert-constraint! (div 5) (nthd 4 6))
  (assert-constraint! (div 3) (nthd 3 5))
  (assert-constraint! (div 2) (nthd 2 4))
  ;; All ten digits must be pairwise distinct.
  (map-all-pairs (lambda l (assert-constraint! (lambda (a b) (not (= a b))) l)) ds)
  ;; Sum the numbers spelled out by all solutions.
  (foldl + 0
         (map (lambda (l) (foldl (lambda (a b) (+ (* a 10) b)) 0 l))
              (all-values (csp-solution ds last)))))
Copyright 1993-1995 University of Toronto. All rights reserved.
Copyright 1996 Technion. All rights reserved.
Copyright 1996 and 1997 University of Vermont. All rights reserved.
Copyright 1997-2001 NEC Research Institute, Inc. All rights reserved.
Copyright 2002-2013 Purdue University. All rights reserved.
Contact Andrei Barbu, andrei@0xab.com. Originally written by Jeff Siskind.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License
along with this program. If not, see http://www.gnu.org/licenses. | {"url":"http://wiki.call-cc.org/eggref/4/csp?action=show&rev=29421","timestamp":"2014-04-21T09:44:18Z","content_type":null,"content_length":"9730","record_id":"<urn:uuid:5a8d322e-e490-4ac9-9adb-ae35e318eeee>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Effectiveness of Group Theory in Quantum Mechanics
Eugene Wigner and Hermann Weyl led the way in applying the theory of group representations to the newly formulated theory of quantum mechanics starting in 1927. My talk will focus, first, on two
aspects of this early work. Physicists had long exploited symmetries as a way of simplifying problems within classical physics. Wigner recognized that the theory of group representations would
similarly have enormous payoff in quantum mechanics, allowing him to solve problems in atomic spectroscopy "almost without calculation." Here I will describe the novel aspects of symmetry in QM
that Wigner clarified in the series of papers leading up to his 1931 textbook (Wigner's theorem, projective representations, etc.). The second aspect is less well-known: Weyl (1927) argued that group
theory could also be used to address foundational questions in quantum mechanics, leading to a reformulation of the classical commutation relations and a proposal for quantization. Weyl's program had
much less immediate impact, although it led to the Stone-von Neumann theorem and to Mackey's imprimitivity theorem. As a final historical point, I argue that in this early work the theory of group
representations was optional (as emphasized by Slater and others) in a sense that it was not in particle physics in the 1960s. The closing section of the talk turns to philosophical morals that have
been drawn from this historical episode, in particular claims regarding ontic structural realism (French, Ladyman) and the group-theoretic constitution of objects (Castellani). | {"url":"http://www.perimeterinstitute.ca/videos/effectiveness-group-theory-quantum-mechanics","timestamp":"2014-04-20T10:38:29Z","content_type":null,"content_length":"27254","record_id":"<urn:uuid:58921bfa-29ea-46c6-bf99-36859a4585a4>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Through the Ages : A Gentle History for Teachers and Others
ISBN: 9780883857366 | 0883857367
Edition: Revised
Format: Hardcover
Publisher: Mathematical Assn of Amer
Pub. Date: 12/1/2003
<philosophical terminology> In the traditional square of opposition, the relationship between a universal proposition and its corresponding particular proposition: an I proposition is the subaltern of its A proposition, and an O proposition is the subaltern of its E proposition. For example, "Some larks are birds" is subaltern to "All larks are birds", and "Some robins are not fish" is subaltern to "No robins are fish". Subalternation is a reliable pattern of inference only on the assumption of existential import for universal propositions.
[A Dictionary of Philosophical Terms and Names]