Videos for mistakes - Homework Help Videos - Brightstorm
11 Videos for "mistakes"
How to avoid making an order of operations mistake.
How to avoid making a mistake when squaring a binomial.
How to avoid making a mistake when dealing with exponents.
How to avoid making a mistake when dealing with logarithms.
How to avoid making a mistake when taking the square root of a sum/difference.
How to avoid making a mistake when simplifying rational expressions.
How to use the method of substitution to avoid common integration mistakes.
How to use the method of substitution to avoid common integration mistakes.
How to use the method of substitution to avoid common integration mistakes.
How to use the method of substitution to avoid common integration mistakes.
David Swigon
Publications &
SNP Meeting
Math Kangaroo
University of Pittsburgh
Department of Mathematics
511 Thackeray Hall
Pittsburgh, PA 15260
Tel: (412) 624-4689
Fax: (412) 624-8397
My research interests are in the area of mathematical biology, in particular, construction of mathematical models of biological systems within the framework of theories of continuum mechanics,
dynamical systems, and stochastic dynamics.
DNA mechanics
Mathematical Immunology
I am interested in the development of mathematical models of the in-host human immune response to influenza A virus infection and the relation between immunity and inflammation. My collaborators and I have developed an ODE model for the individual response, and an ensemble model capable of characterizing probabilistic outcomes of treatment scenarios.
Cell Migration
We are developing a mathematical model of the migration of enterocytes during the intestinal wound-healing process. The model is based on the novel assumption of elastic deformation of the cell layer and incorporates cell mobility, adhesion, and proliferation.
Math Kangaroo
With my colleague Piotr Hajlasz I organize a local testing site for Math Kangaroo, an international mathematics competition for children in grades 2-12. For more information see the link on the left or our article in the Post-Gazette.
Compare, Order, and Round Whole Numbers and Money
An important application of place value is its use in the comparison of numbers. When comparing 51,432 and 9,567, students will note that the first number has five digits and the second number has
four digits. They can thus conclude that 51,432 > 9,567. This method is justified by the fact that for whole numbers the least 2-digit number (10) is greater than the greatest 1-digit number (9); the
least 3-digit number (100) is greater than the greatest 2-digit number (99); the least 4-digit number (1,000) is greater than the greatest 3-digit number (999); and so on.
To compare two whole numbers with the same number of digits, students begin by comparing digits with the same place value, starting from the left. The number that has the greater digit in that place
is the greater number. For example, when comparing 21,487 and 21,612, students note that the digits in the ten thousands and thousands places are the same but the digits in the hundreds place are
different. The fact that 6 > 4 implies that 21,612 > 21,487. At this grade level, students should also realize that 6 > 4 can be written as 4 < 6, which implies that 21,487 < 21,612.
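To make the comparison procedure concrete, here is a small sketch added for illustration (it is not part of the original lesson; Python is used only as convenient notation). It compares two whole numbers exactly as described: first by the number of digits, then by the leftmost place where the digits differ.

def compare_whole(a, b):
    """Return '<', '=' or '>' comparing whole numbers a and b by place value."""
    sa, sb = str(a), str(b)
    if len(sa) != len(sb):            # more digits means a greater whole number
        return '>' if len(sa) > len(sb) else '<'
    for da, db in zip(sa, sb):        # scan from the leftmost place value
        if da != db:
            return '>' if da > db else '<'
    return '='

print(compare_whole(51432, 9567))    # '>'
print(compare_whole(21487, 21612))   # '<'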
The concept of place value is also applied to rounding. When rounding a number n to the nearest hundred, students determine the number closest to n that has all zeros to the right of the hundreds
place. For example, 5,378 rounded to the nearest hundred will be either 5,300 or 5,400. Since 5,400 = 5,378 + 22 and 5,378 = 5,300 + 78, 5,378 is closer to 5,400 than to 5,300. When both differences
are equal, the rounding rule calls for choosing the greater value. Thus, 5,378 rounded to the nearest hundred is 5,400.
The number line makes it easy to visualize why 5,378 rounds to 5,400: the point representing 5,378 is closer to the tick mark at 5,400 than to the tick mark at 5,300.
When a number lies halfway between two tick marks, it is rounded to the value represented by the tick mark to its right.
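The rounding rule, including the convention that a halfway value rounds to the greater hundred, can be sketched the same way (again an added illustration, not part of the teaching model):

def round_to_hundred(n):
    """Round the whole number n to the nearest hundred; ties round up."""
    lower = (n // 100) * 100                  # nearest hundred at or below n
    return lower + 100 if n - lower >= 50 else lower

print(round_to_hundred(5378))   # 5400
print(round_to_hundred(5350))   # 5400 (halfway case rounds to the greater value)
print(round_to_hundred(5321))   # 5300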
Money provides a natural introduction to decimal notation. The fact that the notation for whole numbers can be extended to represent rational numbers has many important consequences. Key to such an
extension is the use of a decimal point to the right of the digit in the ones place. The place-value rule that each digit has a place value equal to 10 times that of the digit to its right can be
extended to digits to the right of the decimal point and restated as “Each digit has a place value equal to one tenth that of the digit to its left.” This can be represented as follows.
│ Ones │ . │ Tenths │ Hundredths │
│ 1 │ │ 1/10 │ 1/100 │
│ 6 │ . │ 0 │ 5 │
In this chapter, decimals are shown as money amounts.
│ Dollars │ Dimes │ Pennies │
│ 6 │ 0 │ 5 │
This amount is written as $6.05.
When writing money amounts, a zero is placed in the hundredths place when there are no hundredths. Thus, three dollars and fifty cents is written as $3.50 rather than $3.5.
Comparing money amounts is similar to comparing whole numbers. Starting at the left, the numbers in each place are compared. When different digits occur, the greater digit indicates the greater amount.
$25.34 > $15.34
$25.54 > $25.34
$25.34 > $24.29
Encourage students to compare place values, not coins or bills. For example, suppose students are comparing these two groups of coins.
Students may assume that because 3 quarters > 1 quarter, or because the first group has more coins, the first group of coins has a greater total value than the second group. However, the values are $0.87 and
$0.90, respectively. Since 9 > 8 and $0.90 > $0.87, the second group of coins has the greater total value.
The usual method of making change is to start with the amount of the purchase and then count on, starting with the coin of the smallest denomination, until the amount tendered, or given, is reached.
The objective is to use the least number of coins and bills possible when making change. This may involve some analysis. Suppose an item that costs $2.08 is paid for with a $5 bill. The illustration
shows how a student might count on to find the amount of change that is due.
The total amount of change can be found by adding the values of the coins and bills given as change. This amount can also be found by subtracting the total cost from the amount tendered, or given.
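As an added check (not part of the teaching model), the change for the $2.08 purchase can be computed in cents, which avoids decimal rounding issues. Handing out the largest denominations first gives the same least-number-of-pieces answer as counting on does for ordinary U.S. coins and bills; the denominations listed below are assumed for this example.

DENOMINATIONS = [500, 100, 25, 10, 5, 1]   # $5 bill, $1 bill, quarter, dime, nickel, penny (in cents)

def make_change(cost_cents, tendered_cents):
    """Return {denomination: count} for the change due, largest pieces first."""
    change = tendered_cents - cost_cents
    pieces = {}
    for d in DENOMINATIONS:
        pieces[d], change = divmod(change, d)
    return pieces

# An item costing $2.08 paid for with a $5 bill leaves $2.92 in change:
print(make_change(208, 500))   # {500: 0, 100: 2, 25: 3, 10: 1, 5: 1, 1: 2}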
Teaching Model 2.1: Compare Numbers
'Mathematical apocrypha'
Issue 29
March 2004
Mathematical apocrypha: stories and anecdotes of mathematicians and the mathematical
By Steven G. Krantz
"Mathematical Apocrypha" is, as its subtitle intimates, a book of stories and anecdotes about mathematicians and the mathematical. However, in contrast with many books about mathematicians, Steven
Krantz focuses on contemporary figures such as Wiener, Littlewood and Hardy, and says very little about the usual myths regarding Pythagoras, Descartes or Euler.
The structure of the book, divided into six main chapters, already tells us about the whimsical and personal style of the author. Great foolishness, great affrontery, great ideas, great failures, great
pranks and great people are the titles of each of the chapters in which we can find short independent stories, sometimes no longer than two lines, about Einstein, Russell, Atiyah and many others.
Many of the anecdotes presented are derived from the author's direct or second-hand experience and, as he says in the preface, they "are in fact verifiable, and have been checked with other
witnesses". However, the fact that a story is more likely to be true does not make it more interesting (otherwise we would always find fiction boring), and many of the stories can be seen as common
accounts of the personal lives of characters who happen to be mathematicians. Nevertheless, "Mathematical Apocrypha" is, most of the time, an entertaining read.
The book is also a good way of learning a few new names from the mathematics community of recent years and keeping up to date. With each anecdote we learn a little bit about these mathematicians'
lives, their students, the people they met, who they talked to, and their main contributions to mathematics. Throughout the book, the author manages to sketch a network of enthusiastic mathematicians
and their acquaintances, which, even if biased by his personal experience, helps us to a better understanding of the world of mathematics and the people behind it.
This book will appeal mainly to mathematicians, people working in the world of mathematics and academics looking for the latest mathematical gossip. For readers who are not familiar with the
characters, it might be hard to attach faces to the names and the stories might as a result seem rather colourless. However, if you have a taste for anecdote and are interested in those small facts
that reveal so much about character, this book is for you. Bear in mind, though, that in order to understand the humour behind some of these stories it is necessary to have at least a minimum
knowledge of mathematical culture.
The book's bottom line is that mathematicians are human beings - but of a very different kind. But are they really that eccentric, lunatic and absentminded? Sometimes the book is based on clichés and
stereotypes, but then again a story never becomes an interesting anecdote if it doesn't account for unconventional episodes.
In case "Mathematical Apocrypha" doesn't completely satisfy your need for anecdotes and stories, Krantz presents a section full of references for further reading at the end of the book, including a
few webpages such as the Anecdotes about Mathematicians and Logicians site.
Book details:
Mathematical apocrypha: stories and anecdotes of mathematicians and the mathematical
Steven G. Krantz
paperback - 228 pages (2002)
The Mathematical Association of America
ISBN: 0883855399
You can buy the book and help Plus at the same time by clicking on the link on the left to purchase from amazon.co.uk, and the link to the right to purchase from amazon.com. Plus will earn a small
commission from your purchase.
About the reviewer
Cristina Escoda is a second-year PhD student at the University of Cambridge working on the phenomenology of String Theory and Supersymmetry. Cristina is an enthusiastic writer very interested in science communication.
fluctuating results in basic algorithm
Hey all.
I wrote a piece of code that calculates the remaining usage time based on a system's current battery percentage (similar to a phone). It uses a basic y = mx + c model and a least-squares estimation (in descending order). It all works fine except that occasionally the time remaining will fluctuate significantly before returning to its linear relationship. It can return reasonable results for about an hour, then gives a few "rubbish" results, then goes back to normal.
here is an example of percentage/time remaining (minutes):
81\37 etc...
Does anyone have an idea as what can be causing this (assuming the equations are correct). I realize it's hard to judge without the code, but any ideas would be very much appreciated.
Is this because the inputs can have a few (very) bad readings?
Maybe robust regression instead of least squares will give you a more consistent result
Might be useful to plot your readings. Can a single "very busy" reading cause such a distortion?
All the inputs follow a linear slope and when plotted in excel give an R-Squared value of 0f 0.97, so there aren't any bad inputs or outliers.
It sounds like a type precision overflow error to me.
I tend to agree with Duoas, but how do I overcome that if I use float. Using double doesn't seem to change anything. Any ideas?
You'll need to use a little algebra to rearrange your calculations to reduce the likelihood of overflow. Sometimes this takes a little thought.
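To illustrate the kind of rearrangement being suggested — this is an added sketch, not the original poster's code, and it assumes the fit is an ordinary least-squares line through (time, percentage) samples — note that the "raw sums" slope formula (n*Sxy - Sx*Sy)/(n*Sxx - Sx*Sx) subtracts large, nearly equal quantities, which can overflow or cancel badly in single precision when the time values are large. Centring the data first keeps every intermediate quantity small:

def fit_line(times, percentages):
    """Least-squares slope m and intercept c using centred sums, which avoids
    the large-magnitude cancellation of the raw-sum formula."""
    n = len(times)
    tm = sum(times) / n
    pm = sum(percentages) / n
    sxx = sum((t - tm) ** 2 for t in times)
    sxy = sum((t - tm) * (p - pm) for t, p in zip(times, percentages))
    m = sxy / sxx
    c = pm - m * tm
    return m, c

def minutes_until_empty(times, percentages):
    """Time at which the fitted percentage line reaches 0, measured from the last sample."""
    m, c = fit_line(times, percentages)
    return -c / m - times[-1]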
Perhaps C++ isn't the language for the job if precision is in question here. You could potentially look into FORTRAN, I've never used the language but I know it was designed for scientific
calculations in mind. There might be some newer languages out there now rather than FORTRAN. I know Python has no overflow issues as far as integer multiplication goes, although I'm not entirely sure
how precise it is.
You won't get better precision from any other language, and bignums won't help improve speed.
C++ uses the FPU to do its floating-point calculations, which in turn uses extended floating-point precision (80 bits). Compare that to a high-level language that may not use the FPU, which in turn gives higher precision but a slower calculation time, since all a double is is two integers in a standardized IEEE format.
Try to calculate 10 ** (2 ** 19) in C++.
There is no easy fix. I guess I will just modify my equations and incorporate some form of real time filtering like a kalman filter.
Topic archived. No new replies allowed.
Precalculus - A Functional Approach to Graphing and Problem Solving
• Log of Both Sides Theorem
If A, B, and b are positive real numbers with b not equal to 1, then
log[b]A = log[b]B
is equivalent to A = B.
• Logarithmic Equations
Log Type I: The unknown is the logarithm.
Log Type II: The unknown is the base.
Log Type III: The logarithm of an unknown is equal to a number.
Log Type IV: The logarithm of an unknown is equal to the logarithm of a number.
• Laws of Logarithms
Let A, B, and b be positive numbers (b not equal to 1) and let p be any real number.
First law (Additive): log[b](AB) = log[b]A + log[b]B
Second law (Subtractive): log[b](A/B) = log[b]A - log[b]B
Third law (Multiplicative): log[b](A^p) = p log[b]A
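A quick numerical check of the three laws (an added example, with b = 2):

log[2](8 * 4) = log[2]32 = 5 = 3 + 2 = log[2]8 + log[2]4
log[2](8 / 4) = log[2]2 = 1 = 3 - 2 = log[2]8 - log[2]4
log[2](8^3) = log[2]512 = 9 = 3 * 3 = 3 * log[2]8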
© 2011 Karl J. Smith. All rights reserved.
trapezium rule, area under curve
June 3rd 2009, 02:22 PM #1
Super Member
Sep 2008
trapezium rule, area under curve
For this question, it does not state how many intervals I have to split the strips into, so how would I know what value of 'h' to take?
June 3rd 2009, 02:42 PM #2
The instructions say to use all the data in the table ... based on the table values, looks like $h = \Delta x = 4$ to me.
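For readers who want to check such a question numerically, here is a small sketch of the composite trapezium rule (added for illustration; the y-values below are placeholders, since the question's actual table is not reproduced in this thread). When all the tabulated points are used, h is simply the spacing between consecutive x-values — 4 here.

def trapezium(ys, h):
    """Composite trapezium rule for equally spaced samples ys with spacing h."""
    return h * (ys[0] / 2 + sum(ys[1:-1]) + ys[-1] / 2)

ys = [2.0, 3.1, 3.8, 4.2, 4.0, 3.5]   # placeholder data, NOT the question's table
print(trapezium(ys, 4))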
Diigo In Education
Gerald Carey on 30 Jul 11
Help the University of Oxford transcribe ancient papyrus rolls - for real.
Part of the Zooniverse (ie Galaxy Zoo)
web2write Idensen on 30 Jul 11
36 print optimized lessons based on the teacher / learner friendly methodology of SCC or Student Created Content.
View samples here - Preface: http://bit.ly/geMws5 Lessons: http://bit.ly/gylisE Teacher's Notes: http://bit.ly/dGSj16 Certificate: http://bit.ly/hnznO4 Extra Printables: http://bit.ly/eOousU
Wiki: http://teachlearnscc.pbworks.com/w/page/35979221/Teach%20%20Learn (all Lessons editable als doc-files)
I have been lucky enough to have taught the full range of our freshman/sophomore undergraduate offerings as both an onsite and online instructor.
While I have thoroughly enjoyed both formats - and very much so - I must admit that my experiences online have been *much* more positive than onsite instruction. Let me try and elucidate:
1. While in the onsite classroom you have the opportunity to think on your feet and respond in the moment to the students who speak, in the online classroom you are
able to meet *every* class member and challenge their minds and ideas. The students who would normally be lost in a classroom of 35-40 are met and developed each day or week at their level and pushed
to consider ideas they might not have considered.
2. I am able to reach the entire class through multimedia exhibits in each of the weekly units - journal articles, non-copyrighted film clips (and many from our university's purchased collection
under an agreement for both onsite classroom and online classroom use), photography, art, patents, etc, that the students would not see - or would otherwise ignore - in an onsite classroom. We
incorporate this information into our discussions and make it part of the larger whole of history.
3. Each student and I - on the phone during office hours or in e-mail - discuss the creation of their term papers - and discuss midterm and final "anxiety" issues - and as they are used to the online
format, and regular communication with me through the discussion boards, they respond much more readily than onsite students, whom I have found I have to pressure to talk to me.
4. I am able to accommodate students from around the country - and around the world. I have had enrolled in my class students from Japan, Indonesia, India, England - and many other countries. As a
result, I have set up a *very* specific Skype address *only* for use of my students. They are required to set up the time and day with me ahead of time and I need to approve that request, but for
them (and for some of my students scattered all over the state and US), the face time is invaluable in helping them feel "connected" - and I am more than happy to offer it.
5. As the software upgrades, the possibilities of what I can offer become more and more amazing, and the ease of use for both me - and for the students - becomes astronomically better. Many have
never known the software, so they don't notice it - but those who have taken online courses before cheer it on. Software does not achieve backwards.
As very few of these issues are met by the onsite classroom, I am leaning more and more toward the online classroom as the better mode of instruction. Yes, there are times I *really* miss the onsite
opportunities, but then I think of the above distinctions and realize that yes, I am where I should be, and virtually *ALL* the students are getting far more for their money than they would get in an
onsite classroom.
This is the wave of the future, and it holds such amazing promise. Already I think we are seeing clear and fruitful results, and if academics receive effective - and continuing - instruction and
support from the very beginning, I cannot imagine why one would ever go back. The only reason I can think of *not* doing this is if the instructor has his or her *own* fear of computers. Beyond that
- please, please jump on the bandwagon, swallow your fears, and learn how to do this with vigor. I don't think you will ever be sorry.
S B on 29 Jul 11
I am a graduate student at Sam Houston State University and before I started grad school I never had taken an online course before. My opinion then was that online courses were a joke and you
couldn't learn from taking a course online. Now my opinion has done a complete 180. The teachers post numerous youtube videos and other helpful tools for each assignment so that anyone can
successfully complete the assignment no matter what their technology skill level is. I do not see much difference between online and face-to-face now because of the way the instructors teach the
Are your students fed up with sharks eating their boats? Make sure it doesn't happen again with this shark themed place value maths game. Set the ability level and choose the correct answer from
three options.
A collection of amazing science YouTube videos with experiments and demonstrations.
A site that provides ready-made printable bingo cards and a whiteboard resource that generates random number facts questions.
A great set of maths addition games. Choose two numbers from the grid to make the sum correct.
Richard Bradshaw on 29 Jul 11
This essay interested me, because it attempts to synthesize the relationship between spirituality and politics.
Glenn Hervieux on 29 Jul 11
Anyone interested in Chromebooks might want to take the 60 minutes to get a nice overview of Chromebooks in education. Presented by Google and includes sharing by a teacher who piloted them in
his classroom.
Points in the plane imposing independent conditions: reference request
Does anybody know a reference for the following result: $d\ge 5$ points of $\mathbb P^2$ fail to impose independent conditions on curves of degree $d-3$ if and only if at least $d-1$ of these points
are collinear. As usual, "fail to impose independent conditions" means $h^0(\mathcal I_D(d-3))>h^0(\mathcal O_{\mathbb P^2}(d-3))-d$, where $D$ is the set of points in question.
I have written up a proof of that, but of course one should give a reference if there is one, which is in my opinion quite probable.
Thank you in advance,
ag.algebraic-geometry algebraic-curves
1 Answer
You mean "curves of degree $d-3$". A reference (for a more general result) is: D. Eisenbud, M. Green, and J. Harris, CayleyBacharach theorems and conjectures, Bull. Amer.
up vote 3 down vote Math. Soc. 33 (1996), 295–324.
$d-3$, yes. Many thanks both for the correction and the reference! – Serge Lvovski Jan 1 '13 at 18:43
Winner Of Dwyer Award Makes Learning Math Fun
To prepare for his class, David E. Williams darts around his classroom arranging desks as about 25 students pour into his Algebra I class.
After a few jokes and some small talk, Williams and his students get down to business.
On this particular weekday, the class is learning age and rate problems. Williams teaches the math by using an overhead projector and colored pens. Alternating between the projector and the
chalkboard, he draws helpful charts and pictures to map the formulas. | {"url":"http://articles.sun-sentinel.com/1990-05-04/news/9001060285_1_algebra-i-class-math-dwyer","timestamp":"2014-04-21T00:00:22Z","content_type":null,"content_length":"41679","record_id":"<urn:uuid:bd9971b9-ca97-4068-8a1c-1674c468d9ac>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
If 7.400 g of C6H6 is burned and the heat produced from the burning is added to 5691 g of water at 21 °C, what is the final temperature of the water?
The kJ for the C6H6 is 6542, and that is for 2 moles of it as well.
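A worked outline, added here since the thread itself gives no solution. It assumes, as the hint above says, that burning 2 mol of C6H6 releases 6542 kJ, and it takes the specific heat of water as 4.184 J/(g·°C) and the molar mass of C6H6 as 78.11 g/mol:

n(C6H6) = 7.400 g / 78.11 g/mol ≈ 0.0947 mol
q = 0.0947 mol × (6542 kJ / 2 mol) ≈ 310 kJ
ΔT = q / (m·c) = 310,000 J / (5691 g × 4.184 J/(g·°C)) ≈ 13 °C
T(final) ≈ 21 °C + 13 °C ≈ 34 °C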
A certain team has 12 members, including Joey. A three
Author Message
A certain team has 12 members, including Joey. A three [#permalink] 25 Apr 2012, 09:59
25% (low). Question Stats: (02:06) correct; 34% (01:08) wrong; based on 160 sessions.

Galiya (Manager; Joined: 15 Jan 2011; Posts: 107; Followers: 7):

A certain team has 12 members, including Joey. A three-member relay team will be selected as follows: one of the 12 members is to be chosen at random to run first, one of the remaining 11 members is to be chosen at random to run second, and one of the remaining 10 members is to be chosen at random to run third. What is the probability that Joey will be chosen to run second or third?

A. 1/1,320
B. 1/132
C. 1/110
D. 1/12
E. 1/6
Actually I have an official explanation of the right answer, but it's a bit illogical to me:

"There is a lot of excess wording to this question when it is really a simple concept. Each of the team members has an equal chance to be selected to run first, second, or third, and (perhaps obviously) no team member can be selected to run more than one of those. Therefore, from Joey's perspective, he has a 1/12 chance of running first, a 1/12 chance of running second, and a 1/12 chance of running third. Since he can't run both second AND third, the chances that he'll run second OR third is the sum of those two probabilities: 1/12 + 1/12 = 2/12 = 1/6."

But how is it possible to have a probability of 1/12 that Joey will run second or third, if the first runner has already started? After that we have just 11 members, and the probability should be 1/11 that Joey runs second, and accordingly 1/10 that he runs third.

Could you please share your thoughts on this?

Spoiler: OA
Re: A certain team has 12 members, including Joey. A three [#permalink] 25 Apr 2012, 10:37
Bunuel (Math Expert; Joined: 02 Sep 2009; Posts: 17307; Followers: 2873) — Expert's post:

Galiya wrote: [the question, official explanation, and doubt quoted above]

Standard approach:
(any but Joey)(Joey)(any) + (any but Joey)(any but Joey)(Joey) = 11/12 * 1/11 * 1 + 11/12 * 10/11 * 1/10 = 2/12.

Answer: E.

Another approach: actually even the OE has one more step than necessary: since there are two slots for Joey out of 12 possible, the probability is simply 2/12.
Consider this line: 12 members in a row. Now, what is the probability that Joey is 1st in that row? 1/12. What is the probability that he's 2nd? Again 1/12. What is the probability that he's 12th? Still 1/12. What is the probability that he's second or third? 1/12 + 1/12 = 2/12. What is the probability that he's in the last 6? 6/12...
Answer: E.
Hope it's clear.
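A quick way to sanity-check the 1/6 answer — added here, not taken from the thread — is a small simulation:

import random

def p_joey_second_or_third(trials=200_000, team_size=12):
    """Estimate the probability that member 0 ("Joey") runs leg 2 or leg 3."""
    joey, hits = 0, 0
    for _ in range(trials):
        legs = random.sample(range(team_size), 3)   # runners for legs 1, 2, 3
        if joey in legs[1:]:
            hits += 1
    return hits / trials

# p_joey_second_or_third() typically returns about 0.167, i.e. close to 1/6.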
Re: A certain team has 12 members, including Joey. A three [#permalink] 25 Apr 2012, 10:45
Galiya (Joined: 15 Jan 2011; Posts: 107; Followers: 7):

If we rephrase the question as "What is the probability that Joey will run second or third?", will we get the probability 1/11 + 1/10?
Re: A certain team has 12 members, including Joey. A three [#permalink] 26 Apr 2012, 12:05

Galiya (Manager; Joined: 15 Jan 2011; Posts: 107; Followers: 7):

Bunuel, I went through the additional stuff you had provided. FTB, I'm absolutely confused by this kind of question! Could you please give me an example where it would be appropriate to use the method with decreasing denominators (1/11 + 1/10), in contrast to the one above? E.g., the probability wouldn't be the same for each member within the team. I need to clarify for myself when to use which approach.
Re: A certain team has 12 members, including Joey. A three [#permalink] 07 Jun 2013, 07:15

lchen (Intern; Joined: 09 Apr 2013; Posts: 31; Location: United States; Concentration: Strategy, General; GMAT 1: 750 Q50 V41; GRE 1: 317 Q160 V157; GPA: 3.55) — 1 KUDOS received:

Bunuel, don't you mean the answer is E?

Here is another way to solve it:
1 - (chance Joey will run first) - (chance Joey will not run at all)
= 1 - \frac{1}{12} - (\frac{11}{12} * \frac{10}{11} * \frac{9}{10})
= 1 - \frac{1}{12} - \frac{9}{12}
= 1 - \frac{10}{12}
= \frac{2}{12} = \frac{1}{6}

Answer is E

Explanation: If Joey doesn't run first and he actually gets the chance to run, that means that he has to run either second or third.
Re: A certain team has 12 members, including Joey. A three [#permalink] 07 Jun 2013, 14:07
Zarrolou (VP; Status: Far, far away!; Joined: 02 Sep 2012; Posts: 1125; Location: Italy; Concentration: Entrepreneurship; GPA: 3.8; Followers: 92) — 2 KUDOS received:

Let's say that we have three spots to fill, _ _ _, one for each position.

Case Joey second: 11*1*10 — we can take 11 people for the first one, only one (Joey) for the second one, and 10 of the remaining for the third place.

Case Joey third: 11*10*1, with the same logic.

The total cases possible are 12*11*10 — this time we consider all 12 people at the beginning, with no limitations.

Probability = (11*1*10 + 11*10*1)/(12*11*10) = 2/12 = 1/6
Re: A certain team has 12 members, including Joey. A three [#permalink] 23 Oct 2013, 17:33
(Joined: 14 Dec 2011; Posts: 15; Location: India; Concentration: Finance, Technology; WE: Information Technology (Computer Software)):

Bunuel wrote: "Bumping for review and further discussion."

Alternative Solution:
Probability of Joey getting selected at second or third place (P) = 1 - (probability that Joey is either not selected at all (P1) or selected at first place (P2))

P1 = (Joey is not selected: first person is chosen out of 11, second out of 10, and third out of 9) / total possible outcomes = (11*10*9)/(12*11*10)
P2 = (Joey is selected first: second person is chosen out of 11 and third out of 10) / total possible outcomes = (1*11*10)/(12*11*10)

P1 + P2 = (11*10*9 + 1*11*10)/(12*11*10) = 10/12

P = 1 - (P1 + P2) = 1 - 10/12 = 2/12
probability that Joey will be chosen to run second or third
Joined: 09 Jan 2013 Means, Chosen Second = Not chosen first * chosen second -------------------(A)
Chosen third = Not chosen first * not chosen second * chosen third. --------(B)
Posts: 68
Total: (A)+(B) = (11/12)*(1/11) + (11/12)*(10/11)*(1/10)
Followers: 0 =1/6
Hence E
Kudos [?]: 10 [0],
given: 6
gmatclubot Re: A certain team has 12 members, including Joey. A three [#permalink] 31 Oct 2013, 21:32
Similar topics Author Replies Last post
A certain club has 10 members, including Harry. One of the gk3.14 2 24 Aug 2006, 18:20
8 A certain club has 10 members, including Harry. One of the Himalayan 17 08 Jan 2007, 23:22
1 A certain team has 12 members, including Joey. A three-membe dreambeliever 13 23 Oct 2011, 12:16
11 A certain club has 10 members, including Harry. One of the Bunuel 11 25 Jun 2012, 01:31
2 A certain club has 10 members, including Harry. One of the akuma86 8 24 Jul 2012, 17:55 | {"url":"http://gmatclub.com/forum/a-certain-team-has-12-members-including-joey-a-three-131321.html?sort_by_oldest=true","timestamp":"2014-04-18T14:05:32Z","content_type":null,"content_length":"249866","record_id":"<urn:uuid:5c780c95-2a7d-4bbf-915e-ca487b6e2d67>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00051-ip-10-147-4-33.ec2.internal.warc.gz"} |
Additive trees
Phylogenetic Trees and Multiple Alignments
Additive trees
A generalization of ultrametric trees are additive trees. Recall that in an ultrametric tree the number of mutations was assumed to be proportional to the temporal distance of a node from the ancestor, and it was also assumed that mutations took place at the same rate along all paths. An ultrametric tree therefore has a root, and the distance from the root to every leaf is constant. It is a fact, however, that the evolutionary clock runs at different rates for different species, and even for different regions in, e.g., a protein sequence. An unrooted phylogenetic tree is a reflection of our ignorance as to where the common ancestor lies. All nodes of an additive tree except for the leaves have degree three; an additive tree is therefore an unrooted binary tree.

Definition: The additional requirement for an additive metric, beyond the usual metric axioms, is the four-point condition below.

An additive tree is also characterized by the four-point condition: any 4 points can be renamed i, j, k, l such that

d(i,j) + d(k,l) <= d(i,k) + d(j,l) = d(i,l) + d(j,k).
The tree construction from an additive metric works by successive insertion. There is exactly one tree topology that allows for realization of an additive metric.
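As a small illustration (added here, not from the original notes): take four leaves A, B, C, D joined by a central edge of length 1, with pendant edge lengths 2, 3, 4, 5 respectively, A and B on one side of the central edge and C and D on the other. The resulting path-length distances are d(A,B) = 5, d(C,D) = 9, d(A,C) = 7, d(A,D) = 8, d(B,C) = 8, d(B,D) = 9, and indeed

d(A,B) + d(C,D) = 14 <= d(A,C) + d(B,D) = d(A,D) + d(B,C) = 16,

so the four-point condition holds, with the two larger pairwise sums equal, as the condition requires.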
Comments are very welcome.
Case-control studies
April 30th 2010, 12:28 AM #1
Junior Member
Aug 2009
Case-control studies
Hello. This is more of a theoretical question. I've just started studying biostatistics and I'm having some doubts. I'm focusing on case-control studies.

First, the classical result of Anderson (1972) and Prentice and Pyke (1979) says that a case-control study, when a logistic model is being used, can be viewed as a prospective study; the case-control character can be ignored.

After reading that I thought everything was done, that there was nothing more to study. According to that result you can estimate your coefficients (apart from the intercept) and they are maximum likelihood estimates. But no... There are many other ways of dealing with case-control studies, such as weighted likelihood, pseudo-likelihood, semi-parametric likelihood... but why?

Is there a limitation to that result obtained by Anderson? Are these new methodologies broader? Or are they "just" trying to estimate the coefficients more efficiently?

One last question: when I use software (such as R or others) to fit a logistic regression to case-control data, what methodology does it use?
Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken
I remember vividly Jon Bentley's first Algorithms lecture at CMU, where he asked all of us incoming Ph.D. students to write a binary search, and then dissected one of our implementations in front of the class. Of course it was broken, as were most of our implementations. This made a real impression on me, as did the treatment of this material in his wonderful Programming Pearls (Addison-Wesley, 1986; Second Edition, 2000). The key lesson was to carefully consider the invariants in your programs.

Fast forward to 2006. I was shocked to learn that the binary search program that Bentley proved correct and subsequently tested in Chapter 5 of Programming Pearls contains a bug. Once I tell you what it is, you will understand why it escaped detection for two decades. Lest you think I'm picking on Bentley, let me tell you how I discovered the bug: The version of binary search that I wrote for the JDK contained the same bug. It was reported to Sun recently when it broke someone's program, after lying in wait for nine years or so.
So what's the bug? Here's a standard binary search, in Java. (It's one that I wrote for the JDK.)
1: public static int binarySearch(int[] a, int key) {
2: int low = 0;
3: int high = a.length - 1;
5: while (low <= high) {
6: int mid = (low + high) / 2;
7: int midVal = a[mid];
9: if (midVal < key)
10: low = mid + 1;
11: else if (midVal > key)
12: high = mid - 1;
13: else
14: return mid; // key found
15: }
16: return -(low + 1); // key not found.
17: }
The bug is in this line:
6: int mid = (low + high) / 2;
In Programming Pearls, Bentley says that the analogous line "sets m to the average of l and u, truncated down to the nearest integer." On the face of it, this assertion might appear correct, but it fails for large values of the int variables low and high. Specifically, it fails if the sum of low and high is greater than the maximum positive int value (2^31 - 1). The sum overflows to a negative value, and the value stays negative when divided by two. In C this causes an array index out of bounds with unpredictable results. In Java, it throws ArrayIndexOutOfBoundsException.
This bug can manifest itself for arrays whose length (in elements) is 2^30 or greater (roughly a billion elements). This was inconceivable back in the '80s, when Programming Pearls was written, but it is common these days at Google and other places. In Programming Pearls, Bentley says "While the first binary search was published in 1946, the first binary search that works correctly for all values of n did not appear until 1962." The truth is, very few correct versions have ever been published, at least in mainstream programming languages.
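To see the failure numerically (an added illustration; Python is used here only to mimic 32-bit two's-complement wraparound): with low = 1,500,000,000 and high = 2,000,000,000 — both perfectly legal indices into a large array — the sum exceeds 2^31 - 1 and wraps negative, while the rearranged form stays in range.

def to_int32(x):
    """Interpret x as a signed 32-bit integer (two's-complement wraparound)."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

low, high = 1_500_000_000, 2_000_000_000
bad_mid = to_int32(low + high) // 2       # -397483648: a negative "index"
good_mid = low + (high - low) // 2        # 1750000000: the correct midpoint
print(bad_mid, good_mid)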
So what's the best way to fix the bug? Here's one way:
6: int mid = low + ((high - low) / 2);
Probably faster, and arguably as clear is:
6: int mid = (low + high) >>> 1;
In C and C++ (where you don't have the >>> operator), you can do this:

6: mid = ((unsigned int)low + (unsigned int)high) >> 1;
And now we know the binary search is bug-free, right? Well, we strongly suspect so, but we don't know. It is not sufficient merely to prove a program correct; you have to test it too. Moreover, to be really certain
that a program is correct, you have to test it for all possible input values, but this is seldom feasible. With concurrent programs, it's even worse: You have to test for all internal states, which
is, for all practical purposes, impossible.
The binary-search bug applies equally to mergesort, and to other divide-and-conquer algorithms. If you have any code that implements one of these algorithms, fix it now before it blows up. The
general lesson that I take away from this bug is humility: It is hard to write even the smallest piece of code correctly, and our whole world runs on big, complex pieces of code.
We programmers need all the help we can get, and we should never assume otherwise. Careful design is great. Testing is great. Formal methods are great. Code reviews are great. Static analysis is
great. But none of these things alone are sufficient to eliminate bugs: They will always be with us. A bug can exist for half a century despite our best efforts to exterminate it. We must program
carefully, defensively, and remain ever vigilant.
Update 17 Feb 2008: Thanks to Antoine Trux, Principal Member of Engineering Staff at Nokia Research Center Finland, for pointing out that the original proposed fix for C and C++ (Line 6) was not guaranteed to work by the relevant C99 standard (INTERNATIONAL STANDARD - ISO/IEC - 9899 - Second edition - 1999-12-01), which says that if you add two signed quantities and get an overflow, the result is undefined. The older C Standard, C89/90, and the C++ Standard are both identical to C99 in this respect. Now that we've made this change, we know that the program is correct ;)
• Programming Pearls - Highly recommended. Get a copy today!
• A 2003 paper by Salvatore Ruggieri discussing a related problem - The problem is a bit more general but perhaps less interesting: the average of two numbers of arbitrary sign. The paper does not
discuss performance, and its solution is not fast enough for use in the inner loop of a mergesort.
The polymorphic pi-calculus: Theory and implementation
Results 1 - 10 of 71
- Information and Computation , 1998
"... INTRODUCTION Mobile computation, where independent agents roam widely distributed networks in search of resources and information, is fast becoming a reality. A number of programming languages,
APIs and protocols have recently emerged which seek to provide high-level support for mobile agents. These ..."
Cited by 201 (19 self)
INTRODUCTION Mobile computation, where independent agents roam widely distributed networks in search of resources and information, is fast becoming a reality. A number of programming languages, APIs
and protocols have recently emerged which seek to provide high-level support for mobile agents. These include Java [30], Odyssey [15], Aglets [19], Voyager [24] and the latest revisions of the
Internet protocol [25, 2]. In addition to these commercial efforts, many prototype languages have been developed and implemented within the programming language research community --- examples
include Linda [8, 9], Facile [16], Obliq [7], Infospheres [11], the join calculus [13], and Nomadic Pict [33]. In this paper we address the issue of resource access control for such languages.
Central to the paradigm of mobile computation are the notions of agent, resource and location. Agents are effective entities that perform computation and interact with other First publis
- IEEE Concurrency , 1999
"... We study the distributed infrastructures required for location-independent communication between migrating agents. These infrastructures are problematic: different applications may have very
different patterns of migration and communication, and require different performance and robustness propertie ..."
Cited by 103 (15 self)
We study the distributed infrastructures required for location-independent communication between migrating agents. These infrastructures are problematic: different applications may have very
different patterns of migration and communication, and require different performance and robustness properties; algorithms must be designed with these in mind. To study this problem we introduce an
agent programming language - Nomadic Pict. It is designed to allow infrastructure algorithms to be expressed as clearly as possible, as translations from a high-level language to a low level. The
levels are based on rigorously-defined process calculi, they provide sharp levels of abstraction. In this paper we describe the language and use it to develop an infrastructure for an example
application. The language and examples have been implemented; we conclude with a description of the compiler and runtime.
- In Bioconcur’04. ENTCS , 2004
"... Abstract. This paper presents an abstract machine for a variant of the stochastic pi-calculus, in order to correctly model the stochastic simulation of biological processes. The abstract machine
is proved sound and complete with respect to the calculus, and then used as the basis for implementing a ..."
Cited by 76 (10 self)
Abstract. This paper presents an abstract machine for a variant of the stochastic pi-calculus, in order to correctly model the stochastic simulation of biological processes. The abstract machine is
proved sound and complete with respect to the calculus, and then used as the basis for implementing a stochastic simulator. The correctness of the machine helps ensure that the simulator is correctly
implemented, giving greater confidence in the simulation results. A graphical representation for the pi-calculus is also presented, as a potential front-end to the simulator. 1
- In Internet Programming Languages, LNCS 1686 , 1998
"... We study communication primitives for interaction between mobile agents. They can be classified into two groups. At a low level there are location dependent primitives that require a programmer
to know the current site of a mobile agent in order to communicate with it. At a high level there are loca ..."
Cited by 65 (38 self)
We study communication primitives for interaction between mobile agents. They can be classified into two groups. At a low level there are location dependent primitives that require a programmer to
know the current site of a mobile agent in order to communicate with it. At a high level there are location independent primitives that allow communication with a mobile agent irrespective of its
current site and of any migrations. Implementation of these requires delicate distributed infrastructure. We propose a simple calculus of agents that allows implementations of such distributed
infrastructure algorithms to be expressed as encodings, or compilations, of the whole calculus into the fragment with only location dependent communication. These encodings give executable
descriptions of the algorithms, providing a clean implementation strategy for prototype languages. The calculus is equipped with a precise semantics, providing a solid basis for understanding the
algorithms and for reasoning about their correctness and robustness. Two sample infrastructure algorithms are presented as encodings.
, 1997
"... We introduce a calculus which is a direct extension of both the and the π calculi. We give a simple type system for it, that encompasses both Curry's type inference for the -calculus, and
Milner's sorting for the π-calculus as particular cases of typing. We observe that the various continuation pas ..."
Cited by 64 (2 self)
We introduce a calculus which is a direct extension of both the λ and the π calculi. We give a simple type system for it, that encompasses both Curry's type inference for the λ-calculus, and Milner's sorting for the π-calculus as particular cases of typing. We observe that the various continuation passing style transformations for λ-terms, written in our calculus, actually correspond to encodings already given by Milner and others for evaluation strategies of λ-terms into the π-calculus. Furthermore, the associated sortings correspond to well-known double negation translations on types. Finally we provide an adequate cps transform from our calculus to the π-calculus. This shows that the latter may be regarded as an "assembly language", while our calculus seems to provide a better programming notation for higher-order concurrency.
- In Proceedings of ICALP '98, LNCS 1443 , 1998
"... This paper considers how locality restrictions on the use of capabilities can be enforced by a static type system. A distributed π-calculus with a simple reduction semantics is introduced,
integrating location and migration primitives from the Distributed Join Calculus with asynchronous π communicat ..."
Cited by 62 (11 self)
This paper considers how locality restrictions on the use of capabilities can be enforced by a static type system. A distributed π-calculus with a simple reduction semantics is introduced,
integrating location and migration primitives from the Distributed Join Calculus with asynchronous π communication. It is given a type system in which the input and output capabilities of channels
may be either global, local or absent. This allows compile-time optimization where possible but retains the expressiveness of channel communication. Subtyping allows all communications to be invoked
uniformly. We show that the most local possible capabilities for internal channels can be inferred automatically.
- Proceedings of the 1999 European Symposium on Programming, number 1576 in Lecture Notes in Computer Science , 1999
"... Abstract. We define an extension of the π-calculus with a static type system which supports high-level specifications of extended patterns of communication, such as client-server protocols.
Subtyping allows protocol specifications to be extended in order to describe richer behaviour; an implemented ..."
Cited by 49 (6 self)
Abstract. We define an extension of the π-calculus with a static type system which supports high-level specifications of extended patterns of communication, such as client-server protocols. Subtyping
allows protocol specifications to be extended in order to describe richer behaviour; an implemented server can then be replaced by a refined implementation, without invalidating type-correctness of
the overall system. We use the POP3 protocol as a concrete example of this technique. 1
- Gilmore (Eds.), Proc. Int. Conf. Computational Methods in Systems Biology (CMSB’07 , 2007
"... Abstract. This paper presents a simulation algorithm for the stochastic π-calculus, designed for the efficient simulation of biological systems with large numbers of molecules. The cost of a
simulation depends on the number of species, rather than the number of molecules, resulting in a significant ..."
Cited by 42 (13 self)
Abstract. This paper presents a simulation algorithm for the stochastic π-calculus, designed for the efficient simulation of biological systems with large numbers of molecules. The cost of a
simulation depends on the number of species, rather than the number of molecules, resulting in a significant gain in efficiency. The algorithm is proved correct with respect to the calculus, and then
used as a basis for implementing the latest version of the SPiM stochastic simulator. The algorithm is also suitable for generating graphical animations of simulations, in order to visualise system
dynamics. 1
- MATH. STRUCT. COMPUT. SCI , 1998
"... An interpretation of Abadi and Cardelli's first-order Imperative Object Calculus into a typed pi-calculus is presented. The interpretation validates the subtyping relation and the typing
judgements of the Object Calculus, and is computationally adequate. The proof of computational adequacy makes use ..."
Cited by 41 (13 self)
An interpretation of Abadi and Cardelli's first-order Imperative Object Calculus into a typed pi-calculus is presented. The interpretation validates the subtyping relation and the typing judgements
of the Object Calculus, and is computationally adequate. The proof of computational adequacy makes use of (a pi-calculus version) of ready simulation, and of a factorisation of the interpretation
into a functional part and a very simple imperative part. The interpretation can be used to compare and contrast the Imperative and the Functional Object Calculi, and to prove properties about them,
within a unified framework.
, 2002
"... The theory of relational parametricity and its logical relations proof technique are powerful tools for reasoning about information hiding in the polymorphic -calculus. We investigate the
application of these tools in the security domain by defining a cryptographic -calculus---an extension of the ..."
Cited by 38 (2 self)
The theory of relational parametricity and its logical relations proof technique are powerful tools for reasoning about information hiding in the polymorphic λ-calculus. We investigate the application of these tools in the security domain by defining a cryptographic λ-calculus---an extension of the standard simply typed λ-calculus with primitives for encryption, decryption, and key generation--- and introducing syntactic logical relations (in the style of Pitts and Birkedal-Harper) for this calculus that can be used to prove behavioral equivalences between programs that use encryption. We
Maintainer diagrams-discuss@googlegroups.com
Safe Haskell None
data Envelope v
Every diagram comes equipped with an envelope. What is an envelope?
Consider first the idea of a bounding box. A bounding box expresses the distance to a bounding plane in every direction parallel to an axis. That is, a bounding box can be thought of as the
intersection of a collection of half-planes, two perpendicular to each axis.
More generally, the intersection of half-planes in every direction would give a tight "bounding region", or convex hull. However, representing such a thing intensionally would be impossible; hence
bounding boxes are often used as an approximation.
An envelope is an extensional representation of such a "bounding region". Instead of storing some sort of direct representation, we store a function which takes a direction as input and gives a
distance to a bounding half-plane as output. The important point is that envelopes can be composed, and transformed by any affine transformation.
Formally, given a vector v, the envelope computes a scalar s such that
• for every point u inside the diagram, if the projection of (u - origin) onto v is s' *^ v, then s' <= s.
• s is the smallest such scalar.
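Stated compactly (this formula is an editorial paraphrase of the two conditions above, not part of the original documentation; here o denotes the local origin, D the set of points covered by the diagram, and ⟨·,·⟩ the inner product):

$$\operatorname{env}_D(v) \;=\; \sup_{u \in D} \frac{\langle u - o,\; v\rangle}{\langle v,\; v\rangle}$$

so the separating half-plane in direction v passes through the point o + env_D(v) *^ v. Written this way it is also easy to see why envelopes compose: the envelope of two diagrams placed together is the pointwise maximum of their envelope functions, which is essentially what the Semigroup and Monoid instances listed just below provide.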
There is also a special "empty envelope".
The idea for envelopes came from Sebastian Setzer; see http://byorgey.wordpress.com/2009/10/28/collecting-attributes/#comment-2030. See also Brent Yorgey, Monoids: Theme and Variations, published in
the 2012 Haskell Symposium: http://www.cis.upenn.edu/~byorgey/pub/monoid-pearl.pdf; video: http://www.youtube.com/watch?v=X-8NCkD2vOw.
Action Name (Envelope v)
Show (Envelope v)
Ord (Scalar v) => Semigroup (Envelope v)
Ord (Scalar v) => Monoid (Envelope v)
(InnerSpace v, OrderedField (Scalar v)) => Juxtaposable (Envelope v)
(InnerSpace v, OrderedField (Scalar v)) => Enveloped (Envelope v)
(HasLinearMap v, InnerSpace v, Floating (Scalar v)) => Transformable (Envelope v)
(InnerSpace v, Fractional (Scalar v)) => HasOrigin (Envelope v)
The local origin of an envelope is the point with respect to which bounding queries are made, i.e. the point from which the input vectors are taken to originate.
(InnerSpace v, OrderedField (Scalar v)) => Alignable (Envelope v)
Newtype (QDiagram b v m) (DUALTree (DownAnnots v) (UpAnnots b v m) () (Prim b v))
class (InnerSpace (V a), OrderedField (Scalar (V a))) => Enveloped a
Enveloped abstracts over things which have an envelope.
Enveloped b => Enveloped [b]
Enveloped b => Enveloped (Set b)
(InnerSpace v, OrderedField (Scalar v)) => Enveloped (Envelope v)
Enveloped t => Enveloped (TransInv t)
(OrderedField (Scalar v), InnerSpace v) => Enveloped (Point v)
(InnerSpace v, HasBasis v, Ord (Basis v), AdditiveGroup (Scalar v), Ord (Scalar v), Floating (Scalar v)) => Enveloped (BoundingBox v)
Enveloped a => Enveloped (Located a)
The envelope of a Located a is the envelope of the a, translated to the location.
(InnerSpace v, OrderedField (Scalar v)) => Enveloped (FixedSegment v)
(InnerSpace v, OrderedField (Scalar v)) => Enveloped (Trail v)
(InnerSpace v, OrderedField (Scalar v)) => Enveloped (Path v)
(Enveloped a, Enveloped b, V a ~ V b) => Enveloped (a, b)
Enveloped b => Enveloped (Map k b)
(InnerSpace v, OrderedField (Scalar v)) => Enveloped (Segment Closed v) The envelope for a segment is based at the segment's start.
(InnerSpace v, OrderedField (Scalar v)) => Enveloped (Trail' l v) The envelope for a trail is based at the trail's start.
(HasLinearMap v, InnerSpace v, OrderedField (Scalar v)) => Enveloped (QDiagram b v m)
(OrderedField (Scalar v), InnerSpace v, HasLinearMap v) => Enveloped (Subdiagram b v m)
Diagram envelopes
withEnvelope :: (HasLinearMap (V a), Enveloped a, Monoid' m) => a -> QDiagram b (V a) m -> QDiagram b (V a) m
Use the envelope from some object as the envelope for a diagram, in place of the diagram's default envelope.
phantom :: (Backend b (V a), Enveloped a, Traced a, Monoid' m) => a -> QDiagram b (V a) m
phantom x produces a "phantom" diagram, which has the same envelope and trace as x but produces no output.
pad :: (Backend b v, InnerSpace v, OrderedField (Scalar v), Monoid' m) => Scalar v -> QDiagram b v m -> QDiagram b v m
pad s "pads" a diagram, expanding its envelope by a factor of s (factors between 0 and 1 can be used to shrink the envelope). Note that the envelope will expand with respect to the local origin, so
if the origin is not centered the padding may appear "uneven". If this is not desired, the origin can be centered (using, e.g., centerXY for 2D diagrams) before applying pad.
extrudeEnvelope :: (Ord (Scalar v), Num (Scalar v), AdditiveGroup (Scalar v), Floating (Scalar v), HasLinearMap v, InnerSpace v, Monoid' m) => v -> QDiagram b v m -> QDiagram b v m
extrudeEnvelope v d asymmetrically "extrudes" the envelope of a diagram in the given direction. All parts of the envelope within 90 degrees of this direction are modified, offset outwards by the
magnitude of the vector.
This works by offsetting the envelope distance proportionally to the cosine of the difference in angle, and leaving it unchanged when this factor is negative.
intrudeEnvelope :: (Ord (Scalar v), Num (Scalar v), AdditiveGroup (Scalar v), Floating (Scalar v), HasLinearMap v, InnerSpace v, Monoid' m) => v -> QDiagram b v m -> QDiagram b v m
intrudeEnvelope v d asymmetrically "intrudes" the envelope of a diagram away from the given direction. All parts of the envelope within 90 degrees of this direction are modified, offset inwards by
the magnitude of the vector.
Note that this could create strange inverted envelopes, where diameter v d < 0 .
Querying envelopes
envelopeVMay :: Enveloped a => V a -> a -> Maybe (V a)
Compute the vector from the local origin to a separating hyperplane in the given direction, or Nothing for the empty envelope.
envelopeV :: Enveloped a => V a -> a -> V a
Compute the vector from the local origin to a separating hyperplane in the given direction. Returns the zero vector for the empty envelope.
envelopePMay :: Enveloped a => V a -> a -> Maybe (Point (V a))
Compute the point on a separating hyperplane in the given direction, or Nothing for the empty envelope.
envelopeP :: Enveloped a => V a -> a -> Point (V a)
Compute the point on a separating hyperplane in the given direction. Returns the origin for the empty envelope.
diameter :: Enveloped a => V a -> a -> Scalar (V a)
Compute the diameter of a enveloped object along a particular vector. Returns zero for the empty envelope.
radius :: Enveloped a => V a -> a -> Scalar (V a)
Compute the "radius" (1/2 the diameter) of an enveloped object along a particular vector. | {"url":"http://hackage.haskell.org/package/diagrams-lib-0.7/docs/Diagrams-Envelope.html","timestamp":"2014-04-20T16:41:50Z","content_type":null,"content_length":"40880","record_id":"<urn:uuid:adfd545b-7be4-4845-a556-8671499b3f38>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
2D Dynamically allocated pointer arrays
I'm trying to create (insert subject)!
I get an error "missing ';' before type" on the line where I declare my pointer. Can anyone tell me what I'm doing wrong?
This program demonstrates recursive functions and dynamic, multi-dimensional pointer arrays.
It will calculate the total factorial value for multiple groups of items.
You enter the total number of items, and the number of groups you will have. Then you enter the
number of items in each group. The program error-checks to make sure the number of items
entered in each groups adds up to be the total number of items.
// Prototypes.
unsigned long factr (unsigned long n);
void main ()
// These are in order of appearance.
unsigned long num_items = 0;
unsigned long num_groups = 0;
unsigned long loop = 0;
unsigned long total = 0;
unsigned long quantity = 0;
unsigned long numerator = 0;
unsigned long denominator = 1;
printf ("\n\nHow many items total?\n");
scanf ("%u", &num_items);
printf ("\n\nHow many groups?\n");
scanf ("%u", &num_groups);
if (num_groups > num_items)
printf("\n\nError! Can't have more groups than items!\n\n");
exit (1);
// Array dimensioning.
unsigned long *group_total[num_groups][2];
group_total = malloc (num_groups*2*sizeof(int));
puts ("\n\nERROR! Not enough Memory!\n\n");
exit (1);
// Array initialization.
for (loop = 1; loop <= num_items; loop++)
group_total[loop][1] = 0;
group_total[loop][2] = 0;
loop = 0;
// Array data-entry.
for (loop = 1; loop <= num_groups; loop++)
printf ("\n\nHow many in group %d\n", loop);
scanf ("%u", &quantity);
group_total[loop][1] = quantity;
total = total + quantity;
loop = 0;
// Error checking and factorial calculations for each group
if (total = num_groups);
for (loop = 1; loop <= num_groups; loop++)
group_total[loop][2] = factr (quantity);
loop = 0;
printf ("\n\nTotal number of items not equal to total items entered!\n\n");
free (group_total);
exit (1);
total = 0;
// Final calculations.
numerator = factr (num_items);
for (loop = 1; loop <= num_groups; loop++)
denominator = denominator * group_total[loop][2];
total = numerator / denominator;
printf ("\n\nTotal factorial for this mess: %u\n\n", total);
total = 0;
loop = 0;
free (group_total);
// Factorial engine.
unsigned long factr (unsigned long n)
unsigned long answer;
if (n==1) return (1);
answer = n==0 ? 1: n * factr (n - 1); // recursive call
return (answer);
In C90, declare variables before any statements.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
I thought that was the problem... :-(
Can I make my pointer declaration a function by itself and retain the pointer after the function returns?
>Can I make my pointer declaration a function by itself and retain the pointer after the function returns?
I don't think I fully understand, but sure.
There are plenty of issues with your code that are handled in the FAQ.
for (loop = 1; loop <= num_items; loop++)
Arrays are indexed from 0 to N-1, not 1 to N.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
What I meant was:
The program asks the user how many groups there are and then declares a pointer array of that size. I can do this by calling a function that declares a pointer, but then when the function returns
the pointer is gone.
Since I'm already using a recursive function, I want to close all of the extra functions before I start "recursing". I guess I can call a function that sets up my pointer and then calls the
recursive function to perform my calculations, but is this going to "flood my stack" or whatever?
I know about the starting index of an array being zero but aren't pointers indexed starting at one?
What I meant was:
The program asks the user how many groups there are and then declares a pointer array of that size. I can do this by calling a function that declares a pointer, but then when the function returns
the pointer is gone.
Perhaps pass a pointer to pointer. Also,
I know about the starting index of an array being zero but aren't pointers indexed starting at one?
No. Pointers point.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
Ok, I've changed my code:
This program demonstrates recursive functions and dynamic, multi-dimensional pointer arrays.
It will calculate the total factorial value for multiple groups of items.
You enter the total number of items, and the number of groups you will have. Then you enter the
number of items in each group. The program error-checks to make sure the number of items
entered in each groups adds up to be the total number of items.
// Prototypes.
unsigned long factr (unsigned long num_items2, unsigned long num_groups2);
unsigned long factorial_engine (unsigned long n);
void main ()
unsigned long num_items = 0;
unsigned long num_groups = 0;
printf ("\n\nHow many items total?\n");
scanf ("%u", &num_items);
printf ("\n\nHow many groups?\n");
scanf ("%u", &num_groups);
if (num_groups > num_items)
printf("\n\nError! Can't have more groups than items!\n\n");
exit (1);
factr (num_items, num_groups);
unsigned long factr (unsigned long num_items2, unsigned long num_groups2)
// These are in order of appearance.
unsigned long loop = 0;
unsigned long total = 0;
unsigned long quantity = 0;
unsigned long numerator = 0;
unsigned long denominator = 1;
// Array dimensioning.
unsigned long *group_total;
group_total = malloc (num_groups2*2*sizeof(int));
puts ("\n\nERROR! Not enough Memory!\n\n");
exit (1);
// Array initialization.
for (loop = 1; loop <= num_items2; loop++)
group_total[loop][1] = 0;
group_total[loop][2] = 0;
loop = 0;
// Array data-entry.
for (loop = 1; loop <= num_groups2; loop++)
printf ("\n\nHow many in group %d\n", loop);
scanf ("%u", &quantity);
group_total[loop][1] = quantity;
total = total + quantity;
loop = 0;
// Error checking and factorial calculations for each group
if (total = num_groups2);
for (loop = 1; loop <= num_groups2; loop++)
group_total[loop][2] = factr (quantity);
loop = 0;
printf ("\n\nTotal number of items not equal to total items entered!\n\n");
free (group_total);
exit (1);
total = 0;
// Final calculations.
numerator = factr (num_items2);
for (loop = 1; loop <= num_groups2; loop++)
denominator = denominator * group_total[loop][2];
total = numerator / denominator;
printf ("\n\nTotal factorial for this mess: %u\n\n", total);
total = 0;
loop = 0;
free (group_total);
// Factorial engine.
unsigned long factorial_engine (unsigned long n)
unsigned long answer;
if (n==1) return (1);
answer = n==0 ? 1: n * factr (n - 1); // recursive call
return (answer);
But now when I tried to reference a subscript in group_total it gives me "subscript requires array or pointer type". I think it's telling me I didn't declare my pointer as a 2D array.
I looked up that link you gave me about multi-dimensional arrays, but I don't understand it.
Last edited by Lionmane; 06-09-2005 at 09:40 AM.
Looking at your code:
Arrays are indexed from 0 to N-1, not 1 to N.
You forgot to change your code for this.
>void main ()
main return an int:
int main(void)
> if (total = num_groups2);
This statement contains two errors:
1) = means assignment not equality.
2) The semicolon at the end means everything following the if() gets executed regardless of the outcome of the condition.
> group_total[loop][1] = 0;
> group_total[loop][2] = 0;
If you're not using the second element, then just make this one dimension.
group_total[loop] = 0;
Last edited by swoopy; 06-09-2005 at 09:50 AM.
> unsigned long *group_total;
> group_total = malloc (num_groups2*2*sizeof(int));
If you're trying to malloc the equivalent of unsigned long group_total[num_groups2][2];
This is what you do
unsigned long (*group_total)[2];
group_total = malloc ( num_groups2 * sizeof *group_total );
Be very careful here, (*group_total)[2] is NOT the same thing as *group_total[2], so don't go leaving out the () for the fun of it, it simply won't work.
By the way, this only works if your minor dimensions are constant. If they vary, then you need to start with unsigned long **group_total
Then you need to fix all your loops to index from 0 to N-1
for ( i = 0 ; i < num_groups2 ; i++ ) {
for ( j = 0 ; j < 2 ; j++ ) {
group_total[i][j] = 0;
When you're done, it's simply
free( group_total );
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
Thanks guys! I'm still not understanding pointers very well it seems... :-(
The "C for Dummies" has little to no info about pointers. I have "The Complete Reference: C" by Herbert Schildt but I couldn't figure it out with that either.
Anyway, I was finally able to compile without any errors but now I'm getting a pop-up window that says "Debug Error!" and gives me some obscure message. The program runs fine until it begins
calculating the factorials.
The error is "Damage: after Normal block (#44) at 0x00431DF0." Any ideas there? Sorry to ask so many questions. I'm trying to learn C pretty much on my own.
If the size of the array varies, you have to dynamically allocate the rows and columns:
//allocate 1D array of pointers, one pointer per row
a = (int **) calloc(row, sizeof(int *));
//allocate one column element array of ints for each row
for (i=0; i<row; ++i)
a[i]=(int *) calloc(column, sizeof(int));
//fill the matrix
for (i=0; i< row; ++i)
for (j=0; j<column; ++j)
a[i][j] = 2;
calloc fills the array with 0's automatically. You can have the user specify what the number of rows and columns is.
See if that helps.
Don't cast malloc and calloc in C. (Or any other function that returns a void * for that matter.)
Hope is the first step on the road to disappointment.
Quzah, how do you create 2D dynamic arrays (first subscript is dynamic, second is static)? I thought malloc was used to check the memory.
I never said to not use malloc. I said don't typecast it's return.
char **twodee;
int numofrows = 5, numofcols = 10, x;
twodee = malloc( sizeof( char * ) * numofrows );
for( x = 0; x < numofrows; x++ )
twodee[ x ] = malloc( sizeof( char ) * numofcols );
The 'sizeof( char )' is just to illustrate the point, were this any other type than characters. The reason being, 'sizeof( char )' always evaluates to one, so it's not needed in this example.
Were this any other type, it would be needed there (of the appropriate type (int, float, struct foo, etc etc).
Hope is the first step on the road to disappointment.
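(Editor's note, not part of the original reply: whichever allocation style you use, the cleanup has to mirror it. The thread never shows the matching free for the row-by-row version, so here is a minimal self-contained sketch, with error checks omitted for brevity, using the same kind of names as the snippet above.)

#include <stdlib.h>

int main(void)
{
    int numofrows = 5, numofcols = 10, x;
    char **twodee;

    /* Allocate an array of row pointers, then one row at a time. */
    twodee = malloc(sizeof(char *) * numofrows);
    for (x = 0; x < numofrows; x++)
        twodee[x] = malloc(sizeof(char) * numofcols);

    /* ... use twodee[row][col] here ... */

    /* Free in reverse order: each row first, then the array of row pointers. */
    for (x = 0; x < numofrows; x++)
        free(twodee[x]);
    free(twodee);

    return 0;
}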
> The error is "Damage: after Normal block (#44) at 0x00431DF0." Any ideas there?
Yeah, you modified some memory which wasn't yours.
Typically, you walked off the end of whatever array you allocated.
Post some more code.
> The Complete Reference: C" by Herbert Schildt
Oh dear - hope you still have the receipt, and time to take it back.
A book so famed in infamy, it's got it's own buzzword
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
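(Editor's note, a sketch rather than code taken from this thread.) To make the last point concrete: in the posted program the buffer is allocated as num_groups2*2*sizeof(int) but indexed as unsigned long, and every loop runs from 1 to N instead of 0 to N-1, so the final writes land past the end of the block. That is exactly the kind of overrun that produces the "Damage: after Normal block" message. Putting Salem's pointer-to-array suggestion together with the earlier corrections, the allocation and loop skeleton could look something like this (names follow the original post; error handling kept minimal):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long num_groups = 3;            /* would normally come from scanf */
    unsigned long (*group_total)[2];         /* pointer to rows of 2 unsigned longs */
    unsigned long loop;

    group_total = malloc(num_groups * sizeof *group_total);
    if (group_total == NULL)
    {
        puts("ERROR! Not enough memory!");
        return 1;
    }

    for (loop = 0; loop < num_groups; loop++)   /* index 0 to N-1, not 1 to N */
    {
        group_total[loop][0] = 0;               /* items in this group          */
        group_total[loop][1] = 0;               /* factorial of that item count */
    }

    /* use == to compare; `if (total = num_groups);` assigns and then does nothing */

    free(group_total);
    return 0;
}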
Percent of decrease calculator
This percent of decrease calculator will get you the answer for any percent of decrease word problems.
Just enter the original amount in the box on the left and the final amount in the box on the right
For instance, if you are looking for the percent of decrease for 20 and 10, enter 20 in the box on the left and 10 in the box on the right.
Just hit the calculate button and you are good to go.
Recall that to get the percent of decrease, you need to do the following:
Percent of decrease = amount of decrease / original amount
Then, convert the answer into a percent.
For example, find the percent of decrease from 60 to 30.
The amount of decrease is 60 − 30 = 30.
The original amount is 60.
30 / 60 = 0.5
Multiply 0.5 by one hundred to get the answer as a percent:
0.5 × 100 = 50, so the answer is 50%.
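If you like to double-check this kind of calculation with a short program, here is a small illustrative C snippet (an editorial addition, not part of this page) that follows the same two steps: divide the amount of decrease by the original amount, then multiply by one hundred.

#include <stdio.h>

/* percent of decrease = (original - final) / original, expressed as a percent */
double percent_of_decrease(double original, double final_amount)
{
    double decrease = original - final_amount;   /* step 1: amount of decrease   */
    return decrease / original * 100.0;          /* step 2: convert to a percent */
}

int main(void)
{
    /* same example as above: from 60 down to 30 */
    printf("%.0f%%\n", percent_of_decrease(60.0, 30.0));   /* prints 50% */
    return 0;
}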
Need a Quick Answer to your Basic Mathematics Problems?
Get an answer in 10 minutes or less from a math expert!
Justanswer features top-notch math experts handpicked by personnel after they have taken and passed a rigourous math test and after their credentials have been verified by a third party
Most math experts have bachelor's or master's degree in math or a related field
I am also an expert for justanswer. If you want me to answer your questions, sign in, browse the list of math experts, select my name or ask for me (Jetser Carasco) before sending your question(s)
Justanswer is 100% RISK FREE.You Pay Only for the Answers You Like. Fees are Typically $9-$15 | {"url":"http://www.basic-mathematics.com/percent-of-decrease-calculator.html","timestamp":"2014-04-16T07:14:09Z","content_type":null,"content_length":"38172","record_id":"<urn:uuid:64f5068a-571b-47aa-a82d-5ffd1f977f74>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Department, Princeton University
Mathematical Physics Seminar - Antti Knowles, Courant Institute - Quantum diffusion and delocalization for random band matrices
I give a summary of recent progress in establishing the diffusion approximation for random band matrices. We obtain a rigorous derivation of the diffusion profile in the regime W > N^{4/5}, where W
is the band width and N the dimension of the matrix. As a corollary, we prove complete delocalization of the eigenvectors. Our proof is based on a new self-consistent equation for the Green function.
Joint work with L. Erdos, H.T. Yau, and J. Yin.
Location: Jadwin A06
Date/Time: 03/05/13 at 4:30 pm - 03/05/13 at 6:00 pm
Category: Mathematical Physics Seminar
Department: Physics | {"url":"http://www.princeton.edu/physics/events_archive/viewevent.xml?id=577","timestamp":"2014-04-21T05:23:01Z","content_type":null,"content_length":"10130","record_id":"<urn:uuid:10841faf-5048-493b-acd2-1f56dcd33393>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Departmental Acquisitions: Math / Computer Science
• Title: 99 points of intersection : examples - pictures - proofs /by Hans Walser ; translated from the original German by Peter Hilton and Jean Pedersen.
• Title: Algebraic geometry in coding theory and cryptography / Harald Niederreiter and Chaoping Xing.
• Title: Always on : how the iPhone unlocked the anything - anytime - anywhere future--and locked us in /Brian X. Chen.
• Title: America the vulnerable : inside the new threat matrix of digital espionage, crime, and warfare /Joel Brenner.
• Title: Analysis on fractals / Jun Kigami.
• Title: Beginning 3D game development with Unity : the world's most widely used multi-platform game engine /Sue Blackman.
• Title: The Cambridge dictionary of statistics / B.S. Everitt, A. Skrondal.
• Title: The changing shape of geometry : celebrating a century of geometry and geometry teaching /edited on behalf of the Mathematical Association by Chris Pritchard.
• Title: Cloud computing : SaaS, PaaS, IaaS, virtualization, business models, mobile, security and more /Kris Jamsa.
• Title: Cluster computing for robotics and computer vision / Damian M. Lyons.
• Title: Computational fairy tales / Jeremy Kubica.
• Title: Computer forensics : cybercriminals, laws, and evidence /by Marie-Helen Maras.
• Title: Computer network security and cyber ethics / Joseph Migga Kizza.
• Title: Computing with C# and the .NET Framework / Art Gittleman.
• Title: Crafting by concepts : fiber arts and mathematics /edited by Sarah-Marie Belcastro, Carolyn Yackel.
• Title: Crashes, crises, and calamities : how we can use science to read the early-warning signs /Len Fisher.
• Title: Cyber warfare : techniques, tactics and tools for security practitioners /Jason Andress, Steve Winterfeld ; Russ Rogers, technical editor ; foreword by Stephen Northcutt.
• Title: Data structures using Java / Duncan A. Buell.
• Title: Digital universe : the global telecommunication revolution.
• Title: Discrete and computational geometry / Satyan L. Devadoss and Joseph O'Rourke.
• Title: Distributed and cloud computing : from parallel processing to the Internet of things /Kai Hwang, Geoffrey C. Fox, Jack J. Dongarra.
• Title: Div, grad, curl, and all that : an informal text on vector calculus /H.M. Schey.
• Title: Elements of computer security / David Salomon.
• Title: Elliptic tales : curves, counting, and number theory /Avner Ash, Robert Gross.
• Title: Emmy Noether's wonderful theorem / Dwight E. Neuenschwander.
• Title: Explorations in complex analysis / Michael A. Brilleslyper ... [et al.].
• Title: Final Jeopardy : man vs. machine and the quest to know everything /Stephen Baker.
• Title: The fractalist : memoir of a scientific maverick /Benoit B. Mandelbrot.
• Title: Galois theory / David A. Cox.
• Title: Girls get curves : geometry takes shape /Danica McKellar.
• Title: Gottfried Wilhelm Leibniz : the polymath who brought us calculus /M.B.W. Tent.
• Title: Graph algorithms / Shimon Even.
• Title: Grids, clouds and virtualization / Massimo Cafaro, Giovanni Aloisio, editors.
• Title: A guide to elementary number theory / Underwood Dudley.
• Title: A guide to experimental algorithmics / Catherine C. McGeoch.
• Title: A guide to topology / Steven G. Krantz.
• Title: Hacking the future : privacy, identity, and anonymity on the Web /Cole Stryker.
• Title: Hidden harmonies : the lives and times of the pythagorean theorem /Robert Kaplan and Ellen Kaplan ; illustrations by Ellen Kaplan.
• Title: Histories of computing / by Michael Sean Mahoney ; edited and with an introduction by Thomas Haigh.
• Title: The history of mathematics : a very short introduction /Jacqueline Stedall.
• Title: History of mathematics : highways and byways /by Amy Dahan-Dalmedico and Jeanne Peiffer ; translated by Sanford Segal.
• Title: How groups grow / Avinoam Mann.
• Title: How to fold it : the mathematics of linkages, origami, and polyhedra /Joseph O'Rourke.
• Title: Hypernumbers and extrafunctions : extending the classical calculus /Mark Burgin.
• Title: Idea man : a memoir by the cofounder of Microsoft /Paul Allen.
• Title: In pursuit of the traveling salesman : mathematics at the limits of computation /William J. Cook.
• Title: In pursuit of the unknown : 17 equations that changed the world /Ian Stewart.
• Title: Insight through computing : a MATLAB introduction to computational science and engineering /Charles F. Van Loan, K.-Y. Daisy Fan.
• Title: An interdisciplinary introduction to image processing / Steven L. Tanimoto.
• Title: An invitation to mathematics : from competitions to research /Dierk Schleicher, Malte Lackmann, editors.
• Title: The irrationals : [a story of the numbers you can't count on] /Julian Havil.
• Title: Java illuminated : an active learning approach /Julie Anderson and Hervé Franceschi.
• Title: The joy of x : a guided tour of math, from one to infinity /Steven Strogatz.
• Title: Laboratories in mathematical experimentation : a bridge to higher mathematics /Mount Holyoke College.
• Title: Learning Web design : a beginner's guide to HTML, CSS, JavaScript, and web graphics /Jennifer Niederst Robbins.
• Title: Lost in a cave : applying graph theory to cave exploration /Richard L. Breisch.
• Title: Loving + hating mathematics : challenging the myths of mathematical life /Reuben Hersh and Vera John-Steiner.
• Title: Machine learning : an algorithmic perspective /Stephen Marsland.
• Title: Macs translated for PC users / Dwight Spivey.
• Title: Magical mathematics : the mathematical ideas that animate great magic tricks /Persi Diaconis and Ron Graham ; with a foreword by Martin Gardner.
• Title: The man of numbers : Fibonacci's arithmetic revolution /Keith Devlin.
• Title: The manga guide to linear algebra / Shin Takahashi, Iroha Inoue, Trend-pro Co. Ltd.
• Title: Mathematical and algorithmic foundations of the internet / Fabrizio Luccio, Linda Pagli, with Graham Steel.
• Title: A mathematical look at politics / E. Arthur Robinson, Daniel Ullman.
• Title: A mathematical tapestry : demonstrating the beautiful unity of mathematics /Peter Hilton, Jean Pedersen ; with illustrations by Sylvie Donmoyer.
• Title: Mathematics by experiment : plausible reasoning in the 21st century /Jonathan Borwein, David Bailey.
• Title: Mathematics for 3D game programming and computer graphics / Eric Lengyel.
• Title: Mining of massive datasets / Anand Rajaraman, Jeffrey David Ullman.
• Title: Network information theory / Abbas El Gamal, Young-Han Kim.
• Title: Neuromorphic and brain-based robots / Jeffrey L. Krichmar, Hiroaki Wagatsuma.
• Title: Nine algorithms that changed the future : the ingenious ideas that drive today's computers /John MacCormick ; with a foreword by Chris Bishop.
• Title: Origami 5 : Fifth International Meeting of Origami Science, Mathematics, and Education /edited by Patsy Wang-Iverson, Robert J. Lang, Mark Yim.
• Title: Origami, Eleusis, and the Soma cube : Martin Gardner's mathematical diversions /Martin Gardner.
• Title: Practical applications of data mining / Sang C. Suh.
• Title: Practical malware analysis : the hands-on guide to dissecting malicious software /by Michael Sikorski and Andrew Honig.
• Title: Probabilistic graphical models : principles and techniques /Daphne Koller and Nir Friedman.
• Title: Probability, Markov chains, queues, and simulation : the mathematical basis of performance modeling /William J. Stewart.
• Title: Proofs without words : exercises in visual thinking /Roger B. Nelsen.
• Title: Proofs without words II : more exercises in visual thinking /Roger B. Nelsen.
• Title: Protecting your internet identity : are you naked online? /Ted Claypoole and Theresa Payton ; foreword by Chris Swecker.
• Title: The secrets of triangles : a mathematical journey /by Alfred S. Posamentier and Ingmar Lehmann.
• Title: Security and game theory : algorithms, deployed systems, lessons learned /Milind Tambe.
• Title: Sources in the development of mathematics : infinite series and products from the fifteenth to the twenty-first century /Ranjan Roy.
• Title: Sphere packing, Lewis Carroll, and reversi : Martin Gardner's new mathematical diversions /Martin Gardner.
• Title: Steve Jobs / Walter Isaacson.
• Title: A strange wilderness : the lives of the great mathematicians /Amir D. Aczel.
• Title: The tangled Web : a guide to securing modern Web applications /Michal Zalewski.
• Title: Thirty-three miniatures : mathematical and algorithmic applications of linear algebra /Jiří Matoušek.
• Title: Turing's cathedral : the origins of the digital universe /George Dyson.
• Title: Unity 3.x game development essentials : game development with C# and Javascript /Will Goldstone.
• Title: Using SPSS : an interactive hands-on approach /James B. Cunningham, James O. Aldrich.
• Title: Web 2.0 and beyond : principles and technologies /Paul Anderson.
• Title: Web data management / Serge Abiteboul ... [et al.].
• Title: Webbots, spiders, and screen scrapers : a guide to developing Internet agents with PHP/CURL /by Michael Schrenk.
• Title: Who's number one? : the science of rating and ranking /Amy N. Langville and Carl D. Meyer.
• Title: World wide mind : the coming integration of humanity, machines and the internet /Michael Chorost.
• Title: Worm : the first digital world war /Mark Bowden.
• Title: X and the city : modeling aspects of urban life /John A. Adam. | {"url":"http://mwa.edinboro.edu/library/printer_friendly.php?term=EIB_MACS&dept=Math%20/%20Computer%20Science&fy=2012","timestamp":"2014-04-18T15:38:35Z","content_type":null,"content_length":"10269","record_id":"<urn:uuid:13f9d1d1-4dcf-4c38-b1f1-d07edea4fece>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 47
how to solve j = 7k + 5
Quantum physics
Thank you ppl!
Quantum physics
rare, for q2 - just multiply the two qubit states. Answer to q2: 1/sqrt(2)*3/5 ----> |00>, (3*i)/(5*sqrt(2)) ----> |01>, 4/(5*sqrt(2)) ----> |10>, (4*i)/(5*sqrt(2)) ----> |11>. Please, give us
answers to question 6! Thanks
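(Editorial note: these amplitudes are exactly what a tensor product of two single-qubit states gives. Assuming the two states in the problem were (3/5)|0⟩ + (4/5)|1⟩ and (1/√2)(|0⟩ + i|1⟩), which is an assumption since the problem statement is not reproduced here, then
$$\Big(\tfrac{3}{5}|0\rangle + \tfrac{4}{5}|1\rangle\Big)\otimes\tfrac{1}{\sqrt{2}}\big(|0\rangle + i|1\rangle\big) = \tfrac{3}{5\sqrt{2}}|00\rangle + \tfrac{3i}{5\sqrt{2}}|01\rangle + \tfrac{4}{5\sqrt{2}}|10\rangle + \tfrac{4i}{5\sqrt{2}}|11\rangle,$$
which matches the four coefficients listed above.)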
Quantum physics
ank, The answer to question 5 is 1/(sqrt(3)) in the first box and (2*i)/sqrt(6) in the second box.
Quantum physics
For question 9 the answer is CNOT12, CNOT23 Please give us the answer to question 4 and 6! Thank you!
Physics help
Fortune tellers use crystal balls to see the future. A fortune teller has a crystal ball with an index of refraction of 1.5 and diameter of 0.2 m. You sit on one side of the ball and the fortune
teller sits on the other. She holds up a small red jewel on her side in the equato...
Physics help
Fortune tellers use crystal balls to see the future. A fortune teller has a crystal ball with an index of refraction of 1.5 and diameter of 0.2 m. You sit on one side of the ball and the fortune
teller sits on the other. She holds up a small red jewel on her side in the equato...
Physics help
The galaxies in the universe are all flying away from each other. The speeds of nearby galaxies are proportional to the distance the galaxy is away from us. This relation, v=Hd is known as Hubble's
law and the constant H is known as Hubble's constant. The evolution of ...
Let S be the set of {(1,0),(0,1),(1,1),(1,−1),(−1,1)}-lattice paths which begin at (1,1), do not use the same vertex twice, and never touch either the x-axis or the y-axis. Let P_{x,y} be the number of
paths in S which end at the point (x,y). Determine P_{2,4}. Details an...
x and y are positive real numbers that satisfy log_x(y) + log_y(x) = 17/4 and xy = 288√3. If x + y = a + b√c, where a, b and c are positive integers and c is not divisible by the square of any prime, what is the value
of a+b+c?
The number √(2+√3+√5) is algebraic because it is a root of a monic polynomial of degree 8, namely x^8+ax^7+bx^6+cx^5+dx^4+ex^3+fx^2+gx+h. Find |a|+|b|+|c|+|d|+|e|+|f|+|g|+|h|.
The sequence {a_k}, k = 1, …, 112, satisfies a_1 = 1 and a_n = 1337 + n/a_(n−1), for all positive integers n. Let S = ⌊a_10·a_13 + a_11·a_14 + a_12·a_15 + ⋯ + a_109·a_112⌋. Find the remainder when S is divided by 1000.
Find the number of ordered pairs of distinct positive primes p, q (p≠q) such that p^2+7pq+q^2 is the square of an integer.
geometry help
wrong answer..
Well I just asked for an explanation and not an answer but thanks! It helps a lot.
Please help this is due today and I'm really confused.
Please help this is due today and I'm really confused.
19. Point A(4, 2) is translated according to the rule (x, y) → (x + 1, y − 5) and then reflected across the y-axis. a) In which quadrant of the coordinate plane is point A located? b) What are the
coordinates of translated point A′? In which quadrant of the coordinate ...
Science help.
Oh, okay. Got it. Thanks.
Help Damon Ms.Sue
Econometric -SAS
I am having trouble figuring out the correct code. I am trying to do an unrestricted and restricted model of a Cobb-Douglas Production Model. lnQ=B1 + B2 lnL + B3 lnK + e This is my code data cobb;
infile 'cobb'; input q l k; proc reg data = cobb; model q = l k; <--...
Well... for some reason it wants pairs.... I put 46 cause I saw 23 pairs so i times it by 2 and got it wrong... >=\
Yegh, thats because its 23 PAIRS
D A C C D A 100% correct
lol, Im doing the same test right now... Ill tell you the answers once im done...
Well, it depends what you want to learn. There are three games that I love to play. One teaches marketing(Social-paradise). The other teaches marking too and it teaches you how to run a business
(weseed). The last one teaches programming games(roblox).
Thank you, this makes a lot of sense.
5h 9 = 16 + 6h Answer Choices: 4 7 7 10 I need a step-by-step response cause I'm having a hard time understanding.
7th gade math Ms. Sue please
You go to conections academy dont you?
This is Urgent Science
I go to CenCA Connections Academy. lol xD
Think about it, how did you get to this question by using your mouse?
Mine had different possible answers, mine was 1/210. I'm not sure how it got there but thats what it was. Sorry i prob didnt help much.
Math kindergarten
I love math and you should too!! Hope that answers your question!!
I have an MSA math question. What scale would you use for making a frequency table of the following data 21,79,11,9,55,38,111, and 92? Please answer it I need the question now.
5th grade math
i dunno
In the Crucible, At what point does Abigail first begin feeling cold? What effect does this event have on Danforth? What does it suggest about her motives?
5th grade
i dunno
1 way, 4 groups of three
Quotes in essays
DO NOT CENTER!!!!!!
hmmm. well I finsihed this, losers!!!!
book help
i have to write a book and i am in 9th grade and it has to be able to follow the Hero's journey from the odessey but idk how to write a book can somebody give me a step by step guide that is really
specific and helpful PLEASE!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Yeah.... um i need help with that too. My brother is in 11th grade and he can't do it so....
can somebody help me pick out key topics for the begginning of the cicil war Best thing to do with stuff like this, since you already have a textbook, is learn how to scan for the information. Look
through and see if you have titles for subsections in the chapters. You're ...
HHHHHHHHHHHHEEEEEEEEELLP! Social Studies!!!!!
I heard that i m having a pop quiz today on the beginning of the civil war. my textbook is called creating america. i need the vocab words for that chapter (15-3 and 15-4). can someone plz help me
find website to study on or i m going to be in so much trouble. Go through and f...
Math: Direct Variation
I really don't get direct variation and I took a test on it and bombed it but now I can correct it so if anyone can just explain it to me I'd be really grateful. :) Generally, direct variation means
that as one of the variables assumes increasing positive values the ot... | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Somebody","timestamp":"2014-04-19T20:43:07Z","content_type":null,"content_length":"14910","record_id":"<urn:uuid:eaa4899d-b5b2-4640-9fe3-80128685f178>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
How I Scored a 780 on the GMAT
Archive | April, 2011
APRIL 23, 2011
Quantitative 33 36% 35Q | 17 Incorrect
Verbal 34 68% 41Q | 13 Incorrect
TOTAL 560
- – - – - – - – -
MARCH 01, 2011
Quantitative 37Q | 24 Incorrect
Verbal 41Q | 10 Incorrect
TOTAL 490
- – - – - – - – -
Great day of reviewing Basic Math Video series. Also, got back into Crossfit!
MATERIALS: Basic Math Video Series
STUDY HOURS: 12
TOTAL HOURS: 76
GMAT Divisibility & Primes kicked my butt today. Need to go back and carefully review my Basic Math and Foundations of GMAT Math materials. Hopefully, this will also give me a confidence boost to
tackle this math!
On a good note, I ran outside for 30 minutes today. Great stress relief and the first major fitness activity I’ve done in two months. Tomorrow will be my return to Crossfit!
MATERIALS: Manhattan GMAT Book #1 – Number Properties
STUDY HOURS: 3
TOTAL HOURS: 65
My MBA study plan is getting on track. I studied the majority of the day and even completed my first set of “Official Guide” problems! Shocking revelation. GMAT Math is much harder than I
anticipated. Not technically demanding but extremely complex theoretically. I will really need to dig into this GMAT Math…
MATERIALS: Manhattan GMAT Book #1 – Number Properties
STUDY HOURS: 6
TOTAL HOURS: 62
This is the beginning of my journey! Two months ago, I took a practice GMAT Computer Adaptive Test (CAT) and scored a 490. Yes, this is a very low score. However, I took the test with no preparation,
at midnight, after a very long day at work so I’m sure my next diagnostic test will be much higher.
I have already completed the Manhattan GMAT “Fundamentals of GMAT Math” workbook and two online “Fundamentals” workshops. In February, I devoted 47 hours to GMAT fundamentals preparation.
Unfortunately, I have forgotten 70% of this content due to my 8-week break!
My goals for tonight are as follows:
• Thoroughly Review Manhattan GMAT Book 1: Number Properties – Chapter One
• Create a blogroll of essential GMAT websites and motivational resources
Current Mood:
• Anxiety over balancing GMAT workload with my demanding (read: highly stressful) marketing agency job
STUDY HOURS: 3
TOTAL HOURS: 50
My goal is to get a 780 on the GMAT. Here we go!! | {"url":"http://780gmatblog.com/2011/04/","timestamp":"2014-04-21T04:37:46Z","content_type":null,"content_length":"33415","record_id":"<urn:uuid:94fa0d43-6ed5-4d9b-99a2-d8542f766105>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00333-ip-10-147-4-33.ec2.internal.warc.gz"} |
Speeding tickets for R and Stata
How fast is R? Is it as fast in executing routines as the other off-the-shelf software, such as Stata? After some comparative experimentation, I found Stata to be 5 to 8 times faster than R. For me,
speed has not been a concern in the past. I had used R with smaller datasets...
Video Tutorial on IV Regression
Update: I am working on a better augmentation of the current IV regression functions (specifically ivreg() in AER) in R. I will post a link here to my new method/function for IV regression when I
finish debugging the code.Update 2: [15 Ma...
Quality comparison of floating-point maths libraries
What is the best way to compare the quality of floating-point math libraries (e.g., sin, cos and log)? The traditional approach for evaluating the quality of an algorithm implementing a mathematical
function is based on mathematics; methods have been developed to calculate the maximum error between the calculated and the actual value. The answer produced
Mixtures in Madrid
As I already did two years ago, in connection with the double degree between UAM and Dauphine, I will give a short graduate course at the Universidad Autonoma de Madrid (UAM). It will be part of the
regular fourth year statistics course and will focus on mixtures, as given in of Bayesian Core. It will
Taipei, July 4-6, 2011 – Financial Optimization and Advanced Portfolio Analysis
Kuala Lumpur, March 29-31, 2011 – Portfolio Management and Optimization
Adjust branch lengths with node ages: comparison of two methods
Here is an approach for comparing two methods of adjusting branch lengths on trees: bladj in the program Phylocom and a fxn written by Gene Hunt at the Smithsonian.Get the code and example files
here: http://wp.me/PRT1F-2vGet phylocom here: http:/... | {"url":"http://www.r-bloggers.com/2011/04/page/25/","timestamp":"2014-04-21T07:24:13Z","content_type":null,"content_length":"37790","record_id":"<urn:uuid:9998fc8f-938d-4c59-b1a5-85cf8d1ad78b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alternate Proof
The Fourier transform of a complex Gaussian can also be derived using the differentiation theorem and its dual (§B.2).^D.1
Proof: Let
$$f(t) \triangleq e^{-pt^2} \;\longleftrightarrow\; F(\omega).$$
Then by the differentiation theorem (§B.2),
$$f'(t) \;\longleftrightarrow\; j\omega F(\omega).$$
By the differentiation theorem dual (§B.3),
$$(-jt)\,f(t) \;\longleftrightarrow\; F'(\omega), \quad\text{i.e.,}\quad t\,f(t) \;\longleftrightarrow\; jF'(\omega).$$
Differentiating gives
$$f'(t) = -2pt\,e^{-pt^2} = -2p\,[t\,f(t)] \;\longleftrightarrow\; -2p\,jF'(\omega),$$
so that $j\omega F(\omega) = -2p\,jF'(\omega)$, or $F'(\omega)/F(\omega) = -\omega/(2p)$.
Integrating both sides with respect to $\omega$ yields
$$\ln F(\omega) = -\frac{\omega^2}{4p} + \ln F(0).$$
In §D.7, we found that $F(0) = \sqrt{\pi/p}$, so that, finally, exponentiating gives
$$F(\omega) = \sqrt{\frac{\pi}{p}}\; e^{-\omega^2/(4p)},$$
as expected.
The Fourier transform of complex Gaussians (``chirplets'') is used in §10.6 to analyze Gaussian-windowed ``chirps'' in the frequency domain.
Reseda SAT Math Tutor
Find a Reseda SAT Math Tutor
...I have tutored several students privately in algebra 2, all of whom have scored well above average in their classes. I have also tutored Algebra 2 through 3 different tutoring companies over
the past few years, 2 of which were through a Federally funded NCLB program. I have a BS from USC in Computer Science and Computer Engineering.
18 Subjects: including SAT math, geometry, algebra 1, GRE
...Tutoring is my passion and I always look for an opportunity to aid a student, to improve his or her skills, and to bring out his or her talent. Albert Einstein said once: "It is the supreme art
of the teacher to awaken joy in creative expression and knowledge."I have been tutoring Chemistry at ...
11 Subjects: including SAT math, chemistry, geometry, algebra 1
...I have tutoring experience with children (ages 5 and up) and adults. I especially love tutoring in English (including English as a Second Language, Reading, and Writing), many levels of
Mathematics, and Psychology. I also offer standardized test preparation tutoring.
44 Subjects: including SAT math, English, reading, writing
...I also have ample tutoring experience, having worked with students ranging from the son of a high-level diplomat to the United States to troubled inner-city youth through the Los Angeles
Unified School District's Program for Youth who are Neglected, Delinquent, or at Risk of Dropping Out.I have a...
23 Subjects: including SAT math, reading, writing, English
...When my a cappella group was recording in the studio, the studio engineer whispered to my friend that I could probably work there because I was so attuned to hearing a correct pitch. This skill
has been developed over several years: In high school I took AP Music Theory, during which I spent a ...
18 Subjects: including SAT math, chemistry, calculus, algebra 2 | {"url":"http://www.purplemath.com/Reseda_SAT_Math_tutors.php","timestamp":"2014-04-16T16:14:03Z","content_type":null,"content_length":"23808","record_id":"<urn:uuid:853bb6c1-d80c-4337-8e4d-acf377adaec9>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
HPS 0410 Einstein for Everyone
John D. Norton
Department of History and Philosophy of Science
University of Pittsburgh
Why Spacetime?
So far all our discussion in special relativity has involved the motion of bodies in space over time. If you haven't already noticed, these motions can become rather complicated to visualize. Recall
how tough it is to keep track of what the different ends of a moving rod are doing as a light signal bounces back and forth between them.
In 1907 the mathematician Hermann Minkowski explored a way of visualizing these processes that proved to be especially well suited to disentangling relativistic effects. This was their representation
in spacetime. Quite puzzling relativistic effects could be comprehended with ease within the spacetime representation and work in the theory of relativity started to be transformed into work on the
geometry of spacetime.
Building a Spacetime
We build a spacetime by taking instantaneous snapshots of space at successive instants of time and stacking them up. It is easiest to imagine this if we start with a two dimensional space. The
snapshots taken at different times are then stacked up to give us a three dimensional spacetime. In this spacetime, a small body at rest will be represented by a vertical line. To see why it is
vertical, recall that it has to intersect each instantaneous space at the same spot. A vertical line will do this. If it is moving, it will intersect each instantaneous space at a different spot; a
moving body is presented by a line inclined to the vertical.
A standard convention (that I will usually use) is to represent trajectories of light signals by lines at 45° to the vertical.
In the figure, a moving rod is represented by the trajectories in spacetime of its ends. The zig-zag line is a light signal bouncing back and forth between these two ends.
Here's another example. Take snapshots of the earth orbiting the sun in the three dimensional space around the sun in the course of a year. Now we stack them up into the third dimension. When we clean things up a little, we have a spacetime.
So far we have described how a two dimensional space is combined with the one extra dimension of time to generate a three dimensional spacetime, such as shown above in the figures. Our space is three
dimensional. So when we add the extra dimension of time we generate a four dimensional spacetime.
There is no easy way to draw a picture of a four dimensional spacetime. Visualizing it can be very hard. But that does not make it mysterious. It is just another sort of space that happens to
transcend simple visualization. In physics, four dimensions are actually quite modest. In statistical mechanics, we routinely deal with phase spaces of 6 x number of molecules in a gas sample. For
even small samples of gas, that can come to 10^25--a space with 10000000000000000000000000 dimensions. So we should not be too awed by a mathematical space with only four dimensions!
Light Cones
That the speed of light is a constant is one of the most important facts about space and time in special relativity. That fact gets expressed geometrically in spacetime geometry through the existence
of light cones, or, as it is sometimes said, the "light cone structure" of spacetime.
To see that structure, we imagine an event at which there is an explosion. Light will propagate out from it in an expanding spherical shell. In a two dimensional space, it will look like an expanding
circle, as shown below.
An animation makes the motion more visible.
Now stack up these spatial snapshots to make a spacetime. The spacetime diagram that corresponds to it looks like a cone. As we proceed up the cone, we look in each instantaneous space to see how far
the light has propagated. Each intersection of the cone with the space will be a circle.
In the figure, the expanding circle of light is represented by the top half of the cone. It is customary to draw in the bottom half of the cone, although it is not part of the expansion of the light.
In fact it represents the opposite. It depicts a circle of light collapsing in towards the original event at the apex of the cone. Here is that collapse, presented also as an animation.
A final animation now shows the association between the different stages of the collapsing and expanding light shell and the cross-sections of the light cone.
To have a light cone, we do not need light to be present. The cones map out the trajectories light would take if light were to be present. It is just the possibilities that are mapped out, not necessarily the trajectories of actual light. Spacetime still has a light cone structure in the dark!
Light Cones Everywhere
To describe the light cone just now, we picked an event in spacetime and imagined all the possible trajectories that light could take in propagating through that event. We could have picked any event
in the spacetime. We would have found light cones at every one of them. That means that spacetime is completely filled with light cones. There is one at every event.
The Right Terminology
There is much potential for confusion in talking about spacetimes. As a result a fairly precise vocabulary has been built up and it is important to use it correctly. Pay attention to the following terms.
Spacetime: When we add the extra dimension of time to a space, we produce a spacetime.
Minkowski spacetime: There is nothing special about a spacetime. They can arise in classical physics. So if we mean a spacetime that also behaves the way special relativity demands, then we have a Minkowski spacetime. (Note for later: when we look at general relativity, we will meet spacetimes that are relativistic but not Minkowski spacetimes.)
Event: These are the individual points of a spacetime. They represent points in space at a particular time.
Timelike worldline: This is the trajectory of a point moving less than the speed of light. These curves are contained within the light cone. They represent the trajectories of massive particles.
Lightlike curve: This is the trajectory of a point moving at the speed of light--a light signal. They lie on the surface of the light cone.
Spacelike curve: This is a curve that lies outside the light cone. If an object is to make this curve its trajectory, it would need to travel faster than light.
Spacelike hypersurfaces: These are the instantaneous spatial snapshots of spacetime. They are three dimensional in the case of a four dimensional spacetime.
Past and future light cones: All the lightlike curves through an event form the light cone at that event. The part of the cone to the future of that event is the future light cone. The part to the past is the past light cone.
Light cone structure: Since the speed of light is generally taken to be the fastest that causes can propagate their effects, once we know how the light cones are distributed in space we can say a great deal about what is possible and impossible causally in the spacetime. So this distribution is of great interest to us. It is called the light cone structure of the space.
Timelike geodesic: This term will be defined later.
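For readers who like a formula (this summary is an editorial addition, not part of Norton's text): writing Δt, Δx, Δy, Δz for the time and space separations between an event O and some other event, the three cases above can be sorted by the sign of the quantity
$$s^2 \;=\; c^2\,\Delta t^2 \;-\; \Delta x^2 \;-\; \Delta y^2 \;-\; \Delta z^2 .$$
If s² > 0 the second event lies inside the light cone of O (timelike separation); if s² = 0 it lies on the cone itself (lightlike separation); and if s² < 0 it lies outside the cone, in the "elsewhere" region discussed below (spacelike separation).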
What Connects with What
Knowing the light cone structure of a spacetime tells us what connects with what. Its importance is comparable to what is represented on ordinary maps of countries. Here is a map of ancient Greece:
Greece at the beginning of the Peloponnesian war. p. 17 in W. S. Shepherd, Historical Atlas, New York: Henry Holt & Co., 1911.
We read from the map that, if we are in Greece, we can travel North and eventually arrive in Thrace. However we cannot get to Crete; or at least we cannot get to Crete by land. We would need to cross
a sea.
The light cone structure catalogs analogous "what connects with what" information for a spacetime. It reveals its most basic causal structure.
To see how this works, pick any event "O" in the spacetime. The future light cone at O contains all the events in the spacetime
that can be reached from O by future directed timelike or lightlike curves. If we make the usual assumption that all causal
processes propagate at or less than the speed of light, we conclude that these are all the events that we can causally affect
from O.
More simply, if you are at O, the events in the forward light cone are just those events that you can reach with a physical
signal, such as an ordinary particle or a light flash that you may emit.
For that same event O, the past light cone contains all the events in
spacetime from which one can reach event O by future directed
timelike or lightlike curves.
Assuming again that all causal processes propagate at less than or
equal to the speed of light, we conclude that this past light cone
contains all the events that can causally affect event O.
More simply, if you are at any event in the past light cone of O, you
can always send to O a physical signal consisting of an ordinary
particle or a light flash.
The remaining region of the spacetime is outside both past and future light cones. It is a new sort of region that does not
appear in pre-relativistic spacetimes. It is an "elsewhere" region.
It collects all events that cannot be connected to event O by timelike or lightlike curves. Its events can only be connected to
O by spacelike curves. That is, its events are "spacelike separated" from O.
If we assume that no causal processes propagate faster than light, these events are causally disconnected from O. If we are at O
we cannot causally affect or be causally affected by an occurrence at an event in this "elsewhere" region. Correspondingly, we
cannot exchange signals between the event at O and any spacelike separated event in this region.
There is no corresponding region in a pre-relativistic spacetime. In Newtonian theory, it is assumed that there are propagations
that are arbitrarily fast and even instantaneous. An example of an instantaneous propagation is changes in the Newtonian
gravitational field. If the sun were to disappear, we would know instantly on earth, according to Newtonian theory, for the sun
would no longer exert a gravitational pull on us.
What you should know:
• What a spacetime is.
• The correct use of the particular terms associated with spacetimes.
• What the light cones are and how they divide up the spacetime into different regions relative to each event.
Copyright John D. Norton. January 2001, September 2002; July 2006; February 3, 2007; January 23, September 24, 2008; February 3, 2012. | {"url":"http://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters_2013_Jan_1/spacetime/index.html","timestamp":"2014-04-18T13:56:03Z","content_type":null,"content_length":"18902","record_id":"<urn:uuid:94ef66c4-5d8e-46f2-b8a5-ceee4a0b7c49>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
The injectivity of a function defined on a set of partitions
October 14th 2012, 05:11 AM #1
The injectivity of a function defined on a set of partitions
My name is Michael, I'm a freshman CS student and I'm stuck with this problem. I could really use your help:
Let T be a non-empty set, and let A and B be two sets belonging to P(T), the set of partitions of T (for any x belonging to P(T), the complement of x is T - x).
Now, let f be a function defined on P(T) with values in the direct product P(A) x P(B), f(x) = (x intersected with A, x intersected with B).
I must prove that f is injective if and only if the union of A and B is T.
There are two more points to the problem, but I'm hoping that if I understand how to do this, I can solve the other two myself. I tried using the Cantor-Bernstein-Schroeder theorem, but it didn't get
me anywhere. I could use any help.
Many thanks,
Re: The injectivity of a function defined on a set of partitions
Let T be a non-empty set, and let A and B be two sets belonging to P(T), the set of partitions of T (for any x belonging to P(T), the complement of x is T - x).
Now, let f be a function defined on P(T) with values in the direct product P(A) x P(B), f(x) = (x intersected with A, x intersected with B).
I must prove that f is injective if and only if the union of A and B is T.
I am quite sure that you clearly understand what you posted. I do not.
It seems T is being used in several different ways.
What exactly does P(T) mean? What are the elements of P(T)?
The phrase "the set of partitions of T" has a very well defined meaning.
But it does not seem this question is using the standard definition.
Can you post an example?
Re: The injectivity of a function defined on a set of partitions
Thank you for taking interest in my problem.
To exemplify: Say T = { 1, 2, 3 }. P(T) would be: { { {1}, {2}, {3} }, { {1, 2}, {3}}, { {1, 3}, {2} }, { {1}, {2, 3} }, { 1, 2, 3} }. If x belongs to P(T), x could be any of the partitions of T.
Say x is { 1, 2 }. This would mean the complement of x would be T - x. The complement of x is {3} in this case.
In the problem, the exact elements are not specified, it's meant to be a proof for any case.
I'm sorry for not being very clear. And again, thank you for your help. I really need it.
Re: The injectivity of a function defined on a set of partitions
Thank you for taking interest in my problem.
To exemplify: Say T = { 1, 2, 3 }. P(T) would be: { { {1}, {2}, {3} }, { {1, 2}, {3}}, { {1, 3}, {2} }, { {1}, {2, 3} }, { 1, 2, 3} }. If x belongs to P(T), x could be any of the partitions of T.
Say x is { 1, 2 }. This would mean the complement of x would be T - x. The complement of x is {3} in this case.
Thank you for the clarification. But there is still some misuse of vocabulary.
You say "let A and B two sets belonging to P(T)"
That means that of A and B is a partition of T.
How can $A\cup B=T$? How can the union of two partitions of a set equal the set? Partitions are subsets of the power set of a set.
Re: The injectivity of a function defined on a set of partitions
I believe that the OP is misusing terminology here, and that he (she?) means the power set of T by P(T), NOT "partition of T".
Re: The injectivity of a function defined on a set of partitions
That may be, but it is not consistent with the example posted. The power set of a three-element set contains eight sets, whereas the third Bell number is five, as in her/his example.
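For reference, under the reading that P(T) denotes the power set of T (an assumption — the thread never settles the terminology), the claimed equivalence can be argued roughly as follows. If $A \cup B \neq T$, pick $t \in T \setminus (A \cup B)$; then for any $X \subseteq T$ we have $(X \cup \{t\}) \cap A = X \cap A$ and $(X \cup \{t\}) \cap B = X \cap B$, so $f(X) = f(X \cup \{t\})$ with $X \neq X \cup \{t\}$, and $f$ is not injective. Conversely, if $A \cup B = T$ and $f(X) = f(Y)$, then $X = X \cap (A \cup B) = (X \cap A) \cup (X \cap B) = (Y \cap A) \cup (Y \cap B) = Y \cap (A \cup B) = Y$, so $f$ is injective.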
Dynamic Joint Action Perception for Q- Learning Agents
Nancy Fulda and Dan Ventura
owens@cs.byu.edu, ventura@cs.byu.edu
Department of Computer Science
Brigham Young University
April 18, 2003
Q-learning is a reinforcement learning algorithm that learns expected utilities for state-action transitions through successive interactions with the environment. The algorithm's simplicity as well as its convergence properties have made it a popular algorithm for study. However, its non-parametric representation of utilities limits its effectiveness in environments with large amounts of perceptual input. For example, in multiagent systems, each agent may need to consider the action selections of its counterparts in order to learn effective behaviors. This creates a joint action space which grows exponentially with the number of agents in the system. In such situations, the Q-learning algorithm quickly becomes intractable. This paper presents a new algorithm, Dynamic Joint Action Perception, which addresses this problem by allowing each agent to dynamically perceive only those joint action distinctions which are relevant to its own payoffs. The result is a smaller joint action space and improved scalability of Q-learning to systems with many agents.

Keywords: Q-learning, Reinforcement Learning, Multiagent Systems
1. Introduction
Q- learning is a temporal differencing algorithm in
which the agent learns expected time- discounted rewards
Q( 8, a) for each state- action pair [ 16] The basic
Q- learning update algorithm is
tlQ = 0( r( 8t, a,) + , argnta3: a { Q( 8t+ l, an - Q( 8t, ad)
where r( St; at) is a numerical reward ( also called a payoff)
received for performing action a in state S at time 1
t, I) <:: 0 <:: 1 is the learning rate and I) <:: , <:: 1 is the
discount factor Q- learning is guaranteed to converge
to the theoretically optimal Q- values with respect to
the discount factor under specified conditions [ 15]
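As an illustration only (not code from the paper), the tabular form of this update can be sketched in a few lines of Python; the variable names here are my own choices, not the authors':

```python
from collections import defaultdict

# Q[(state, action)] -> expected discounted return; unseen entries default to 0.0
Q = defaultdict(float)

def q_update(s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update for the transition (s, a, reward, s_next)."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)  # max_a Q(s_{t+1}, a)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
```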
This algorithm requires that an agent maintain |S| * |A| distinct Q-values, where |S| is the size of the state space and |A| is the size of the action space. This representation both slows the learning speed of Q-learning systems with large state or action spaces and limits the tractability of Q-learning as state- or action-space size increases. One area where this problem arises is in the realm of distributed problem solving and multiagent systems, where many agents work together to accomplish a common goal. Such applications generally require a strong coupling between specific agents: the actions of one agent affect the payoffs received by one or more of its counterparts. Hence, each agent must take the behavior of its companions into account when estimating Q-values. Otherwise, effective system convergence cannot be achieved.

A common approach for applying Q-learning to multiagent systems is to allow each agent in the system to perceive the action selections of its counterparts [2, 4, 6]. This has proven quite effective for systems with only a few agents. However, the size of the joint action space to be represented grows exponentially with the number of agents in the system. Each agent in a system of n agents with |A| distinct actions and |S| distinct states must store |S| * |A|^n Q-values.
Some of this combinatorial explosion can be avoided through careful planning of agent couplings. As the number of agents in a system increases, the chance that all agents have an equally strong effect on each other decreases. System designers can capitalize on this tendency by allowing agents to perceive only the action selections of counterparts who significantly affect their payoffs. In a traffic light control system for city streets, for example, each agent might be allowed to perceive the color of lights at nearby intersections, but not the color of lights across town.

Such design strategies can decrease the size of each agent's perceived joint action space, but they are inapplicable in many situations for two reasons. First, the strength and structure of agent couplings may not be intuitively apparent at design time. Second, gradual change in a real-world environment may invalidate some agent couplings and generate new ones.
This paper presents Dynamic Joint Action Perception (DJAP), a Q-learning system which allows each agent to construct its own joint action space dynamically from a set of available agent actions, thus reducing the size of the joint action space without intervention from the system's designer. In the DJAP algorithm, action selections of other agents are modeled as part of an agent's individual state. A tree structure is then used to create a variable-resolution partitioning of this augmented state space. New state space distinctions are created as the agent locates percepts (actions of other agents) which have a significant effect on its payoffs.
2. Related Work
Several researchers have demonstrated the effectiveness of allowing agents in a multiagent system to perceive the action selections of their counterparts. This technique is frequently called joint action learning. Littman's Minimax-Q algorithm [6], Hu and Wellman's multiagent Q-learning algorithm [3], and Claus and Boutilier's joint action learners [2] are all examples of this technique applied to Q-learning systems. Other approaches for encouraging optimal behavior in multiagent reinforcement learning systems include policy search [13], optimistic updating techniques [5], agent modeling [14, 11], and the establishment of social conventions [12, 7].

Work on state space partitioning and variable-resolution state space representations includes McCallum's U-Tree algorithm [9] and Utile Suffix Memory algorithm [8], Munos and Moore's Parti-game algorithm [10], and Chapman's G-algorithm [1]. Each of these algorithms selectively distinguishes only those aspects of the state space which are useful in accomplishing the given task.

The research presented in this paper differs from previous research in dynamic state space partitioning because it applies the partitioning concepts to a new application: joint action learning in multiagent environments. The research extends previous research in joint action learning by addressing the issue of scalability.
3. Implementation: the Dynamic Joint
Action Perception Algorithm
Dynamic Joint Action Perception (DJAP) is a new algorithm designed to improve the tractability of Q-learning in systems with large numbers of agents. The algorithm achieves this by making three fundamental assumptions: 1) it assumes that the agents all share a common goal, 2) it assumes that some agents have a greater effect on each others' payoffs than others, and 3) it assumes a first-order correlation between the behavior of other agents and the observed payoff distributions.

These assumptions are, admittedly, restrictive. However, within the bounds of these assumptions Dynamic Joint Action Perception is able to learn effective strategies in environments with many interacting agents. Improvements to the algorithm may enable a relaxation of these requirements.

The Dynamic Joint Action Perception algorithm uses a decision tree to create a variable resolution representation of the joint action space. This process is similar to that used by Andrew McCallum's U-Tree algorithm [9]. The primary distinction is that U-Tree uses a statistical test to determine which percepts are relevant, while DJAP uses expected average increase in payoff. This simplification in the DJAP algorithm makes it less resource intensive.
3.1. DJAP Tree Structure
In the DJAP algorithm, action selections of other agents are modeled as potential percepts which may be used when determining the agent's individual state. The DJAP algorithm begins execution with a tree consisting of a single leaf node. This leaf node represents a single state of the DJAP agent in which the actions of other agents are ignored. The leaf node contains a set of Q-values representing the expected utility of executing each possible action given the current state.

The leaf node contains a set of child fringe nodes indexed by the set of unused percepts (i.e. the action selections of other agents in the system). Each fringe node contains a set of joint Q-values which represent the expected utilities of each action selection given the current state (as indicated by the parent leaf node) and by the observed value of the unused percept to which the fringe node corresponds. An example of this structure for two unused percepts is shown in Figure 1.
[Figure 1 appeared here: a leaf node with its table of action utilities and two child fringe nodes, indexed by unused percepts 1 and 2. The numerical entries are not recoverable from this extraction.]

Figure 1. Structure of leaf and fringe nodes in the Dynamic Joint Action Perception Algorithm. Leaves expand along the unused percept which offers the greatest average increase in reward for the agent.

The agent is allowed to interact with the environment until each fringe node Q-value has been updated approximately k times, where k is a user-defined parameter. (For the experiments documented in this paper, a value of 50 was used for k.) At that point, one of the unused percepts is selected as the basis for an expansion of the tree. The selection criterion is based on the increase in expected reward obtainable by the agent if the unused percept in question were incorporated as part of the agent's state.

For example, in Figure 1, the agent could increase its average expected reward from 1 to 2 if it were to incorporate unused percept 1 as part of its internal state, because for each possible percept value, there is an action option for the agent which provides a reward of 2. Unused percept 2, in contrast, does not allow an increase in expected reward. Even when the agent can perceive the values of unused percept 2, it can obtain a reward of 2 only approximately 1/3 of the time, regardless of its action selections. Thus, in this example, the leaf node would be expanded along unused percept 1.
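To make the selection criterion concrete, here is a small illustrative sketch (my own reconstruction from the description above, not code from the paper; it weights each percept value equally, which the paper does not specify): for each unused percept, average the best achievable Q-value over that percept's values, and expand along the percept whose average exceeds the current leaf's best Q-value by the most.

```python
def expansion_gain(leaf_q, fringe_q):
    """Expected average increase in reward if a fringe percept is promoted.

    leaf_q:   dict action -> Q-value at the current leaf
    fringe_q: dict percept_value -> (dict action -> Q-value)
    """
    current_best = max(leaf_q.values())
    # Average, over percept values, of the best action available given that value.
    avg_best = sum(max(q.values()) for q in fringe_q.values()) / len(fringe_q)
    return avg_best - current_best

def choose_percept_to_expand(leaf_q, fringes):
    """fringes: dict percept_name -> fringe_q table. Returns the percept with the greatest gain."""
    return max(fringes, key=lambda p: expansion_gain(leaf_q, fringes[p]))
```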
When a leaf node is expanded, it is replaced by a branch node. Each branch node has one child for each possible value of the unused percept which was selected for expansion. Each newly created branch node contains a set of child leaf nodes. The Q-values of these leaf nodes are taken from the corresponding elements in the Q-value table of the fringe node for the percept along which the tree was expanded. Each newly created leaf node generates a set of fringe nodes based on all of the remaining unused percepts. The initial Q-values for these fringe nodes are generalized from the leaf node Q-values: fringe node Q-values are initialized based on the Q-value for each action selection, regardless of the value of the unused percept which the fringe node represents. Fringe node Q-value distinctions based on the percept values will be learned through further interactions with the environment.

Leaf node expansion continues until some user-defined stopping criterion is reached. Examples of potential stopping criteria include a minimum threshold on the increase in expected reward required to qualify a percept for expansion, an upper bound on the depth of the tree, or a limit on the number of nodes in the tree.

The version of DJAP used for this paper implements no stopping criterion at all. The tree is continually expanded throughout the training period. Because previously learned Q-values are generalized to newly created fringe nodes, overexpansion is not detrimental to system performance in this case, although it does have a negative impact on resource usage and adaptability to subsequent changes in the environment.
In summary, each branch of the decision tree represents an available percept (i.e. the action selections of another agent). Each branch node has one child for each possible value of the percept in question. Each leaf node of the tree represents a state of the DJAP agent, with each state corresponding to a specific combination of actions of other agents. Each leaf node also maintains a set of fringe nodes, with one fringe node for every available percept which has not been used in that section of the tree. Leaf nodes are expanded along the unused percept which offers the greatest potential increase in average reward. Expansion of the tree continues until a user-defined stopping criterion is reached.
3.2. Learning Rate
A critical factor for any Q-learning algorithm is the learning rate used. In the DJAP algorithm, the objective is for fringe node Q-values to converge to nearly optimal Q-values before expansion of the parent leaf node occurs. Learning rates are therefore dependent on the user-defined value k, the average number of updates received by each fringe node Q-value before expansion occurs.

In the current implementation of the DJAP algorithm, each leaf node and each fringe node maintains individual learning rates for each Q-value. These learning rates are initialized to 0.1 for fringe Q-values. Newly created leaf nodes "inherit" the final learning rates of the fringe nodes from which they are created. Normally, the inherited learning rate is approximately 0.01. (The root leaf is an exception. It uses an initialization value of 0.1.) The learning rate of each Q-value is decayed by a factor of β each time the Q-value is updated.
The objective is to ensure that the learning rate has decreased to a target value of 0.01 for fringe nodes and 0.001 for leaf nodes by the time k updates per fringe node Q-value have occurred.

For fringe nodes, the value of β is determined by the equation

$$\beta = \frac{1}{k}\,\ln\!\left(\frac{0.01}{0.1}\right)$$

For leaf nodes, the value of β is determined by

$$\beta = \frac{1}{kp}\,\ln\!\left(\frac{0.001}{\alpha}\right)$$

where p is the average number of possible percept values per fringe node and α is the average learning rate of the leaf node's current Q-values. (Recall that when a new leaf node is created, it inherits the Q-values and current learning rates of the fringe node which was selected for expansion.) For the root leaf node, α = 0.1.
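Read literally, these formulas give a β such that multiplying the learning rate by e^β on every update carries it from its initial value to the target after the expected number of updates. A tiny sanity-check sketch (my own reading of the formulas, not code from the paper):

```python
import math

def fringe_beta(k, start=0.1, target=0.01):
    """beta = (1/k) * ln(target/start); decaying by exp(beta) per update
    takes the learning rate from `start` to `target` after k updates."""
    return math.log(target / start) / k

k = 50
beta = fringe_beta(k)
rate = 0.1
for _ in range(k):
    rate *= math.exp(beta)
print(round(rate, 4))  # ~0.01
```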
3.3. Determining the Optimal Policy
When selecting actions for execution once the learning phase is complete, the DJAP algorithm encounters a problem. The state space of the agent is partially defined in terms of the action selections of other agents. But these action selections cannot be known until after the agent has acted. How can the agent know which action to perform if it does not know what state it is in?

To address this problem, the algorithm uses an optimistic assumption [5]. The agent simply assumes that all other agents will act to maximize its reward. It therefore selects the action which will permit the agent's most-preferred joint action to be executed. Assuming that all other agents have learned to perceive the same preferred joint action, and that the optimistic assumption holds, the system will exhibit optimal behavior.
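As a rough illustration of this optimistic selection rule (my own sketch with hypothetical names, not the authors' code): the agent scans the Q-table of its current leaf over all (own action, perceived joint context) pairs and commits to the own-action component of the best entry.

```python
def optimistic_action(q_table):
    """q_table: dict (own_action, other_agents_actions) -> Q-value.
    Optimistically assume the others will play their part of the best joint entry."""
    (own_action, _others) = max(q_table, key=q_table.get)
    return own_action
```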
4. Test Problem: Multiagent Penny-Matching

The DJAP algorithm was tested on a task structure which is reminiscent of a classic multiagent coordination problem: the matching pennies game.

In the matching pennies game, two agents are asked to pick a side of a penny: heads or tails. If both agents choose the same side, then they receive a payoff of 1. If they choose different sides, they receive a payoff of -1. The objective is for the agents to learn to coordinate their actions to obtain optimal payoff.

The implemented version of the matching pennies game differs from the classic example in several ways.

Group Size: The game is played in groups of n agents. Each group of agents tries to coordinate the actions of all group members.

Reward Structure: The agents are not only required to coordinate their actions by selecting a specific penny side, but they must do so in the face of a temptation to "defect". If all agents pick heads, then all agents receive a reward of 1. However, if one or more agents choose tails, then each agent that selected heads receives a reward of -2 and each agent that selected tails receives a reward of 0.

An example payoff matrix of this reward structure for a group size of four is shown in Figure 2. In general, this reward structure is not learnable by reinforcement learning agents unless they are able to see the action selections of their counterparts.

[Figure 2 appeared here: the 16 payoff vectors for the four agents, one per joint action — for example (1,1,1,1) when all four agents choose heads, and vectors mixing 0 (for an agent that chose tails) and -2 (for an agent that chose heads when not everyone did) otherwise.]

Figure 2. Payoff matrix for four agents playing a variant of the matching pennies game. Grid values represent the payoffs for agents A, B, C, and D, respectively.

Multiple Groups: The playing environment consists of m groups of n agents playing the matching pennies game simultaneously. Agents are given no information about the size or number of the playing groups, nor do they know which other agents are in their groups.

Nondeterminism: With 5% probability on every round, someone bumps the virtual playing table and all pennies are flipped to random sides. Thus the agents' rewards are not always correlated with their action selections.
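To pin down the reward structure just described, here is a minimal sketch of one group's payoffs (my own illustration, not the authors' code; the 'H'/'T' encoding is my choice):

```python
import random

def group_payoffs(choices, bump_prob=0.05):
    """choices: list of 'H'/'T' picks for one group.
    Returns the per-agent rewards, including the 5% table-bump nondeterminism."""
    if random.random() < bump_prob:                      # someone bumps the table
        choices = [random.choice("HT") for _ in choices]
    if all(c == "H" for c in choices):
        return [1] * len(choices)                         # everyone matched on heads
    return [-2 if c == "H" else 0 for c in choices]       # heads is punished otherwise
```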
The penny-matching environment shares many characteristics with more situated problems such as robot soccer, formation flying, and rendezvous tasks. Each agent is a member of a much larger global system and must learn to coordinate its actions with the actions of some but not all of the other agents in the system in order to achieve desirable results. The agent does not know in advance which members of the system will have a significant effect on its rewards. This subset must be learned.
5. Results
The Dynamic Joint Action Perception algorithm was tested in an environment consisting of 32 agents with 4 agents per group. This creates a system joint action space of 2^32 distinct action combinations.

Three types of Q-learning agents were compared in this environment: DJAP agents, more traditional joint action learning (JAL) agents, and independently learning (IL) agents. The DJAP agents use the Dynamic Joint Action Perception algorithm described in this paper to learn a variable-resolution joint action space. The JAL agents use a hand-designed joint action space: each agent was allowed to see the action selections of the three other agents in its playing group, creating a total of 2^3 perceived joint actions and 2^4 Q-values. The independently learning agents execute a normal Q-learning algorithm, without regard for the behavior (or even the existence) of the other agents in the system.

During the learning period, each agent executed the action with the highest Q-value (or the action which enabled the maximal joint action, in the case of DJAP and JAL agents) with 80% probability. A random action selection was executed with 20% probability.

[Figure 3 appeared here: relative performance of the DJAP, JAL, and IL agents plotted against training iterations.]

Figure 3. Performance of DJAP, JAL, and IL agents on the matching pennies game. Average of 10 trials.
Figure 3 shows the results for each of these algorithms in the matching pennies environment. As one might expect, the JAL agents learn the task most quickly, as they do not have to spend any time learning which percepts are relevant to their rewards. The DJAP agents also learn surprisingly quickly. Their overall performance is slightly impaired, however, because the algorithm is not guaranteed to split along the correct percepts. Thus, although most agents learn optimal policies, some playing groups do not learn to obtain optimal rewards. Additional training time can help to improve performance, but because each split in the DJAP tree increases the total number of states, an incorrect split early in training may take a very long time to overcome, even if the correct split is taken later on.
[Figure 4 appeared here: the number of leaf nodes in the DJAP tree plotted against training iterations.]

Figure 4. Number of leaf nodes created per agent by the DJAP algorithm when learning the matching pennies game. Average of 10 trials.

Figure 4 shows the number of leaf nodes created by the DJAP agents as a function of the number of training iterations. Analysis of this graph presents a surprising
result. At 200 interactions, the point at which DJAP cumulative reward jumps to 68 in Figure 3, the DJAP agents have only two leaf nodes each: they are only perceiving the actions of one other agent. This is less information than one would expect the agents to be able to learn an efficient policy with. However, the probabilistic exploration algorithm used by the agents allows them to learn the task with less information than they would require if completely random exploration were used. This result is significant because it demonstrates that even in simple tasks, the DJAP algorithm can achieve close to the same performance as a hand-designed algorithm, but with fewer state distinctions.

One difficulty that arose with the DJAP algorithm during testing is its sensitivity to the exploration strategy used by the agents. In many cases, the 80%-20% exploration strategy used to generate Figure 3 was effective. In other cases, however, particularly when the number of agents interacting in the environment was small, this exploration strategy failed to produce a desirable policy. A completely random exploration strategy was similarly irregular in effectiveness. This sensitivity to the exploration pattern used represents a significant area for future research.
6. Conclusion
The Dynamic Joint Action Perception (DJAP) algorithm allows Q-learning agents to dynamically create joint action spaces in environments with large numbers of interacting agents. This is of value because hand-coding agent couplings for joint action learning systems is often impractical. The empirical results presented in this paper indicate that, at least for some
problems, DJAP agents can learn successful policies with a relatively small joint action space. In some cases, DJAP can achieve reasonable performance with a smaller joint action space than that used by a hand-coded set of joint action learners.

The DJAP algorithm offers several potential avenues for future research. The sensitivity of the DJAP algorithm to the exploration strategies used by the agents has already been mentioned. A better understanding of this sensitivity and the means by which it may be predicted or avoided would be desirable. Another avenue for future research involves the stopping criterion for leaf node expansion. A comparison of various stopping criteria and their relative advantages and disadvantages would be of significant value. The possibility of pruning to eliminate unnecessary state space distinctions and reduce tree size should be investigated, as should methods of seeking higher-order correlations between unused percepts and observed rewards. Finally, the DJAP algorithm should be applied to a complex, real-world problem to determine whether the DJAP advantages observed in the matching pennies game extend to less controlled environments.

References

[1] David Chapman. Penguins can make cake. AI Magazine, 10(4):45-50, 1989.
[2] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In AAAI/IAAI, pages 746-752, 1998.
[3] J. Hu and M. Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. In Proceedings of the 15th International Conference on Machine Learning, pages 242-250, San Francisco, 1998. Morgan Kaufmann.
[4] J. Hu and M. Wellman. Experimental results on Q-learning for general-sum stochastic games. 2000.
[5] Martin Lauer and Martin Riedmiller. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In Proceedings of the 17th International Conference on Machine Learning, pages 535-542, San Francisco, 2000. Morgan Kaufmann.
[6] Michael Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the 11th International Conference on Machine Learning, 1994.
[7] M. J. Mataric. Learning social behavior. Robotics and Autonomous Systems, 20:191-204, 1997.
[8] Andrew McCallum. Instance-based utile distinctions for reinforcement learning with hidden state. In Proceedings of the Twelfth International Conference on Machine Learning, pages 387-395, 1995.
[9] Andrew McCallum. Learning to use selective attention and short-term memory in sequential tasks. In From Animals to Animats, Fourth International Conference on Simulation of Adaptive Behavior, Cape Cod, Massachusetts, 1996.
[10] A. W. Moore and C. G. Atkeson. The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces. Machine Learning, 21(4):199-234, December.
[11] M. Mundhe and S. Sen. Evaluating concurrent reinforcement learners. In Learning about, from and with other agents workshop, IJCAI, 1999.
[12] Ann Nowe, Katja Verbeeck, and Tom Lenaerts. Learning agents in a homo egualis society. In Proceedings of the Learning Agents Workshop, Agents 2001 Conference, Montreal, Canada, 2001.
[13] Leonid Peshkin, Kee-Eung Kim, Nicolas Meuleau, and Leslie P. Kaelbling. Learning to cooperate via policy search. In Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 307-314, San Francisco, CA, 2000. Morgan Kaufmann.
[14] R. Sun and D. Qi. Rationality assumptions and optimality of co-learning. In Design and Applications of Intelligent Agents, Lecture Notes in Artificial Intelligence, volume 1881.
[15] C. Watkins and P. Dayan. Technical note: Q-learning. Machine Learning, 8:279-292, 1992.
[16] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, University of Cambridge, 1989.
Abacus Sort
Abacus sort is an O(n) (linear time) sorting algorithm. It works most naturally for non-negative integers, but can be extended to real numbers.
Algorithm Description
Abacus sort is most easily described in an analogue fashion. Represent each of the initial n numbers to be sorted by its binary representation and then "stack" the numbers on to vertical abacus rods
by placing a bead to represent 1 and leaving a gap to represent 0. This requires a total of ceiling(log2 N) rods, where N is the maximum of the numbers to be sorted. The least significant rod
represents 2^0 and the most significant rod represents 2^k, the highest power of 2 less than or equal to N.
Then allow the beads to fall down the rods, so that each rod contains a stack of beads with no gaps in it. Read off the binary value of the bottom row of beads - this is the largest of the numbers.
Continue reading binary values up the rods, until n values have been read - the last value is the smallest of the numbers (it may be zero).
Encoding the binary numbers takes O(n) time. Letting the beads fall takes constant time. Decoding the sorted binary numbers takes O(n) time. Overall, the algorithm is O(n).
Since each bead represents the same binary digit before and after the sorting operation, the sum of the values represented by the bits does not change. Therefore, the sum of the list of numbers is
the same before and after sorting. Since the same number of numbers is returned, the mean of the list of numbers is also the same before and after sorting.
What more could you ask for from a sorting algorithm?!
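For the curious, here is a literal Python sketch of the procedure as described above (my own transcription, not code from this page): it stacks the binary "beads" on the rods, lets them fall, and reads the rows back off from the bottom up.

```python
def abacus_sort(nums):
    """Stack binary beads on rods, let them fall, then read rows bottom-up.
    The guarantees are exactly the ones the page advertises: same sum, same mean."""
    rods = max(nums).bit_length() if nums else 0
    # Count the beads on each rod: one bead per 1-bit in that position.
    beads = [sum((n >> r) & 1 for n in nums) for r in range(rods)]
    # Reading row i from the bottom: rod r contributes a bead while i < beads[r].
    return [sum(1 << r for r in range(rods) if i < beads[r]) for i in range(len(nums))]

print(abacus_sort([5, 1, 2]))  # [7, 1, 0] -- same sum and mean as the input
```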
Last updated: Saturday, 21 March, 2009; 17:12:45 PDT.
Copyright © 1990-2014, David Morgan-Mar. dmm@dangermouse.net
A. General formalism
B. Free energy of a ring structure
1. Statistical probability of the first loop
2. Fluctuations of the end point
3. Free energy of the second loop
4. Total free energy of a ring structure | {"url":"http://scitation.aip.org/content/aip/journal/jcp/129/6/10.1063/1.2967860","timestamp":"2014-04-16T16:35:27Z","content_type":null,"content_length":"95800","record_id":"<urn:uuid:537d5f61-29c1-4850-babb-fc8eb3da02e1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mount Ephraim Algebra 2 Tutor
Find a Mount Ephraim Algebra 2 Tutor
...I especially like to make students realize that math is not the enemy and that it is very useful for daily life. I served as an elementary school tutor during my first two years of college.
Before that, I tutored students in elementary math and science.
10 Subjects: including algebra 2, algebra 1, Latin, SAT math
...Scored 770/800 on SAT Reading in high school and 790/800 on January 26, 2013 test. Routinely score 800/800 on practice tests. Able to help students improve reading comprehension through
specific test-taking strategies and pinpoint necessary areas of vocabulary improvement.
19 Subjects: including algebra 2, calculus, statistics, geometry
Hi, my name is Sharon and I am a recent graduate of Rensselaer Polytechnic Institute. I graduated Magna Cum Laude with a degree in Biomedical Engineering and a minor in Sport Psychology. I am
currently tutoring students in subjects ranging from Chemistry and precalculus, to Geometry and English.
30 Subjects: including algebra 2, reading, English, biology
...For the SAT, I implement a results driven and rigorous 7 week strategy. PLEASE NOTE: I only take serious SAT students who have time, the drive, and a strong personal interest in learning the
tools and tricks to boost their score. Background: I graduated from UCLA, considered a New Ivy, with a B.S. in Integrative Biology and Physiology with an emphasis in physiology and human anatomy.
26 Subjects: including algebra 2, English, chemistry, reading
...I am a professional artist and I teach workshops and sell my art. As an illustrator, I have spent plenty of time learning and analyzing the different parts and pieces of an image, from anatomy
to composition to color, and can easily teach and critique. I took Algebra 1 in middle school.
19 Subjects: including algebra 2, calculus, geometry, trigonometry | {"url":"http://www.purplemath.com/Mount_Ephraim_algebra_2_tutors.php","timestamp":"2014-04-21T10:42:01Z","content_type":null,"content_length":"24264","record_id":"<urn:uuid:cc53d241-a956-4957-b4d0-c4677c955e41>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
Joe owns a toy car collection. Each car is stored in a box that is 3 in long, 2 in high, and 2.5 in wide. He has 40 cars and wants to pack them in a large box. Estimate the volume of a box that would be enough to hold the 40 toy cars.
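One reasonable way to estimate (not the only one): each small box holds 3 × 2 × 2.5 = 15 cubic inches, so 40 of them need at least

$$V \approx 40 \times (3 \times 2 \times 2.5)\ \text{in}^3 = 40 \times 15\ \text{in}^3 = 600\ \text{in}^3$$

so a box of roughly 600 cubic inches (for example, about 10 in × 10 in × 6 in) is a sensible estimate, ignoring packing slack.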
Hey again everyone,
The past couple of weeks I've been reading into matrices, but for the life of me I do not understand how you could use them efficiently for 3D programming. I've been programming in Java using NetBeans and the LWJGL library, which has a matrix class in it.
If someone could guide me and give me some examples of how it is applied in a 3D environment, I'd appreciate it.
Thanks in advance! | {"url":"http://www.dreamincode.net/forums/topic/269000-understanding-how-matrix-math-is-applied/page__pid__1565195__st__0","timestamp":"2014-04-21T00:53:26Z","content_type":null,"content_length":"139520","record_id":"<urn:uuid:26289b43-769b-45c0-9645-e21d5852b9d3>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
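Not a full answer, but here is the core idea in a language-agnostic sketch (shown in Python/NumPy purely for brevity; LWJGL's matrix class offers analogous translate/rotate/multiply operations — check its docs for the exact names): points become 4-component vectors, each transform is a 4×4 matrix, and composing transforms is just matrix multiplication.

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_y(angle_rad):
    """4x4 rotation about the Y axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[0, 0], m[0, 2] = c, s
    m[2, 0], m[2, 2] = -s, c
    return m

# Compose: first rotate the model, then move it 5 units along +X.
model = translation(5, 0, 0) @ rotation_y(np.pi / 2)

point = np.array([1, 0, 0, 1])   # w = 1 marks a position rather than a direction
print(model @ point)             # approximately [5, 0, -1, 1]
```

The same composition idea is what a model-view-projection matrix is: projection @ view @ model, applied to every vertex.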
common core math 1
All topics in Algebra 1, Geometry, and Algebra 2
1. Dissect all topics in Algebra 1, Geometry, and Algebra 2 and break down into individual objectives.
2. Rewrite each objective so that it reads like it is more in-depth , remove most of the algorithmic processes, and use the word “modeling” as much as possible.
Note: If done correctly, you should now have about 200 objectives
3. Make a chart that lists all objectives and pick different subsets of the 200 objectives and put them under labels for Common Core 1, 2, and 3.
4. Be sure that each state and each district/county within a state has a different subset of objectives for each course so students can never successfully move between states or counties.
5. Also be sure each course has a set of 5 disjoint groups with objectives under each group such as Algebra, Geometry, Probability,Trigonometry etc. so that students jump from topic to topic rather
than learn in a linear fashion. Math is NOT allowed to be linear, it should remain disjointed as much as possible.
6. Now you have Common Core 1, Common Core 2, and Common Core 3.
Other notes for a successful recipe:
* Try not to use textbooks
* Don’t give students any reference material to follow that relates to each topic as they do their homework every night
* Don’t align assessments with the standards or let a third party make assessments for you
* Make the objectives so difficult to follow that parents don’t have a clue what their students are doing and can’t provide support at home
Example of Chart of Creating Common Core
One state's choices for their objectives
Another state's choices for their objectives
(Examples of disconnect: Illinois – circles are covered in CCM2; in North Carolina, circles are covered in CCM3. Some states are covering Exponential functions in CCM1, others are waiting until
CCM2. Some are doing more Geometry in CCM1, others in CCM2, and some save most of the Geometry for CCM3! Other states are still following Common Core but on a traditional path which means using the
Common Core objectives but within the context of Algebra 1, Geometry, and Algebra 2 .)
This is my lovely state – they won’t give me a link but a download only so I posted their whole curriculum here.
Math I
The Real Number System N-RN
Extend the properties of exponents to rational exponents.
N-RN.1 Explain how the definition of the meaning of rational exponents follows from extending the properties of integer exponents to those values, allowing for a notation for radicals in terms of
rational exponents. For example, we define 5^(1/3) to be the cube root of 5 because we want (5^(1/3))^3 = 5^((1/3)·3) to hold, so (5^(1/3))^3 must equal 5.
N-RN.2 Rewrite expressions involving radicals and rational exponents using the properties of exponents.
Note: At this level, focus on fractional exponents with a numerator of 1.
Quantities N-Q
Reason quantitatively and use units to solve problems.
N-Q.1Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in
graphs and data displays.
N-Q.2Define appropriate quantities for the purpose of descriptive modeling.
N-Q.3 Choose a level of accuracy appropriate to limitations on measurement when reporting quantities.
Seeing Structure in Expressions A-SSE
Interpret the structure of expressions.
A-SSE.1Interpret expressions that represent a quantity in terms of its context.
1. Interpret parts of an expression, such as terms, factors, and coefficients.
2. Interpret complicated expressions by viewing one or more of their parts as a single entity. For example, interpret P(1+r)^n as the product of P and a factor not depending on P.
Note: At this level, limit to linear expressions, exponential expressions with integer exponents and quadratic expressions.
A-SSE.2Use the structure of an expression to identify ways to rewrite it. For example, see x^4 – y^4 as (x^2)^2 – (y^2)^2, thus recognizing it as a difference of squares that can be factored as (x^2
– y^2)(x^2 + y^2).
Write expressions in equivalent forms to solve problems.
A-SSE.3 Choose and produce an equivalent form of an expression to reveal and explain properties of the quantity represented by the expression.
1. Factor a quadratic expression to reveal the zeros of the function it defines.
Note: At this level, the limit is quadratic expressions of the form ax^2 + bx + c.
Arithmetic with Polynomials & Rational Expressions A-APR
Perform arithmetic operations on polynomials.
A-APR.1 Understand that polynomials form a system analogous to the integers, namely, they are closed under the operations of addition, subtraction, and multiplication; add, subtract, and multiply
Note: At this level, limit to addition and subtraction of quadratics and multiplication of linear expressions.
Creating Equations A-CED
Create equations that describe numbers or relationships.
A-CED.1Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions.
Note: At this level, focus on linear and exponential functions.
A-CED.2Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales.
Note: At this level, focus on linear, exponential and quadratic. Limit to situations that involve evaluating exponential functions for integer inputs.
A-CED.3Represent constraints by equations or inequalities, and by systems of equations and/or inequalities, and interpret solutions as viable or non- viable options in a modeling context. For
example, represent inequalities describing nutritional and cost constraints on combinations of different foods.
Note: At this level, limit to linear equations and inequalities.
A-CED.4Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations. For example, rearrange Ohm’s law V = IR to highlight resistance R.
Note: At this level, limit to formulas that are linear in the variable of interest, or to formulas involving squared or cubed variables.
Reasoning with Equations & Inequalities A-REI
Understand solving equations as a process of reasoning and explain the reasoning.
A-REI.1 Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation has a solution.
Construct a viable argument to justify a solution method.
Solve equations and inequalities in one variable.
A-REI.3Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters.
Solve systems of equations.
A-REI.5 Prove that, given a system of two equations in two variables, replacing one equation by the sum of that equation and a multiple of the other produces a system with the same solutions.
A-REI.6Solve systems of linear equations exactly and approximately (e.g., with graphs), focusing on pairs of linear equations in two variables.
Represent and solve equations and inequalities graphically.
A-REI.10Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line).
Note: At this level, focus on linear and exponential equations.
A-REI.11 Explain why the x-coordinates of the points where the graphs of the equations y = f(x) and
y = g(x) intersect are the solutions of the equation f(x) = g(x); find the solutions approximately, e.g., using technology to graph the functions, make tables of values, or find successive
approximations. Include cases where f(x) and/or g(x) are linear, polynomial, rational, absolute value, exponential, and logarithmic functions.
Note: At this level, focus on linear and exponential functions.
A-REI.12Graph the solutions to a linear inequality in two variables as a half- plane (excluding the boundary in the case of a strict inequality), and graph the solution set to a system of linear
inequalities in two variables as the intersection of the corresponding half-planes.
Interpreting Functions F-IF
Understand the concept of a function and use function notation.
F-IF.1Understand that a function from one set (called the domain) to another set (called the range) assigns to each element of the domain exactly one element of the range. If f is a function and x is
an element of its domain, then f(x) denotes the output of f corresponding to the input x. The graph of f is the graph of the equation y = f(x).
F-IF.2Use function notation, evaluate functions for inputs in their domains, and interpret statements that use function notation in terms of a context.
Note: At this level, the focus is linear and exponential functions.
F-IF.3 Recognize that sequences are functions, sometimes defined recursively, whose domain is a subset of the integers. For example, the Fibonacci sequence is defined recursively by f(0) = f(1) = 1,
f(n+1) = f(n) + f(n-1) for n ≥ 1.
Interpret functions that arise in applications in terms of the context.
F-IF.4For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal
description of the relationship. Key features include: intercepts; intervals where the function is increasing, decreasing, positive, or negative; relative maximums and minimums; symmetries; end
behavior; and periodicity.
Note: At this level, focus on linear, exponential and quadratic functions; no end behavior or periodicity.
F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes. For example, if the function h(n) gives the number of person-hours it takes
to assemble n engines in a factory, then the positive integers would be an appropriate domain for the function.
Note: At this level, focus on linear and exponential functions.
F-IF.6Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph.
Note: At this level, focus on linear functions and exponential functions whose domain is a subset of the integers.
Analyze functions using different representations.
F-IF.7Graph functions expressed symbolically and show key features of the graph, by hand in simple cases and using technology for more complicated cases.
a. Graph linear and quadratic functions and show intercepts, maxima, and minima.
e. Graph exponential and logarithmic functions, showing intercepts and end behavior, and trigonometric functions, showing period, midline, and amplitude.
Note: At this level, for part e, focus on exponential functions only.
F-IF.8Write a function defined by an expression in different but equivalent forms to reveal and explain different properties of the function.
a. Use the process of factoring and completing the square in a quadratic function to show zeros, extreme values, and symmetry of the graph, and interpret these in terms of a context.
Note: At this level, only factoring expressions of the form ax^2 + bx +c, is expected. Completing the square is not addressed at this level.
b. Use the properties of exponents to interpret expressions for exponential functions. For example, identify percent rate of change in functions such as y = (1.02)^t, y = (0.97)^t, y = (1.01)^(12t), y = (1.2)^(t/10), and classify them as representing exponential growth or decay.
F-IF.9Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a graph of one
quadratic function and an algebraic expression for another, say which has the larger maximum.
Note: At this level, focus on linear, exponential, and quadratic functions.
Building Functions F-BF
Build a function that models a relationship between two quantities.
F-BF.1Write a function that describes a relationship between two quantities.
1. Determine an explicit expression, a recursive process, or steps for calculation from a context.
2. Combine standard function types using arithmetic operations. For example, build a function that models the temperature of a cooling body by adding a constant function to a decaying exponential,
and relate these functions to the model.
Note: At this level, limit to addition or subtraction of constant to linear, exponential or quadratic functions or addition of linear functions to linear or quadratic functions.
F-BF.2 Write arithmetic and geometric sequences both recursively and with an explicit formula, use them to model situations, and translate between the two forms.
Note: At this level, formal recursive notation is not used. Instead, use of informal recursive notation (such as NEXT = NOW + 5 starting at 3) is intended.
Build new functions from existing functions.
F-BF.3 Identify the effect on the graph of replacing f(x) by f(x) + k, k f(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs.
Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
Note: At this level, limit to vertical and horizontal translations of linear and exponential functions. Even and odd functions are not addressed.
Linear, Quadratic, & Exponential Models F-LE
Construct and compare linear and exponential models and solve problems.
F-LE.1Distinguish between situations that can be modeled with linear functions and with exponential functions
1. Prove that linear functions grow by equal differences over equal intervals, and that exponential functions grow by equal factors over equal intervals.
2. Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
3. Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another.
F-LE.2 Construct linear and exponential functions, including arithmetic and geometric sequences, given a graph, a description of a relationship, or two input-output pairs (include reading these from
a table).
F-LE.3 Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
Note: At this level, limit to linear, exponential, and quadratic functions; general polynomial functions are not addressed.
Interpret expressions for functions in terms of the situation they model.
F-LE.5Interpret the parameters in a linear or exponential function in terms of a context.
Congruence G-CO
Experiment with transformations in the plane.
G-CO.1 Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a
circular arc.
Note: At this level, distance around a circular arc is not addressed.
Expressing Geometric Properties with Equations G-GPE
Use coordinates to prove simple geometric theorems algebraically.
G-GPE.4 Use coordinates to prove simple geometric theorems algebraically. For example, prove or disprove that a figure defined by four given points in the coordinate plane is a rectangle; prove or
disprove that the point (1, √3) lies on the circle centered at the origin and containing the point (0, 2).
Note:Conics is not the focus at this level, therefore the last example is not appropriate here.
G-GPE.5 Prove the slope criteria for parallel and perpendicular lines and use them to solve geometric problems (e.g., find the equation of a line parallel or perpendicular to a given line that passes
through a given point).
G-GPE.6 Find the point on a directed line segment between two given points that partitions the segment in a given ratio.
Note: At this level, focus on finding the midpoint of a segment.
G-GPE.7 Use coordinates to compute perimeters of polygons and areas of triangles and rectangles, e.g., using the distance formula.
Geometric Measurement & Dimension G-GMD
Explain volume formulas and use them to solve problems.
G-GMD.1 Give an informal argument for the formulas for the circumference of a circle, area of a circle, volume of a cylinder, pyramid, and cone. Use dissection arguments, Cavalieri’s principle, and
informal limit arguments.
Note: Informal limit arguments are not the intent at this level.
G-GMD.3 Use volume formulas for cylinders, pyramids, cones, and spheres to solve problems.*
Note: At this level, formulas for pyramids, cones and spheres will be given.
Interpreting Categorical & Quantitative Data S-ID
Summarize, represent, and interpret data on a single count or measurement variable.
S-ID.1 Represent data with plots on the real number line (dot plots, histograms, and box plots).
S-ID.2 Use statistics appropriate to the shape of the data distribution to compare center (median, mean) and spread (interquartile range, standard deviation) of two or more different data sets.
S-ID.3 Interpret differences in shape, center, and spread in the context of the data sets, accounting for possible effects of extreme data points (outliers).
Summarize, represent, and interpret data on two categorical and quantitative variables.
S-ID.5 Summarize categorical data for two categories in two-way frequency tables. Interpret relative frequencies in the context of the data (including joint, marginal, and conditional relative
frequencies). Recognize possible associations and trends in the data.
S-ID.6 Represent data on two quantitative variables on a scatter plot, and describe how the variables are related.
a. Fit a function to the data; use functions fitted to data to solve problems in the context of the data. Use given functions or choose a function suggested by the context. Emphasize linear and exponential models.
b. Informally assess the fit of a function by plotting and analyzing residuals.
Note: At this level, for part b, focus on linear models.
c. Fit a linear function for a scatter plot that suggests a linear association.
Interpret linear models.
S-ID.7 Interpret the slope (rate of change) and the intercept (constant term) of a linear model in the context of the data.
S-ID.8 Compute (using technology) and interpret the correlation coefficient of a linear fit.
S-ID.9 Distinguish between correlation and causation.
Math II
The Real Number System N-RN
Extend the properties of exponents to rational exponents.
N-RN.2 Rewrite expressions involving radicals and rational exponents using the properties of exponents.
Quantities N-Q
Reason quantitatively and use units to solve problems.
N-Q.1 Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in
graphs and data displays.
N-Q.2 Define appropriate quantities for the purpose of descriptive modeling.
N-Q.3 Choose a level of accuracy appropriate to limitations on measurement when reporting quantities.
Seeing Structure in Expressions A-SSE
Interpret the structure of expressions.
A-SSE.1 Interpret expressions that represent a quantity in terms of its context.
1. Interpret parts of an expression, such as terms, factors, and coefficients.
2. Interpret complicated expressions by viewing one or more of their parts as a single entity. For example, interpret P(1+r)^n as the product of P and a factor not depending on P.
Note: At this level include polynomial expressions
A-SSE.2 Use the structure of an expression to identify ways to rewrite it. For example, see x^4 – y^4 as
(x^2)^2 – (y^2)^2, thus recognizing it as a difference of squares that can be factored as (x^2 – y^2)(x^2 + y^2).
Write expressions in equivalent forms to solve problems.
A-SSE.3 Choose and produce an equivalent form of an expression to reveal and explain properties of the quantity represented by the expression.
1. Use the properties of exponents to transform expressions for exponential functions. For example, the expression 1.15^t can be rewritten as (1.15^(1/12))^(12t) ≈ 1.012^(12t) to reveal the approximate
equivalent monthly interest rate if the annual rate is 15%.
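Illustrative check of the example above (not part of the standard text). The sketch below assumes only the Python standard library; the 15% annual rate comes from the example, and the 3-year horizon is an arbitrary choice for the check.

    annual_factor = 1.15
    monthly_factor = annual_factor ** (1 / 12)   # about 1.0117, i.e. roughly 1.2% growth per month
    t = 3                                        # any number of years works for the check
    print(round(monthly_factor, 4))              # 1.0117
    print(annual_factor ** t)                    # 1.520875
    print(monthly_factor ** (12 * t))            # same value, by the power-of-a-power property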
Arithmetic with Polynomials & Rational Expressions A-APR
Perform arithmetic operations on polynomials.
A-APR.1 Understand that polynomials form a system analogous to the integers, namely, they are closed under the operations of addition, subtraction, and multiplication; add, subtract, and multiply polynomials.
Note: At this level, add and subtract any polynomial and extend multiplication to as many as three linear expressions.
Understand the relationship between zeros and factors of polynomials.
A-APR.3 Identify zeros of polynomials when suitable factorizations are available, and use the zeros to construct a rough graph of the function defined by the polynomial.
Note: At this level, limit to quadratic expressions.
Creating Equations A-CED
Create equations that describe numbers or relationships.
A-CED.1 Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions.
Note: At this level extend to quadratic and inverse variation (the simplest rational) functions and use common logs to solve exponential equations.
A-CED.2 Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales.
Note: At this level extend to simple trigonometric equations that involve right triangle trigonometry.
A-CED.3 Represent constraints by equations or inequalities, and by systems of equations and/or inequalities, and interpret solutions as viable or nonviable options in a modeling context. For example,
represent inequalities describing nutritional and cost constraints on combinations of different foods.
Note: Extend to linear-quadratic, and linear–inverse variation (simplest rational) systems of equations.
A-CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations. For example, rearrange Ohm’s law V = IR to highlight resistance R.
Note: At this level, extend to compound variation relationships.
Reasoning with Equations & Inequalities A-REI
Understand solving equations as a process of reasoning and explain the reasoning.
A-REI.1 Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation has a solution.
Construct a viable argument to justify a solution method.
Note: At this level, limit to factorable quadratics.
A-REI.2 Solve simple rational and radical equations in one variable, and give examples showing how extraneous solutions may arise.
Note: At this level, limit to inverse variation.
Solve equations and inequalities in one variable.
A-REI.4 Solve quadratic equations in one variable.
b. Solve quadratic equations by inspection (e.g., for x^2 = 49), taking square roots, completing the square, the quadratic formula and factoring, as appropriate to the initial form of the equation.
Recognize when the quadratic formula gives complex solutions and write them as a ± bi for real numbers a and b.
Note: At this level, limit solving quadratic equations by inspection, taking square roots, quadratic formula, and factoring when lead coefficient is one. Writing complex solutions is not expected;
however recognizing when the formula generates non-real solutions is expected.
Solve systems of equations.
A-REI.7 Solve a simple system consisting of a linear equation and a quadratic equation in two variables algebraically and graphically. For example, find the points of intersection between the line
y = –3x and the circle x^2 + y^2 = 3.
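Worked illustration of the example above (not part of the standard text): substituting the linear equation into the quadratic one reduces the system to a single equation in x.

$$y = -3x,\quad x^2 + y^2 = 3 \;\Longrightarrow\; x^2 + 9x^2 = 3 \;\Longrightarrow\; x^2 = \tfrac{3}{10} \;\Longrightarrow\; x = \pm\sqrt{\tfrac{3}{10}},\; y = \mp 3\sqrt{\tfrac{3}{10}},$$

so the line meets the circle at the two points (√(3/10), -3√(3/10)) and (-√(3/10), 3√(3/10)).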
Represent and solve equations and inequalities graphically.
A-REI.10 Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line).
Note: At this level, extend to quadratics.
A-REI.11 Explain why the x-coordinates of the points where the graphs of the equations y = f(x) and
y = g(x) intersect are the solutions of the equation f(x) = g(x); find the solutions approximately, e.g., using technology to graph the functions, make tables of values, or find successive
approximations. Include cases where f(x) and/or g(x) are linear, polynomial, rational, absolute value, exponential, and logarithmic functions.
Note: At this level, extend to quadratic functions.
Interpreting Functions F-IF
Understand the concept of a function and use function notation.
F-IF.2 Use function notation, evaluate functions for inputs in their domains, and interpret statements that use function notation in terms of a context.
Note: At this level, extend to quadratic, simple power, and inverse variation functions.
Interpret functions that arise in applications in terms of the context.
F-IF.4 For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal
description of the relationship. Key features include: intercepts; intervals where the function is increasing, decreasing, positive, or negative; relative maximums and minimums; symmetries; end
behavior; and periodicity.
Note: At this level, limit to simple trigonometric functions (sine, cosine, and tangent in standard position) with angle measures of 180° or less. Periodicity not addressed.
F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes. For example, if the function h(n) gives the number of person-hours it takes to
assemble n engines in a factory, then the positive integers would be an appropriate domain for the function.
Note: At this level, extend to quadratic, right triangle trigonometry, and inverse variation functions.
Analyze functions using different representations.
F-IF.7 Graph functions expressed symbolically and show key features of the graph, by hand in simple cases and using technology for more complicated cases.
1. Graph square root, cube root, and piecewise-defined functions, including step functions and absolute value functions.
2. Graph exponential and logarithmic functions, showing intercepts and end behavior, and trigonometric functions, showing period, midline, and amplitude.
Note: At this level, extend to simple trigonometric functions (sine, cosine, and tangent in standard position)
F-IF.8 Write a function defined by an expression in different but equivalent forms to reveal and explain different properties of the function.
1. Use the process of factoring and completing the square in a quadratic function to show zeros, extreme values, and symmetry of the graph, and interpret these in terms of a context.
Note: At this level, completing the square is still not expected.
F-IF.9 Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a graph of one
quadratic function and an algebraic expression for another, say which has the larger maximum.
Note: At this level, extend to quadratic, simple power, and inverse variation functions.
Building Functions F-BF
Build a function that models a relationship between two quantities.
F-BF.1 Write a function that describes a relationship between two quantities.
1. Determine an explicit expression, a recursive process, or steps for calculation from a context.
Note: Continue to allow informal recursive notation through this level.
1. Combine standard function types using arithmetic operations. For example, build a function that models the temperature of a cooling body by adding a constant function to a decaying exponential,
and relate these functions to the model.
Build new functions from existing functions.
F-BF.3 Identify the effect on the graph of replacing f(x) by f(x) + k, kf(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs.
Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
Note: At this level, extend to quadratic functions and kf(x).
Congruence G-CO
Experiment with transformations in the plane
G-CO.2 Represent transformations in the plane using, e.g., transparencies and geometry software; describe transformations as functions that take points in the plane as inputs and give other points as
outputs. Compare transformations that preserve distance and angle to those that do not (e.g., translation versus horizontal stretch).
G-CO.3 Given a rectangle, parallelogram, trapezoid, or regular polygon, describe the rotations and reflections that carry it onto itself.
G-CO.4 Develop definitions of rotations, reflections, and translations in terms of angles, circles, perpendicular lines, parallel lines, and line segments.
G-CO.5 Given a geometric figure and a rotation, reflection, or translation, draw the transformed figure using, e.g., graph paper, tracing paper, or geometry software. Specify a sequence of
transformations that will carry a given figure onto another.
Understand congruence in terms of rigid motions
G-CO.6 Use geometric descriptions of rigid motions to transform figures and to predict the effect of a given rigid motion on a given figure; given two figures, use the definition of congruence in
terms of rigid motions to decide if they are congruent.
G-CO.7 Use the definition of congruence in terms of rigid motions to show that two triangles are congruent if and only if corresponding pairs of sides and corresponding pairs of angles are congruent.
G-CO.8 Explain how the criteria for triangle congruence (ASA, SAS, and SSS) follow from the definition of congruence in terms of rigid motions.
Prove geometric theorems
G-CO.10 Prove theorems about triangles. Theorems include: measures of interior angles of a triangle sum to 180°; base angles of isosceles triangles are congruent; the segment joining midpoints of two
sides of a triangle is parallel to the third side and half the length; the medians of a triangle meet at a point.
Note: At this level, include measures of interior angles of a triangle sum to 180° and the segment joining midpoints of two sides of a triangle is parallel to the third side and half the length.
Make geometric constructions
G-CO.13 Construct an equilateral triangle, a square, and a regular hexagon inscribed in a circle.
Similarity, Right Triangles, & Trigonometry G-SRT
Understand similarity in terms of similarity transformations
G-SRT.1 Verify experimentally the properties of dilations given by a center and a scale factor:
1. A dilation takes a line not passing through the center of the dilation to a parallel line, and leaves a line passing through the center unchanged.
2. The dilation of a line segment is longer or shorter in the ratio given by the scale factor.
Define trigonometric ratios and solve problems involving right triangles
G-SRT.6 Understand that by similarity, side ratios in right triangles are properties of the angles in the triangle, leading to definitions of trigonometric ratios for acute angles.
G-SRT.7 Explain and use the relationship between the sine and cosine of complementary angles.
G-SRT.8 Use trigonometric ratios and the Pythagorean Theorem to solve right triangles in applied problems.
Apply trigonometry to general triangles
G-SRT.9(+) Derive the formula A = 1/2 ab sin(C) for the area of a triangle by drawing an auxiliary line from a vertex perpendicular to the opposite side.
G-SRT.11(+) Understand and apply the Law of Sines and the Law of Cosines to find unknown measurements in right and non-right triangles (e.g., surveying problems, resultant forces).
Expressing Geometric Properties with Equations G-GPE
Translate between the geometric description and the equation for a conic section
G-GPE.1 Derive the equation of a circle of given center and radius using the Pythagorean Theorem; complete the square to find the center and radius of a circle given by an equation.
Note: At this level, derive the equation of the circle using the Pythagorean Theorem.
G-GPE.6 Find the point on a directed line segment between two given points that partitions the segment in a given ratio.
Geometric Measurement and Dimension G-GMD
Visualize relationships between two-dimensional and three-dimensional objects
G-GMD.4 Identify the shapes of two-dimensional cross-sections of three-dimensional objects, and identify three-dimensional objects generated by rotations of two-dimensional objects.
Modeling with Geometry G-MG
Apply geometric concepts in modeling situations
G-MG.1 Use geometric shapes, their measures, and their properties to describe objects (e.g., modeling a tree trunk or a human torso as a cylinder).
G-MG.2 Apply concepts of density based on area and volume in modeling situations (e.g., persons per square mile, BTUs per cubic foot).
G-MG.3 Apply geometric methods to solve design problems (e.g., designing an object or structure to satisfy physical constraints or minimize cost; working with typographic grid systems based on ratios).
Making Inferences & Justifying Conclusions S-IC
Understand and evaluate random processes underlying statistical experiments
S-IC.2 Decide if a specified model is consistent with results from a given data-generating process, e.g., using simulation. For example, a model says a spinning coin falls heads up with probability
0.5. Would a result of 5 tails in a row cause you to question the model?
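Illustrative simulation for the spinning-coin example (not part of the standard text). The sketch assumes only the Python standard library; the number of trials is arbitrary.

    import random
    trials = 100_000
    five_tails = sum(all(random.random() < 0.5 for _ in range(5)) for _ in range(trials))
    print(five_tails / trials)   # close to 0.5 ** 5 = 0.03125

Under the fair-coin model, five tails in a row occurs about 3% of the time, so that single result is only weak evidence against the model.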
Make inferences and justify conclusions from sample surveys, experiments, and observational studies
S-IC.6 Evaluate reports based on data.
Conditional Probability and the Rules of Probability S-CP
Understand independence and conditional probability and use them to interpret data
S-CP.1 Describe events as subsets of a sample space (the set of outcomes) using characteristics (or categories) of the outcomes, or as unions, intersections, or complements of other events (“or,”
“and,” “not”).
S-CP.2 Understand that two events A and B are independent if the probability of A and B occurring together is the product of their probabilities, and use this characterization to determine if they
are independent.
S-CP.3 Understand the conditional probability of A given B as P(A and B)/P(B), and interpret independence of A and B as saying that the conditional probability of A given B is the same as the
probability of A, and the conditional probability of B given A is the same as the probability of B.
S-CP.4 Construct and interpret two-way frequency tables of data when two categories are associated with each object being classified. Use the two-way table as a sample space to decide if events are
independent and to approximate conditional probabilities. For example, collect data from a random sample of students in your school on their favorite subject among math, science, and English.
Estimate the probability that a randomly selected student from your school will favor science given that the student is in tenth grade. Do the same for other subjects and compare the results.
S-CP.5 Recognize and explain the concepts of conditional probability and independence in everyday language and everyday situations. For example, compare the chance of having lung cancer if you are a
smoker with the chance of being a smoker if you have lung cancer.
Use the rules of probability to compute probabilities of compound events in a uniform probability model
S-CP.6 Find the conditional probability of A given B as the fraction of B’s outcomes that also belong to A, and interpret the answer in terms of the model.
S-CP.7 Apply the Addition Rule, P(A or B) = P(A) + P(B) – P(A and B), and interpret the answer in terms of the model.
S-CP.8 (+) Apply the general Multiplication Rule in a uniform probability model, P(A and B) = P(A)P(B|A) = P(B)P(A|B), and interpret the answer in terms of the model.
S-CP.9 (+) Use permutations and combinations to compute probabilities of compound events and solve problems.
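Illustrative computation for S-CP.9 (not part of the standard text): combinations can be used directly to evaluate a compound-event probability. The sketch assumes Python 3.8+ for math.comb; the card-hand scenario is a made-up example.

    from math import comb
    # Probability that a 5-card hand dealt from a standard 52-card deck is all hearts:
    p_all_hearts = comb(13, 5) / comb(52, 5)
    print(p_all_hearts)          # 1287 / 2598960 ≈ 0.000495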
Math III
The Real Number System N-RN
Use properties of rational and irrational numbers.
N-RN.3 Explain why the sum or product of two rational numbers is rational; that the sum of a rational number and an irrational number is irrational; and that the product of a nonzero rational number
and an irrational number is irrational.
Quantities N-Q
Reason quantitatively and use units to solve problems.
N-Q.1 Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in
graphs and data displays.
N-Q.2 Define appropriate quantities for the purpose of descriptive modeling.
N-Q.3 Choose a level of accuracy appropriate to limitations on measurement when reporting quantities.
The Complex Number System N-CN
Perform arithmetic operations with complex numbers.
N-CN.1 Know there is a complex number i such that i^2 = –1, and every complex number has the form a + bi with a and b real.
N-CN.2 Use the relation i^2 = –1 and the commutative, associative, and distributive properties to add, subtract, and multiply complex numbers.
Use complex numbers in polynomial identities and equations.
N-CN.7 Solve quadratic equations with real coefficients that have complex solutions.
N-CN.9 (+) Know the Fundamental Theorem of Algebra; show that it is true for quadratic polynomials.
Seeing Structure in Expressions A-SSE
Interpret the structure of expressions.
A-SSE.1 Interpret expressions that represent a quantity in terms of its context.
a. Interpret parts of an expression, such as terms, factors, and coefficients.
b. Interpret complicated expressions by viewing one or more of their parts as a single entity. For example, interpret P(1+r)^n as the product of P and a factor not depending on P.
A-SSE.2 Use the structure of an expression to identify ways to rewrite it. For example, see x^4 – y^4 as (x^2)^2 – (y^2)^2, thus recognizing it as a difference of squares that can be factored as (x^2
– y^2)(x^2 + y^2).
Write expressions in equivalent forms to solve problems.
A-SSE.3 Choose and produce an equivalent form of an expression to reveal and explain properties of the quantity represented by the expression.
1. Complete the square in a quadratic expression to reveal the maximum or minimum value of the function it defines.
A-SSE.4 Derive the formula for the sum of a finite geometric series (when the common ratio is not 1), and use the formula to solve problems. For example, calculate mortgage payments.
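Sketch of the derivation named in A-SSE.4 (for illustration only). Write the finite geometric series and its multiple by the common ratio r ≠ 1:

$$S = a + ar + ar^2 + \cdots + ar^{n-1}, \qquad rS = ar + ar^2 + \cdots + ar^{n};$$

subtracting gives $S - rS = a - ar^{n}$, so

$$S = \frac{a\left(1 - r^{n}\right)}{1 - r}.$$

A mortgage-payment calculation is one application: the present value of the n equal payments forms such a series.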
Arithmetic with Polynomials and Rational Expressions A-APR
Perform arithmetic operations on polynomials.
A-APR.1 Understand that polynomials form a system analogous to the integers, namely, they are closed under the operations of addition, subtraction, and multiplication; add, subtract, and multiply
Understand the relationship between zeros and factors of polynomials.
A-APR.2 Know and apply the Remainder Theorem: For a polynomial p(x) and a number a, the remainder on division by x – a is p(a), so p(a) = 0 if and only if (x – a) is a factor of p(x).
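Illustrative check of the Remainder Theorem in A-APR.2 (not part of the standard text). The polynomial below is a made-up example; the sketch assumes only the Python standard library and uses synthetic division (Horner's scheme).

    coeffs = [1, -6, 11, -6]          # p(x) = x^3 - 6x^2 + 11x - 6, highest degree first

    def remainder_on_division_by(a, coeffs):
        # Horner's scheme: the running value after the last coefficient is the
        # remainder of p(x) on division by (x - a), which equals p(a).
        r = 0
        for c in coeffs:
            r = r * a + c
        return r

    print(remainder_on_division_by(1, coeffs))   # 0, so (x - 1) is a factor of p(x)
    print(remainder_on_division_by(4, coeffs))   # 6, which is exactly p(4)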
A-APR.3 Identify zeros of polynomials when suitable factorizations are available, and use the zeros to construct a rough graph of the function defined by the polynomial.
Use polynomial identities to solve problems.
A-APR.4 Prove polynomial identities and use them to describe numerical relationships. For example, the polynomial identity (x^2 + y^2)^2 = (x^2 – y^2)^2 + (2xy)^2 can be used to generate Pythagorean triples.
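Illustrative use of the identity in A-APR.4 (not part of the standard text): for integers x > y > 0, the triple (x^2 - y^2, 2xy, x^2 + y^2) is Pythagorean. A minimal sketch in Python:

    for x in range(2, 5):
        for y in range(1, x):
            a, b, c = x*x - y*y, 2*x*y, x*x + y*y
            assert a*a + b*b == c*c          # the identity guarantees this
            print(a, b, c)                   # 3 4 5, 8 6 10, 5 12 13, 15 8 17, ...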
Rewrite rational expressions.
A-APR.6 Rewrite simple rational expressions in different forms; write a(x)/b(x) in the form q(x) + r(x)/b(x), where a(x), b(x), q(x), and r(x) are polynomials with the degree of r(x)
less than the degree of b(x), using inspection, long division, or, for the more complicated examples, a computer algebra system.
A-APR.7 (+) Understand that rational expressions form a system analogous to the rational numbers, closed under addition, subtraction, multiplication, and division by a nonzero rational expression;
add, subtract, multiply, and divide rational expressions.
Note: Limit to rational expressions with constant, linear, and factorable quadratic terms.
Creating Equations A-CED
Create equations that describe numbers or relationships.
A-CED.1 Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions.
A-CED.2 Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales.
A-CED.3 Represent constraints by equations or inequalities, and by systems of equations and/or inequalities, and interpret solutions as viable or nonviable options in a modeling context. For example,
represent inequalities describing nutritional and cost constraints on combinations of different foods.
A-CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations. For example, rearrange Ohm’s law V = IR to highlight resistance R.
Reasoning with Equations & Inequalities A-REI
Understand solving equations as a process of reasoning and explain the reasoning.
A-REI.1 Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation has a solution.
Construct a viable argument to justify a solution method.
A-REI.2 Solve simple rational and radical equations in one variable, and give examples showing how extraneous solutions may arise.
Solve equations and inequalities in one variable.
A-REI.4 Solve quadratic equations in one variable.
1. Use the method of completing the square to transform any quadratic equation in x into an equation of the form (x – p)^2 = q that has the same solutions. Derive the quadratic formula from this form.
2. Solve quadratic equations by inspection (e.g., for x^2 = 49), taking square roots, completing the square, the quadratic formula and factoring, as appropriate to the initial form of the equation.
Recognize when the quadratic formula gives complex solutions and write them as a ± bi for real numbers a and b.
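Sketch of the derivation referenced in A-REI.4 part 1 (for illustration only), starting from ax^2 + bx + c = 0 with a ≠ 0 and completing the square:

$$x^2 + \frac{b}{a}x = -\frac{c}{a} \;\Longrightarrow\; \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2} \;\Longrightarrow\; x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$

When b^2 - 4ac < 0 the square root is not real, which is exactly the case that produces the complex solutions a ± bi mentioned in part 2.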
Represent and solve equations and inequalities graphically.
A-REI.10 Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line).
A-REI.11 Explain why the x-coordinates of the points where the graphs of the equations y = f(x) and
y = g(x) intersect are the solutions of the equation f(x) = g(x); find the solutions approximately, e.g., using technology to graph the functions, make tables of values, or find successive
approximations. Include cases where f(x) and/or g(x) are linear, polynomial, rational, absolute value, exponential, and logarithmic functions.
Interpreting Functions F-IF
Understand the concept of a function and use function notation.
F-IF.2 Use function notation, evaluate functions for inputs in their domains, and interpret statements that use function notation in terms of a context.
Interpret functions that arise in applications in terms of the context.
F-IF.4 For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal
description of the relationship. Key features include: intercepts; intervals where the function is increasing, decreasing, positive, or negative; relative maximums and minimums; symmetries; end
behavior; and periodicity.
F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes. For example, if the function h(n) gives the number of person-hours it takes
to assemble n engines in a factory, then the positive integers would be an appropriate domain for the function.
Analyze functions using different representations.
F-IF.7 Graph functions expressed symbolically and show key features of the graph, by hand in simple cases and using technology for more complicated cases.
1. Graph polynomial functions, identifying zeros when suitable factorizations are available, and showing end behavior.
2. Graph exponential and logarithmic functions, showing intercepts and end behavior, and trigonometric functions, showing period, midline, and amplitude.
F-IF.8 Write a function defined by an expression in different but equivalent forms to reveal and explain different properties of the function.
1. Use the process of factoring and completing the square in a quadratic function to show zeros, extreme values, and symmetry of the graph, and interpret these in terms of a context.
F-IF.9 Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a graph of one
quadratic function and an algebraic expression for another, say which has the larger maximum.
Building Functions F-BF
Build a function that models a relationship between two quantities.
F-BF.1 Write a function that describes a relationship between two quantities.
1. Determine an explicit expression, a recursive process, or steps for calculation from a context.
2. Combine standard function types using arithmetic operations. For example, build a function that models the temperature of a cooling body by adding a constant function to a decaying exponential,
and relate these functions to the model.
F-BF.2 Write arithmetic and geometric sequences both recursively and with an explicit formula, use them to model situations, and translate between the two forms.
Build new functions from existing functions.
F-BF.3 Identify the effect on the graph of replacing f(x) by f(x) + k, kf(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs.
Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
F-BF.4 Find inverse functions.
1. Solve an equation of the form f(x) = c for a simple function f that has an inverse and write an expression for the inverse. For example, f(x) = 2x^3 or f(x) = (x+1)/(x–1) for x ≠ 1.
Linear and Exponential Models F-LE
Construct and compare linear, quadratic, and exponential models and solve problems.
F-LE.3 Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
F-LE.4 For exponential models, express as a logarithm the solution to ab^ct = d where a, c, and d are numbers and the base b is 2, 10, or e; evaluate the logarithm using technology.
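Illustrative solution of ab^(ct) = d for t, as described in F-LE.4 (not part of the standard text). The numbers are made up; the sketch assumes only the Python standard library.

    import math
    a, b, c, d = 200, 2, 0.5, 3200        # solve 200 * 2**(0.5*t) = 3200
    t = math.log(d / a, b) / c            # t = log_b(d/a) / c
    print(t)                              # 8.0
    print(a * b ** (c * t))               # 3200.0, confirming the solution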
Trigonometric Functions F-TF
Extend the domain of trigonometric functions using the unit circle.
F-TF.1 Understand radian measure of an angle as the length of the arc on the unit circle subtended by the angle.
F-TF.2 Explain how the unit circle in the coordinate plane enables the extension of trigonometric functions to all real numbers, interpreted as radian measures of angles traversed counterclockwise
around the unit circle.
Model periodic phenomena with trigonometric functions.
F-TF.5 Choose trigonometric functions to model periodic phenomena with specified amplitude, frequency, and midline.
Prove and apply trigonometric identities.
F-TF.8 Prove the Pythagorean identity sin^2(θ) + cos^2(θ) = 1 and use it to find sin(θ), cos(θ), or tan(θ) given sin(θ), cos(θ), or tan(θ) and the quadrant of the angle.
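Worked illustration of F-TF.8 (not part of the standard text): suppose sin(θ) = 3/5 and θ lies in Quadrant II. From the Pythagorean identity,

$$\cos^2(\theta) = 1 - \sin^2(\theta) = 1 - \tfrac{9}{25} = \tfrac{16}{25},$$

and since cosine is negative in Quadrant II, cos(θ) = -4/5 and tan(θ) = sin(θ)/cos(θ) = -3/4.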
Congruence G-CO
Experiment with transformations in the plane
G-CO.1 Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a
circular arc.
Prove geometric theorems
G-CO.9 Prove theorems about lines and angles. Theorems include: vertical angles are congruent; when a transversal crosses parallel lines, alternate interior angles are congruent and corresponding
angles are congruent; points on a perpendicular bisector of a line segment are exactly those equidistant from the segment’s endpoints.
G-CO.10 Prove theorems about triangles. Theorems include: measures of interior angles of a triangle sum to 180°; base angles of isosceles triangles are congruent; the segment joining midpoints of two
sides of a triangle is parallel to the third side and half the length; the medians of a triangle meet at a point.
G-CO.11 Prove theorems about parallelograms. Theorems include: opposite sides are congruent, opposite angles are congruent, the diagonals of a parallelogram bisect each other, and conversely,
rectangles are parallelograms with congruent diagonals.
Make geometric constructions
G-CO.12 Make formal geometric constructions with a variety of tools and methods (compass and straightedge, string, reflective devices, paper folding, dynamic geometric software, etc.). Copying a
segment; copying an angle; bisecting a segment; bisecting an angle; constructing perpendicular lines, including the perpendicular bisector of a line segment; and constructing a line parallel to a
given line through a point not on the line.
Similarity, Right Triangles, & Trigonometry G-SRT
Understand similarity in terms of similarity transformations
G-SRT.2 Given two figures, use the definition of similarity in terms of similarity transformations to decide if they are similar; explain using similarity transformations the meaning of similarity
for triangles as the equality of all corresponding pairs of angles and the proportionality of all corresponding pairs of sides.
G-SRT.3 Use the properties of similarity transformations to establish the AA criterion for two triangles to be similar.
Prove theorems involving similarity
G-SRT.4 Prove theorems about triangles. Theorems include: a line parallel to one side of a triangle divides the other two proportionally, and conversely; the Pythagorean Theorem proved using triangle similarity.
G-SRT.5 Use congruence and similarity criteria for triangles to solve problems and to prove relationships in geometric figures.
Circles G-C
Understand and apply theorems about circles
G-C.1 Prove that all circles are similar.
G-C.2 Identify and describe relationships among inscribed angles, radii, and chords. Include the relationship between central, inscribed, and circumscribed angles; inscribed angles on a diameter are
right angles; the radius of a circle is perpendicular to the tangent where the radius intersects the circle.
G-C.3 Construct the inscribed and circumscribed circles of a triangle, and prove properties of angles for a quadrilateral inscribed in a circle.
Find arc lengths and areas of sectors of circles
G-C.5 Derive using similarity the fact that the length of the arc intercepted by an angle is proportional to the radius, and define the radian measure of the angle as the constant of proportionality;
derive the formula for the area of a sector.
Expressing Geometric Properties with Equations G-GPE
Translate between the geometric description and the equation for a conic section
G-GPE.1 Derive the equation of a circle of given center and radius using the Pythagorean Theorem; complete the square to find the center and radius of a circle given by an equation.
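Worked illustration of the completing-the-square direction of G-GPE.1 (not part of the standard text), on a made-up equation:

$$x^2 + y^2 - 4x + 6y - 12 = 0 \;\Longrightarrow\; (x - 2)^2 + (y + 3)^2 = 25,$$

so the circle has center (2, -3) and radius 5.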
G-GPE.2 Derive the equation of a parabola given a focus and directrix.
Modeling with Geometry G-MG
Apply geometric concepts in modeling situations
G-MG.3 Apply geometric methods to solve design problems (e.g., designing an object or structure to satisfy physical constraints or minimize cost; working with typographic grid systems based on ratios).
Interpreting Categorical and Quantitative Data S-ID
Summarize, represent, and interpret data on a single count or measurement variable
S-ID.4 Use the mean and standard deviation of a data set to fit it to a normal distribution and to estimate population percentages. Recognize that there are data sets for which such a procedure is
not appropriate. Use calculators, spreadsheets, and tables to estimate areas under the normal curve.
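Illustrative estimate of a population percentage from a normal model, in the spirit of S-ID.4 (not part of the standard text). The mean and standard deviation are made up; the sketch assumes Python 3.8+, whose statistics module provides NormalDist.

    from statistics import NormalDist
    heights = NormalDist(mu=170, sigma=7)            # hypothetical heights in cm
    within_one_sd = heights.cdf(177) - heights.cdf(163)
    print(round(within_one_sd, 4))                   # about 0.6827, the familiar 68% of the empirical rule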
Making Inferences and Justifying Conclusions S-IC
Understand and evaluate random processes underlying statistical experiments
S-IC.1 Understand statistics as a process for making inferences about population parameters based on a random sample from that population.
Make inferences and justify conclusions from sample surveys, experiments, and observational studies
S-IC.3 Recognize the purposes of and differences among sample surveys, experiments, and observational studies; explain how randomization relates to each.
S-IC.4 Use data from a sample survey to estimate a population mean or proportion; develop a margin of error through the use of simulation models for random sampling.
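Illustrative simulation-based margin of error for a sample proportion, as in S-IC.4 (not part of the standard text). The observed proportion and sample size are made up; the sketch assumes only the Python standard library.

    import random
    p_hat, n, trials = 0.62, 100, 10_000
    sims = sorted(
        sum(random.random() < p_hat for _ in range(n)) / n
        for _ in range(trials)
    )
    lo, hi = sims[int(0.025 * trials)], sims[int(0.975 * trials)]
    print(lo, hi)    # roughly 0.52 and 0.71; half the interval width is the margin of error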
S-IC.5 Use data from a randomized experiment to compare two treatments; use simulations to decide if differences between parameters are significant.
S-IC.6 Evaluate reports based on data.
Using Probability to Make Decisions S-MD
Use probability to evaluate outcomes of decisions
S-MD.6 (+) Use probabilities to make fair decisions (e.g., drawing by lots, using a random number generator).
S-MD.7 (+) Analyze decisions and strategies using probability concepts (e.g., product testing, medical testing, pulling a hockey goalie at the end of a game). | {"url":"http://www.zen5.me/a/tag/common-core-math-1/","timestamp":"2014-04-20T15:56:09Z","content_type":null,"content_length":"101055","record_id":"<urn:uuid:0ca6e3e5-2ca4-42dc-9e7e-7faad8949ede>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vladimir Gershonovich Drinfeld
Born: 14 February 1954 in Kharkov, Ukraine
Vladimir Drinfeld was born into a Jewish mathematical family. Gershon Ikhelevich Drinfeld (29 February 1908-18 August 2000) was educated at Kiev University and was head of the Mathematics Department
at Kharkov University from 1944 to 1962. By 1950 he was deputy director of the Kharkov Institute of Mathematics but it was closed in that year on the orders of Stalin. Gershon Drinfeld also played a
major role in the Kharkov Mathematical Society. He worked on differential geometry, particularly on measure and integration.
Vladimir Drinfeld's mathematical career started early ([11] or [12]):-
Drinfeld has written his first published paper when he was a schoolboy. He proved there a nice result in the style of Hardy's classic treatise "Inequalities" and solved a problem to which R A
Rankin devoted two notes. This paper still makes interesting reading.
In 1969, at the age of fifteen, he represented the Soviet Union at the International Mathematical Olympiad in Bucharest, Romania, and was awarded a gold medal after obtaining full marks - an
incredible achievement. He studied at Moscow State University from 1969 until 1974. He graduated in 1974 and remained at Moscow University to undertake research under Yuri Ivanovich Manin's
supervision. Ginzburg writes [5]:-
[Drinfeld's] vision of mathematics was, to a great extent, influenced by Yu I Manin, his advisor, and by the Algebraic Geometry Seminar (Manin's Seminar) that functioned with regularity at Moscow
State University for about two decades.
Drinfeld completed his postgraduate studies in 1977 and he defended his "candidate" thesis in 1978 at Moscow University. The "candidate" thesis is the Russian equivalent of the British or American
Ph.D. However, despite being extraordinarily talented, it was difficult for Drinfeld to obtain a position in Moscow. There were basically two reasons for this. Certainly his Jewish origins meant that
he suffered from anti-Semitism, but officially the Soviet Union operated a policy that people had their addresses in their passports and were only allowed to work in the town which appeared in this
address. Since the address which appeared in Drinfeld's passport was not Moscow, he could not get a job there. He went to Ufa, an industrial centre in the Ural mountains, where he obtained a position
teaching mathematics at Bashkir University, one of several universities in the city. In 1981 he moved to Kharkov and lived with his parents. He obtained a position working at the B I Verkin Physical
Engineering Institute of Low Temperatures of the National, part of the Ukrainian Academy of Sciences, in Kharkov.
Drinfeld gave an important lecture at the International Congress of Mathematicians in Berkeley in 1986. Entitled Quantum groups, the talk reviewed the results obtained by Drinfeld and M Jimbo on Hopf
algebras (quantum groups). He discussed the concepts of quantum groups and quantization, and also talked about Poisson groups, Lie bi-algebras and the classical Yang-Baxter equation. In 1988 Drinfeld
defended his "doctor" thesis at Steklov Institute, Moscow. The "doctor" thesis is the Russian equivalent of the German habilitation. On 21 August 1990 Drinfeld was awarded a Fields Medal at the
International Congress of Mathematicians in Kyoto, Japan:-
... for his work on quantum groups and for his work in number theory.
A Jaffe and B Mazur write in [2] about Drinfeld's work which led to the award of the Fields Medal:-
Drinfeld's interests can only be described as "broad". Not only do they span work in algebraic geometry and number theory, but his most recent ideas have taken a strikingly different direction:
he has been doing significant work on mathematical questions motivated by physics, including the relatively new theory of quantum groups.
Drinfeld defies any easy classification ... His breakthroughs have the magic that one would expect of a revolutionary mathematical discovery: they have seemingly inexhaustible consequences. On
the other hand, they seem deeply personal pieces of mathematics: "only Drinfeld could have thought of them!" But contradictorily they seem transparently natural; once understood, "everyone should
have thought of them!"
Manin ends his address to the International Congress of Mathematicians in Kyoto, Japan (which he could not give in person but was read by Michio Jimbo) with these words:-
I hope that I conveyed to you some sense of broadness, conceptual richness, technical strength and beauty of Drinfeld's work for which we are now honouring him with the Fields Medal. For me, it
was a pleasure and a privilege to observe at a close distance the rapid development of this brilliant mind which taught me so much.
Drinfeld's main achievements are his proof of the Langlands conjecture for GL(2) over a functional field; and his work in quantum group theory. Although he only proved a special case of the Langlands
conjecture, Drinfeld has introduced important new ideas in his solution and made a real breakthrough. He introduced the idea of an elliptic module in his proof and this notion is leading to a whole
new topic within number theory. The interactions between mathematics and mathematical physics studied by Atiyah led to the introduction of instantons - solutions, that is, of a certain nonlinear
system of partial differential equations, the self-dual Yang-Mills equations, which were originally introduced by physicists in the context of quantum field theory. Drinfeld and Manin worked on the
construction of instantons using ideas from algebraic geometry.
Chari and Thakur write [3]:-
Drinfeld introduced Drinfeld modules and solved a substantial part of the Langlands programme when he was just 20 years old and completed the GL(2) case when he was 24. Drinfeld's work on
Langlands conjectures, quantum groups, p-adic uniformizations etc. illustrate his mastery over powerful and involved techniques. On the other hand, his one page proof (jointly with Vladut) giving
a sharp asymptotic upper bound for the number of points of a curve defined over a finite field of order p^(2^n), uses only high-school algebra applied nicely to well-known results. He also gave a
one page proof of the fact that any rotation invariant finitely additive measure on the two or three dimensional sphere is proportional to Lebesgue measure by using a clever combination of known
In 1992 Drinfeld was elected a member of the Ukrainian Academy of Sciences. He continued to live in Kharkov until 1998 when he emigrated to the United States. In December 1998, he was appointed to
the University of Chicago. On Drinfeld's appointment to Chicago, Manin said [10]:-
Drinfeld's work deeply influenced the world of mathematics of the last two decades, Several research monographs, Seminar Notes and hundreds of papers were dedicated to the two new chapters of
mathematics created by him - the so-called Drinfeld modules and quantum groups.
Alexander A Beilinson, also a student of Manin's, was appointed to the University of Chicago in 1998, just a short time before Drinfeld. Beilinson and Drinfeld had known each other for many years and
had already collaborated on two papers before becoming colleagues in Chicago: Affine Kac-Moody algebras and polydifferentials (1994) and Quantization of Hitchin's fibration and Langlands' program
(1996). Their collaboration in Chicago led to the publication of a jointly authored book Chiral algebras published by the American Mathematical Society in 2004. Francisco J Plaza Martin writes in a review:
This book presents a comprehensive approach to the theory of chiral algebras from the point of view of algebraic geometry. Without a doubt, it will become a standard reference on the subject. ...
Chiral algebras arose in mathematical physics in the study of conformal field theory. On the mathematical side, the local theory of chiral algebras overlaps the theory of vertex algebras [R E
Borcherds], which are normally studied with representation theory techniques. In these two approaches the "operator product expansion" formalism plays an essential role. As the authors say, their
motivation for studying chiral algebras was the understanding of geometric automorphic forms in the D-module setting as well as the description of a spectral decomposition of the category of
representations of an affine Kac-Moody algebra.
One of Drinfeld's most recent articles is Infinite-dimensional vector bundles in algebraic geometry: an introduction. Drinfeld writes in the introduction to the paper:-
The goal of this work is to show that there is a reasonable algebro-geometric notion of vector bundle with infinite-dimensional locally linearly compact fibers and that these objects appear 'in
nature'. Our approach is based on some results and ideas discovered in algebra during the period 1958-1972 by H Bass, L Gruson, I Kaplansky, M Karoubi, and M Raynaud.
Drinfeld was named Harry Pratt Judson Distinguished Service Professor at the University of Chicago on 1 March 2001. In 2008 he was elected to the American Academy of Arts and Sciences.
Article by: J J O'Connor and E F Robertson
Asymptotic solutions of linear ordinary differential equations at an irregular singularity of rank 1.
(English) Zbl 0896.34049
New existence theorems are constructed for asymptotic solutions of linear ordinary differential equations of arbitrary order in the neighborhood of an irregular singularity of rank 1 with distinct
characteristic values.
Let (1) $L(y)=0$ be an equation of the above type and $\Lambda$ be the set of characteristic values of (1). The author builds on the base of $\Lambda$ a set of canonical sectors $S=\{S_\lambda : \lambda\in\Lambda\}$ and proves the following main theorem. For every $\lambda\in\Lambda$ there exists a unique solution $w_\lambda$ of (1) such that
$$w_\lambda \sim e^{\lambda z}\, z^{\mu_\lambda} \sum_{s=0}^{\infty} \frac{a_{s\lambda}}{z^{s}}$$
as $z\to\infty$, uniformly in any closed sector properly interior to $S_\lambda$. Furthermore, this asymptotic expansion can be differentiated $n-1$ times under the same circumstances, and the $n$ solutions $w_\lambda$ are linearly independent.
34E05 Asymptotic expansions (ODE)
34A30 Linear ODE and systems, general | {"url":"http://zbmath.org/?q=an:0896.34049","timestamp":"2014-04-19T15:00:18Z","content_type":null,"content_length":"22941","record_id":"<urn:uuid:4b4444c7-3f5b-40dd-b63d-2633c68dc85f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics - Calculus (464 results)
Mathematical Handbook: Containing the Chief Formulas of Algebra, Trigonometry, Circular and Hyperbolic Functions, Differential and Integral Calculus, and Analytical Geometry, Together With Mathematical Tables
The uses which this book may serve hardly need to be pointed out. Some years ago the writer composed the part relating to Trigonometry and used it as a syllabus for instruction in his college
classes. It served its purpose and soon went out of print. But a stray copy of it found its way to the table of a well-known civil engineer, to whom it proved constantly useful, and by whom it was
often referred to as "his memory." This engineer has suggested a revision and republication of the original book with important enlargements. Accordingly there have been added Sections on Algebra,
the Differential and Integral Calculus, and Analytic Geometry. The subject of Hyperbolic Functions, which now receives much more attention than formerly, has been more fully treated. Tables have been
added, which include not only those universally used, but also some - like those of the Hyperbolic Functions, of the Natural Logarithms of Numbers, and that of the Velocity of Falling Bodies (v = √(2gh)) - that have been hitherto not readily accessible.
Of course no efforts have been spared to secure correctness in the printing of the formulas and the tables; but persons experienced in
such work need not be reminded of the improbability that the first edition of a book of this kind should be absolutely free from error. The writer and the publishers can only add, that notice of any
errors that may be detected will be thankfully received, and the necessary corrections will be promptly made and published. Also, suggestions of desirable additions to the book and of other
improvements are invited with a view to their use in possible future editions.
One of the purposes of the elementary working courses in mathematics of the freshman and sophomore years is to exhibit the bond that unites the experimental sciences. "The bond of union among the
physical sciences is the mathematical spirit and the mathematical method which pervade them." For this reason, the applications of mathematics, not to artificial problems, but to the more elementary
of the classical problems of natural science, find a place in every working course in mathematics. This presents probably the most difficult task of the text-book writer,- namely, to make clear to
the student that mathematics has to do with the laws of actual phenomena, without at the same time undertaking to teach technology, or attempting to build upon ideas which the student does not
possess. It is easy enough to give examples of the application of the processes of mathematics to scientific problems; it is more difficult to exhibit by these problems, how, in mathematics, the very
language and methods of thought fit naturally into the expression and derivation of scientific laws and of natural concepts.
It is in this spirit that the authors have endeavored to develop
the fundamental processes of the calculus which play so important a part in the physical sciences; namely, to place the emphasis upon the mode of thought in the hope that, even though the student may
forget the details of the subject, he will continue to apply these fundamental modes of thinking in his later scientific or technical career. It is with this purpose in mind that problems in
geometry, physics, and mechanics have been freely used. The problems chosen will be readily comprehended by students ordinarily taking the first course in the calculus.
A second purpose in an
elementary working course in mathematics is to secure facility in using the rules of operation which must be applied in calculations.
Integral Calculus for Beginners: With an Introduction to the Study of Differential Equations
The present volume is intended to form a sound introduction to a study of the Integral Calculus, suitable for a student beginning the subject. Like its companion, the Differential Calculus for
Beginners, it does not therefore aim at completeness, but rather at the omission of all portions of the subject which are usually regarded as best left for a later reading.
It will be found,
however, that the ordinary processes of integration are fully treated, as also the principal methods of Rectification and Quadrature, and the calculation of the volumes and surfaces of solids of
revolution. Some indication is also afforded to the student of other useful applications of the Integral Calculus, such as the general method to be employed in obtaining the position of a Centroid,
or the value of a Moment of Inertia.
Introductory Course in Differential Equations: For Students in Classical and Engineering Colleges
The aim of this work is to give a brief exposition of some of the devices employed in solving differential equations. The book presupposes only a knowledge of the fundamental formulæ of integration,
and may be described as a chapter supplementary to the elementary works on the integral calculus.
The needs of two classes of students, with whom the author has been brought into contact in
the course of his experience as a teacher, have determined the character of the work. For the sake of students of physics and engineering who wish to use the subject as a tool, and have little time
to devote to general theory, the theoretical explanations have been made as brief as is consistent with clearness and sound reasoning, and examples have been worked in full detail in almost every
case. Practical applications have also been constantly kept in mind, and two special chapters dealing with geometrical and physical problems have been introduced.<br><br>The other class for which the
book is intended is that of students in the general courses in Arts and Science, who have more time to gratify any interest they may feel in this subject, and some of whom may be intending to proceed
to the study of the higher mathematics.
A History of Mathematics
Florian Cajori's A History of Mathematics is a seminal work in American mathematics. The book is a summary of the study of mathematics from antiquity through World War I, exploring the evolution of
advanced mathematics. As the first history of mathematics published in the United States, it has an important place in the libraries of scholars and universities. A History of Mathematics is a
history of mathematics, mathematicians, equations and theories; it is not a textbook, and the early chapters do not demand a thorough understanding of mathematical concepts. The book starts with the
use of mathematics in antiquity, including contributions by the Babylonians, Egyptians, Greeks and Romans. The sections on the Greek schools of thought are very readable for anyone who wants to know
more about Greek arithmetic and geometry. Cajori explains the advances by Indians and Arabs during the Middle Ages, explaining how those regions were the custodians of mathematics while Europe was in
the intellectual dark ages. Many interesting mathematicians and their discoveries and theories are discussed, with the text becoming more technical as it moves through Modern Europe, which
encompasses discussion of the Renaissance, Descartes, Newton, Euler, LaGrange and Laplace. The final section of the book covers developments in the late 19th and early 20th Centuries. Cajori
describes the state of synthetic geometry, analytic geometry, algebra, analytics and applied mathematics. Readers who are not mathematicians can learn much from this book, but the advanced chapters
may be easier to understand if one has background in the subject matter. Readers will want to have A History of Mathematics on their bookshelves.
Differential Calculus for Beginners
The present small volume is intended to form a sound introduction to a study of the Differential Calculus suitable for the beginner. It does not therefore aim at completeness, but rather at the
omission of all portions which are usually considered best left for a later reading. At the same time it has been constructed to include those parts of the subject prescribed in Schedule I. of the
Regulations for the Mathematical Tripos Examination for the reading of students for Mathematical Honours in the University of Cambridge.<br><br>Particular attention has been given to the examples
which are freely interspersed throughout the text. For the most part they are of the simplest kind, requiring but little analytical skill. Yet it is hoped they will prove sufficient to give practice
in the processes they are intended to illustrate.
A Short Account of the History of Mathematics
The subject-matter of this book is a historical summary of the development of mathematics, illustrated by the lives and discoveries of those to whom the progress of the science is mainly due. It may
serve as an introduction to more elaborate works on the subject, but primarily it is intended to give a short and popular account of those leading facts in the history of mathematics which many who
are unwilling, or have not the time, to study it systematically may yet desire to know.<br><br>The first edition was substantially a transcript of some lectures which I delivered in the year 1888
with the object of giving a sketch of the history, previous to the nineteenth century, that should be intelligible to any one acquainted with the elements of mathematics. In the second edition,
issued in 1893, I rearranged parts of it, and introduced a good deal of additional matter. The third edition, issued in 1901, was revised, but not materially altered; and the present edition is
practically a reprint of this, save for a few small corrections and additions.
Lectures on Cauchy's Problem in Linear Partial Differential Equations
In the year 1883 a legacy of eighty thousand dollars was left to the President and Fellows of Yale College in the city of New Haven, to be held in trust, as a gift from her children, in memory of
their beloved and honored mother, Mrs. Hepsa Ely Silliman.<br><br>On this foundation Yale College was requested and directed to establish an annual course of lectures designed to illustrate the
presence and providence, the wisdom and goodness of God, as manifested in the natural and moral world. These were to be designated as the Mrs. Hepsa Ely Silliman Memorial Lectures. It was the belief
of the testator that any orderly presentation of the facts of nature or history contributed to the end of this foundation more effectively than any attempt to emphasize the elements of doctrine or of
creed; and he therefore provided that lectures on dogmatic or polemical theology should be excluded from the scope of this foundation, and that the subjects should be selected rather from the domains
of natural science and history, giving special prominence to astronomy, chemistry, geology and anatomy.<br><br>It was further directed that each annual course should be made the basis of a volume to
form part of a series constituting a memorial to Mrs. Silliman. The memorial fund came into the possession of the Corporation of Yale University in the year 1901: and the present volume constitutes
the fifteenth of the series of memorial lectures.
Vector Calculus: With Applications to Physics
This volume embodies the lectures given on the subject to graduate students over a period of four repetitions. The point of view is the result of many years of consideration of the whole field. The
author has examined the various methods that go under the name of Vector, and finds that for all purposes of the physicist and for most of those of the geometer, the use of quaternions is by far the
simplest in theory and in practice. The various points of view are mentioned in the introduction, and it is hoped that the essential differences are brought out. The tables of comparative notation
scattered through the text will assist in following the other methods.<br><br>The place of vector work according to the author is in the general field of associative algebra, and every method so far
proposed can be easily shown to be an imperfect form of associative algebra. From this standpoint the various discussions as to the fundamental principles may be understood. As far as the mere
notations go, there is not much difference save in the actual characters employed. These have assumed a somewhat national character. It is unfortunate that so many exist.<br><br>The attempt in this
book has been to give a text to the mathematical student on the one hand, in which every physical term beyond mere elementary terms is carefully defined. On the other hand for the physical student
there will be found a large collection of examples and exercises which will show him the utility of the mathematical methods.
Differential and Integral Calculus
This book presents a first course in the calculus substantially as the author has taught it at the University of Michigan for a number of years. The following points may be mentioned as more or less
prominent features of the book.<br><br>In the treatment of each topic, the text is intended to contain a precise statement of the fundamental principle involved, and to insure the student's clear
understanding of this principle, without distracting his attention by the discussion of a multitude of details. The accompanying exercises are intended to present the problem in hand in a great
variety of forms and guises, and to train the student in adapting the general methods of the text to fit these various forms. The constant aim is to prevent the work from degenerating into mere
mechanical routine, as it so often tends to do. Wherever possible, except in the purely formal parts of the course, the summarizing of the theory into rules or formulas which can be applied blindly
has been avoided. For instance, in the chapter on geometric applications of the definite integral, stress is laid on the fact that the basic formulas are those of elementary geometry, and special
formulas involving a coordinate system are omitted.<br><br>Where the passage from theory to practice would be too difficult for the average student, worked examples are inserted.
A Text-Book of Differential Calculus: With Numerous Worked Out Examples
In this work it has been my aim to lay before students a strictly rigorous and, at the same time, simple exposition of the Differential Calculus and its chief applications. The present volume is
intended for beginners and is so designed as to meet the requirements of Part I. of the Cambridge Mathematical Tripos Examination, and of the Examinations for the B.A. and B.Sc. degrees of Indian
Universities.<br><br>The chief characteristics of the present work may be indicated as follows: - (1) The fundamental principles of the Differential Calculus have been based on a purely arithmetical
foundation. Thus, the various theorems have been carefully enunciated and their proofs have been made quite independent of geometrical intuition. In this connection, I may specially mention the
chapters on Rolle's Theorem and Taylor's Theorem, Maxima and Minima, and Indeterminate Forms. (2) Almost every article is followed by worked out examples, specially suited for illustrating the
article. There are also numerous exercises in every chapter. (3) A special chapter deals with curve-tracing and the important properties of the best-known curves. (4) The order in which the chapters
are arranged is intended to enable the beginner to study the simple geometrical applications of the Differential Calculus immediately after he has learnt differentiation.
Calculus, With Applications: An Introduction to the Mathematical Treatment of Science
This little book has been written for two classes of persons: those who wish, for purposes of culture, to know, in as simple and direct a way as possible, what the calculus is and what it is for; and
students primarily engaged in work in chemistry, astronomy, economics, etc., who have not time or inclination to take long courses in mathematics, yet who would like to know how to use a tool as fine
as the calculus. The pure mathematician will note the omission of various subjects that are important from his point of view; but for him there are admirable and lengthy treatises on pure calculus.
Also the student whose experience has led him to conceive of mathematical study as the doing of interminable lists of exercises, will be surprised and, possibly, disappointed. This book is a reading
lesson in applied mathematics. Fancy exercises have been avoided. The examples are, for the most part, real problems from mechanics and astronomy. This plan has been pursued in the conviction that
such problems are just as good as make-believe ones for purposes of discipline, and a good deal better for purposes of knowledge.
Differential Equations
Copyright, 1917, by Earle Raymond Hedrick and Otto Dunkel. Copyright, 1945, by Helen B. Hedrick and Otto Dunkel. All Rights Reserved. This new Dover edition, first published in 1959, is an unabridged and unaltered republication of the Hedrick-Dunkel translation of A Course in Mathematical Analysis, Volume II, Part Two, Differential Equations. This book is republished by permission of Ginn and Company, the original publisher of this text. Manufactured in the United States of America. Dover Publications, Inc., 180 Varick Street, New York 14, N.Y.
Differential and Integral Calculus: With Examples and Applications
In the original work, the author endeavored to prepare a textbook on the Calculus, based on the method of limits, that should be within the capacity of students of average mathematical ability, and yet contain all that is essential to a working knowledge of the subject. In the revision of the book the same object has been kept in view. Most of the text has been rewritten, the demonstrations have been carefully revised, and for the most part new examples have been substituted for the old. There has been some rearrangement of subjects in a more natural order. In the Differential Calculus, illustrations of the derivative have been introduced in Chapter II., and applications of differentiation will be found also among the examples in the chapter immediately following. Chapter VII., on Series, is entirely new. In the Integral Calculus, immediately after the integration of standard forms, Chapter XXI. has been added, containing simple applications of integration. In both the Differential and Integral Calculus, examples illustrating applications to Mechanics and Physics will be found, especially in Chapter X. of the Differential Calculus, on Maxima and Minima, and in Chapter XXXII. of the Integral Calculus. The latter chapter has been prepared by my colleague, Assistant Professor N. R. George, Jr. The author also acknowledges his special obligation to his colleagues, Professor H. W. Tyler and Professor F. S. Woods, for important suggestions and criticisms. January 1,
Advanced Calculus: A Text Upon Select Parts of Differential Calculus, Differential
The first five show distinctly that the independent variable is x, whereas the last three do not explicitly indicate the variable and should not be used unless there is no chance of a misunderstanding. 2. The fundamental formulas of differential calculus are derived directly from the application of the definition (2) or (3) and from a few fundamental propositions in limits. First may be mentioned
(4) D_x y = D_u y · D_x u,   (5) D_y x = 1/(D_x y),
(6) D(u ± v) = Du ± Dv,   D(uv) = uDv + vDu,   D(u/v) = (vDu − uDv)/v²,
(7) D(x^n) = n x^(n−1).
It may be recalled that (4), which is the rule for differentiating a function of a function, follows from the application of the theorem that the limit of a product is the product of the limits to the fractional identity Δy/Δx = (Δy/Δu)(Δu/Δx); whence lim Δy/Δx = lim (Δy/Δu · Δu/Δx) = lim Δy/Δu · lim Δu/Δx, which is equivalent to (4). Similarly, if y = f(x) and if x, as the inverse function of y, be written x = f⁻¹(y) from analogy with y = sin x and x = sin⁻¹ y, the relation (5) follows from the fact that Δx/Δy and Δy/Δx are reciprocals. The next three result from the immediate application of the theorems concerning limits of sums, products, and quotients (§21). The rule for differentiating a power is derived, in case n is integral, by the application of the binomial theorem, and the limit when Δx → 0 is clearly n x^(n−1). The result may be extended to rational values of the index n by writing n = p/q, y = x^(p/q), y^q = x^p, and by differentiating both sides of the equation and reducing. To prove that (7) still holds when n is irrational, it would be necessary to have a workable definition of irrational numbers and to develop the properties of such numbers in greater detail than seems wise at this point. The formula is therefore assumed in accordance with the principle of permanence of form (§178), just as formulas like a^m · a^n = a^(m+n) of the theory of exponents, which may readily be proved for rational bases and exponents, are assumed without proof to hold also for irrational bases and exponents. See, however, §§18-25 and the exercises thereunder. It is frequently better to regard the quotient as the product u·v⁻¹ and apply (6). For when Δx → 0, then Δy → 0, or Δy/Δx could not approach a limit.
Elements of the Differential and Integral Calculus
Entered according to act of Congress, in the year 1853, by William Smyth, A.M., in the Clerk's Office of the District Court of the District of Maine.
The Messenger of Mathematics
In announcing the commencement of a new series, the Editors desire to explain the modifications which will distinguish it from the former series. The Messenger of Mathematics was projected about ten
years ago, chiefly with the view of encouraging original research in the three Universities, among junior graduates and others. It was thought that through the Messenger many valuable papers might be
made public which their authors would not have deemed of sufficient interest to communicate to Scientific Societies. An examination of the Five Volumes already published will make it evident that the
Editors have throughout endeavoured to keep their original purpose steadily in view. While feeling, however, that they have every reason to be satisfied with the success achieved by the Messenger
regarded as a stimulus to original research in junior students, they have also great satisfaction in acknowledging that no inconsiderable proportion of its contents have been supplied by writers of
established reputation, who rank amongst the foremost mathematicians of the age; and it is this fact in particular which now induces them to appeal directly to the mathematical world at large, and to
remove from their title-page any words which might be supposed to limit the sphere of usefulness of the Messenger.
Applied Calculus: An Introductory Textbook
This book is intended to provide an introductory course in the Calculus for the use of students of natural and applied science whose knowledge of mathematics is slight. All the mathematics that the
student is assumed to know is algebra up to quadratic equations; elementary trigonometry up to the formulae of sines, cosines, and tangents of compound angles; the elements of geometry; and the
method of graphs.<br><br>Infinite series are essentially difficult and unconvincing unless treated rigorously - as the old conundrum of Achilles and the tortoise shows - and there is no need to use
them in the elementary parts of the subject. They have therefore been avoided altogether.<br><br>Definite problems, dealing with actual things, precede the analytical treatment, which I have tried to
make simple and convincing; and I hope any reader who pursues the subject further in the standard works will find that he has only to extend and qualify the proofs, not to unlearn them.<br><br>I have
introduced and used limits in the first chapter before defining them, for the same reason that I should show a child a herring and tell him about its habits of life before describing it to him as one
of two distinct but closely-allied species of malacopterygian fishes of the genus Clupea.<br><br>The pictures of celebrated mathematicians and scientists are intended to arouse some human interest in
mathematical science and the history of its progress. Some of the founders of the science lived more than ordinarily interesting lives, and if the mathematician ignores the human side of things, he
can hardly expect humanity not to ignore him.<br><br>Perhaps the title of the book needs a word of explanation. In applied mechanics it is usual to discuss the theoretical principles of mechanics as
well as their applications. This line has been followed here, the treatment of practical problems being preceded by a fairly full discussion of the necessary theory.
Syllabus of Mathematics: A Symposium Compiled By the Committee
To the Society for the Promotion of Engineering Education: The committee was appointed at a joint meeting of mathematicians and engineers held in Chicago, December 30-31, 1907, under the auspices of the Chicago Section of the American Mathematical Society, and Sections A and D of the American Association for the Advancement of Science, and on the suggestion of officers of the Society for the Promotion of Engineering Education who were there present, the committee was instructed to report to this Society. The membership of the committee is as follows: Alger, Philip R.,† professor of mathematics, U.S. Navy, Annapolis, Md. Campbell, Donald F., professor of mathematics, Armour Institute of Technology, Chicago, Ill. Engler, Edmund A., president of the Worcester Polytechnic Institute, Worcester, Mass. Haskins, Charles N., assistant professor of mathematics, Dartmouth College, Hanover, N.H. Howe, Charles S., president, Case School of Applied Science, Cleveland, Ohio. Kuichling, Emil, consulting civil engineer, New York City. Magruder, William T., professor of mechanical engineering, Ohio State University, Columbus, Ohio. Modjeski, Ralph, civil engineer, Chicago, Ill. Osgood, William F., professor of mathematics, Harvard University, Cambridge, Mass. Slichter, Charles S., consulting engineer of the U.S. Reclamation Service, professor of applied mathematics, University of Wisconsin, Madison, Wis. For an account of the Chicago meeting, see Science for 1908 (July 12, 24, and 31; August 7 and 28; and September 4). † Deceased.
An Elementary Treatise on the Differential and Integral Calculus
Analytical science, after having been long neglected in these countries as an elementary department of education, has, within a few years, been cultivated by the young aspirants for mathematical
celebrity with an ardour; and prosecuted with a rapidity and success, which its warmest admirers could scarcely have hoped for. This change would probably have taken place at an earlier period, but
for the obstacle opposed to it by the want of treatises on the subject, in our language, of a sufficiently elementary nature. The restless activity of the human mind in the pursuit of knowledge was
not long to be checked by so trifling an impediment, and our students soon found in foreign works that which our own professors had failed to supply; and through the medium of these treatises,
analytical science began, and has continued, to be cultivated at the universities with singular success.
The Theory of Functions of a Real Variable and the Theory of Fourier's Series, by E. W. Hobson
THE theory of functions of a real variable, as developed during the last few decades, is a body of doctrine resting, first upon a definite conception of the arithmetic continuum which forms the field of the variable, and which includes a precise arithmetic theory of the nature of a limit, and secondly, upon a definite conception of the nature of the functional relation. The procedure of the theory consists largely in the development, based upon precise definitions, of a classification of functions, according as they possess, or do not possess, certain peculiarities, such as continuity, differentiability, &c., throughout the domain of the variable, or at points forming a selected set contained in that domain. The detailed consequences of the presence, or of the absence, of such peculiarities are then traced out, and are applied for the purpose of obtaining conditions for the validity of the processes of Mathematical Analysis. These processes, which have been long employed in the so-called Infinitesimal Calculus, consist essentially in the ascertainment of the existence, and in the evaluation, of limits, and are subject, in every case, to restrictive assumptions which are necessary conditions of their validity. The object to be attained by the theory of functions of a real variable consists then largely in the precise formulation of necessary and sufficient conditions for the validity of the limiting processes of Analysis. A necessary requisite in such formulation is a language descriptive of particular aggregates of values of the variable, in relation to which functions possess definite peculiarities. This language is provided by the Theory of Sets of Points, also known, in its more general aspect, as the Theory of Aggregates, which contains an analysis of the peculiarities of structure and of distribution in the field of the variable which such sets of points may possess. This theory, which had its origin in the exigencies of a critical theory of functions, and has since received wide applications, not only in Pure Analysis, but also in Geometry, must be regarded as an integral part of the subject. A most important part of the theory of functions is the theory of the representation of functions in a prescribed manner, especially by means of series or sequences of functions of prescribed types.
A Treatise on the Theory of Bessel Functions
This book has been designed with two objects in view. The first is the development of applications of the fundamental processes of the theory of functions of complex variables. For this purpose
Bessel functions are admirably adapted; while they offer at the same time a rather wider scope for the application of parts of the theory of functions of a real variable than is provided by
trigonometrical functions in the theory of Fourier series.<br><br>The second object is the compilation of a collection of results which would be of value to the increasing number of Mathematicians
and Physicists who encounter Bessel functions in the course of their researches. The existence of such a collection seems to be demanded by the greater abstruseness of properties of Bessel functions
(especially of functions of large order) which have been required in recent years in various problems of Mathematical Physics.<br><br>While my endeavour has been to give an account of the theory of
Bessel functions which a Pure Mathematician would regard as fairly complete, I have consequently also endeavoured to include all formulae, whether general or special, which, although without
theoretical interest, are likely to be required in practical applications; and such results are given, so far as possible, in a form appropriate for these purposes. The breadth of these aims,
combined with the necessity for keeping the size of the book within bounds, has made it necessary to be as concise as is compatible with intelligibility.<br><br>Since the book is, for the most part,
a development of the theory of functions as expounded in the Course of Modern Analysis by Professor Whittaker and myself, it has been convenient to regard that treatise as a standard work of
reference for general theorems, rather than to refer the reader to original sources.<br><br>It is desirable to draw attention here to the function which I have regarded as the canonical function of
the second kind, namely the function which was defined by Weber and used subsequently by Schlafli, by Graf and Gabler and by Nielsen. For historical and sentimental reasons it would have been
pleasing to have felt justified in using Hankel's function of the second kind; but three considerations prevented this. The first is the necessity for standardizing the function of the second kind;
and, in my opinion, the authority of the group of mathematicians who use Weber's function has greater weight than the authority of the mathematicians who use any other one function of the second
An Introduction to the Theory of Infinite Series
An Introduction to the Theory of Infinite Series was written by Bromwich in 1908. This is a 528 page book, containing 129230 words and 59 pictures. Search Inside is enabled for this title.
Vector AnalysisA Text-Book for the Use of Students of Mathematics and Physics
When I undertook to adapt the lectures of Professor Gibbs on Vector Analysis for publication in the Yale Bicentennial Series, Professor Gibbs himself was already so fully engaged upon his work to
appear in the same series, Elementary Principles in Statistical Mechanics, that it was understood no material assistance in the composition of this book could be expected from him. For this reason he
wished me to feel entirely free to use my own discretion alike in the selection of the topics to be treated and in the mode of treatment. It has been my endeavor to use the freedom thus granted only
in so far as was necessary for presenting his method in text-book form.<br><br>By far the greater part of the material used in the following pages has been taken from the course of lectures on Vector
Analysis delivered annually at the University by Professor Gibbs. Some use, however, has been made of the chapters on Vector Analysis in Mr. Oliver Heaviside's Electromagnetic Theory (Electrician
Series, 1893) and in Professor Föppl's lectures on Die Maxwell'sche Theorie der Electricitāt (Teubner, 1894). My previous study of Quaternions has also been of great assistance.<br><br>The material
thus obtained has been arranged in the way which seems best suited to easy mastery of the subject. Those Arts, which it seemed best to incorporate in the text but which for various reasons may well
be omitted at the first reading have been marked with an asterisk (*). Numerous illustrative examples have been drawn from geometry, mechanics, and physics. Indeed, a large part of the text has to do
with applications of the method.
Mathematics for Engineers
The Directly-Useful Technical Series requires a few words by way of introduction. Technical books of the past have arranged themselves largely under two sections: the Theoretical and the Practical.
Theoretical books have been written more for the training of college students than for the supply of information to men in practice, and have been greatly filled with problems of an academic
character. Practical books have often sought the other extreme, omitting the scientific basis upon which all good practice is built, whether discernible or not. The present series is intended to
occupy a midway position. The information, the problems and the exercises are to be of a directly-useful character, but must at the same time be wedded to that proper amount of scientific explanation
which alone will satisfy the inquiring mind. We shall thus appeal to all technical people throughout the land, either students or those in actual practice.
▲ Back to Top | {"url":"http://www.forgottenbooks.org/Mathematics/Calculus","timestamp":"2014-04-17T14:36:06Z","content_type":null,"content_length":"95863","record_id":"<urn:uuid:8ddf2abf-22dd-4b8e-a501-288a8fe536d2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hobart, IN Math Tutor
Find a Hobart, IN Math Tutor
I have several degrees in Education and Business. I have 4 years' experience serving as a Coordinator of Assessment and Curriculum. I am dedicated to expanding the student learning experience,
while contributing my knowledge of the Public School System to develop a superior curriculum and student development program.
15 Subjects: including geometry, study skills, ACT Math, special needs
...Most importantly, I build trusting and caring relationships with each one of my students so that they will be willing to work hard for me and with me. I have tutored just about every subject
under the sun, and believe that building confidence is the key to success. I hope to have the opportunit...
15 Subjects: including algebra 1, prealgebra, English, reading
...I have tutored many people over the course of 7-8 years. I started tutoring back in my early years of high school in areas such as math, science, and in study skills of other areas such as
history, English, and other miscellaneous courses. There have been many tutoring opportunities handed to me because they know I am fit for the job.
25 Subjects: including geometry, chemistry, calculus, ACT Math
...I am currently teaching Decision Science, which is an applied Linear Algebra class in the Business Department. I have an MBA in Marketing from Keller Graduate School, plus over thirty years of
marketing experience as president of a manufacturers' representative firm, and am a current member of th...
11 Subjects: including statistics, probability, algebra 1, algebra 2
Good day, I can assist with just about any educational service need you may have. I offer comprehensive SAT preparation, as well as assistance with the core subjects for K-12, and some basic
college courses. I have a BA from Olivet Nazarene University.
47 Subjects: including probability, algebra 1, algebra 2, geometry
Related Hobart, IN Tutors
Hobart, IN Accounting Tutors
Hobart, IN ACT Tutors
Hobart, IN Algebra Tutors
Hobart, IN Algebra 2 Tutors
Hobart, IN Calculus Tutors
Hobart, IN Geometry Tutors
Hobart, IN Math Tutors
Hobart, IN Prealgebra Tutors
Hobart, IN Precalculus Tutors
Hobart, IN SAT Tutors
Hobart, IN SAT Math Tutors
Hobart, IN Science Tutors
Hobart, IN Statistics Tutors
Hobart, IN Trigonometry Tutors | {"url":"http://www.purplemath.com/Hobart_IN_Math_tutors.php","timestamp":"2014-04-20T14:02:00Z","content_type":null,"content_length":"23750","record_id":"<urn:uuid:1ce1f33f-f13b-4105-a191-db6d5a17a3f6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Array of 2^n
Chris Barker chrishbarker at home.net
Mon Oct 22 19:48:56 CEST 2001
Moshe wrote:
> Okay, python newbie here. I'm liking what I see, but it seems like there
> are holes in the features of python's objects. I remember programming in
> squeak (an extension of smalltalk), and you could do complex operations on
> lists, and it would apply the result to each item. I want to make a n-item
> list such that mylist[i] = 2**i. How can I generate such a list in Python?
IF you really want a list, you have been given a number of options. If,
however, you are doing this a lot, with large sequences of numbers, you
are probably doing other math operations as well, and you want a nice
fast way to do operations on sequences of numbers that has a clean
sequence. For this you want the Numeric module:
Then you can do:
from Numeric import *
myArray = 2**arange(i)
You can also do all kinds of nifty element-wise math with all the basic
operators and math functions. Here is a quick example for the root mean
square values of a rank-1 array:
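A minimal sketch of what that looks like (the array contents here are invented for illustration; sqrt, sum and arange are the Numeric versions pulled in by the import above):

from Numeric import *

a = arange(10.)                   # a rank-1 array of floats: 0., 1., ..., 9.
rms = sqrt(sum(a**2) / len(a))    # square, sum, divide by the count, take the root
print rms

The whole thing runs element-wise on the array, with no explicit Python loop.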
Numeric has many other nifty features as well. If you are working with
numbers, it is a wonderful tool.
Christopher Barker,
ChrisHBarker at home.net --- --- ---
http://members.home.net/barkerlohmann ---@@ -----@@ -----@@
------@@@ ------@@@ ------@@@
Oil Spill Modeling ------ @ ------ @ ------ @
Water Resources Engineering ------- --------- --------
Coastal and Fluvial Hydrodynamics --------------------------------------
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2001-October/117606.html","timestamp":"2014-04-16T14:17:09Z","content_type":null,"content_length":"4303","record_id":"<urn:uuid:72b561ba-5bd7-4359-9a80-96e37a72aef5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Clifton Heights Algebra Tutor
Find a Clifton Heights Algebra Tutor
...Before that, I tutored students in high school math as well as elementary math. I especially like to make students realize that math is not the enemy and that it is very useful for daily life.
I served as an elementary school tutor during my first two years of college.
10 Subjects: including algebra 1, algebra 2, Latin, SAT math
...Around that time, I struck out on my own, and provided in home math tutoring to middle school, high school, and college students for several years until 2008. Today, even though my career has
taken me away from tutoring full-time, I continue to tutor math because it is one of my favorite subject...
12 Subjects: including algebra 2, algebra 1, calculus, writing
...I hold Bachelor of Science and Master of Science degrees. Also, I have experience instructing elementary-age children in a home-schooling environment. I consider one of the most important
elements of science to be researching the correct answer.
20 Subjects: including algebra 1, algebra 2, reading, statistics
...I am a recent graduate with a bachelor of science in biological sciences. I tutored general biology for a semester and have been involved in general biology discussions with friends. In college,
I took one year of general chemistry, one year of organic chemistry in which I received an outstanding...
28 Subjects: including algebra 1, algebra 2, chemistry, biology
...Prior to my time at Temple, I was a tutor at the learning center at York College of Pennsylvania in chemistry, organic chemistry and physics. These various opportunities have taught me how to
communicate and teach at these different levels. When it comes to chemistry, it's like teaching a new language.
6 Subjects: including algebra 2, algebra 1, chemistry, prealgebra | {"url":"http://www.purplemath.com/Clifton_Heights_Algebra_tutors.php","timestamp":"2014-04-20T13:43:34Z","content_type":null,"content_length":"24186","record_id":"<urn:uuid:5e82c7ce-33f1-48d5-9794-93b074bdc4d7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chevy Chase Village, MD Algebra 2 Tutor
Find a Chevy Chase Village, MD Algebra 2 Tutor
...I have worked with a student whose severe spelling deficit masked a gifted storyteller, building his confidence and skills. I have an extensive background in mathematics rarely found in
elementary teachers. Beyond memorizing facts, I help children to see how math applies to real life.
32 Subjects: including algebra 2, chemistry, reading, biology
...I had the pleasure of teaching him one hour per week. I have taught Algebra 1 throughout my 44-year history of teaching. I have used the parts of Algebra 2 and material in future courses to
know what is most important to cover in Algebra 1. I have taught Algebra 2 since my second year of tea...
21 Subjects: including algebra 2, calculus, statistics, geometry
...I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus,
Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Chemistry, even though they a...
11 Subjects: including algebra 2, chemistry, calculus, French
...I am a senior software engineer with over 20 years experience. In college, I had a major in Math with minor in Computer Science. I have a Masters degree in pure mathematics.
36 Subjects: including algebra 2, physics, statistics, calculus
...Math is a key skill and a worthwhile subject. The examples are countless: finding the "best buy" at the store, calculating sales, approximating values, organizing budgets, planning trips,
designing new spaces, forming and following logical arguments, or, maybe most importantly, knowing when some...
19 Subjects: including algebra 2, English, reading, physics | {"url":"http://www.purplemath.com/chevy_chase_village_md_algebra_2_tutors.php","timestamp":"2014-04-18T14:17:55Z","content_type":null,"content_length":"24631","record_id":"<urn:uuid:5340017a-3874-42f4-a761-54e504429e33>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Traditionally, cosmology was the quest for a few numbers. The first were H, q, and ... (10^79 baryons could not have inflated from something microscopic if baryon number were strictly conserved).
In the 1980s non-baryonic matter became almost a natural expectation, and Ω_b / Ω_CDM is another fundamental number.
Another specially important dimensionless number, Q, tells us how smooth the universe is. It's measured by
-- The Sachs-Wolfe fluctuations in the microwave background
-- the gravitational binding energy of clusters as a fraction of their rest mass
-- or by the square of the typical scale of mass- clustering as a fraction of the Hubble scale.
It's of course oversimplified to represent this by a single number Q, but insofar as one can, its value is pinned down to be 10^-5. (Detailed discussions introduce further numbers: the ratio of
scalar and tensor amplitudes, and quantities such as the ``tilt'', which measure the deviation from a pure scale-independent Harrison-Zeldovich spectrum.)
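Put as rough order-of-magnitude relations (the symbols here are mine, not the text's), the three measures listed above all say the same thing:

Q \;\sim\; \left(\frac{\Delta T}{T}\right)_{\rm SW} \;\sim\; \left(\frac{E_{\rm grav}}{M c^{2}}\right)_{\rm clusters} \;\sim\; \left(\frac{\lambda_{\rm clustering}}{c/H}\right)^{2} \;\sim\; 10^{-5},

where the subscript SW stands for the Sachs-Wolfe fluctuations.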
What's crucial is that Q is small. Numbers like H are only well-defined insofar as the universe possesses ``broad brush'' homogeneity - so that our observational horizon encompasses many independent
patches each big enough to be a fair sample. This wouldn't be so, and the simple Friedmann models wouldn't be useful approximations, if Q weren't much less than unity. Q's smallness is necessary if
the universe is to look homogeneous. But it isn't, strictly speaking, a sufficient condition - a luminous tracer that didn't weigh much could be correlated on much larger scales without perturbing
the metric. Simple fractal models for the luminous matter are nonetheless, as Lahav will discuss, strongly constrained by other observations such as the isotropy of the X-ray background, and of the
radio sources detected in deep surveys. | {"url":"http://ned.ipac.caltech.edu/level5/Rees/Rees2.html","timestamp":"2014-04-18T08:07:59Z","content_type":null,"content_length":"3743","record_id":"<urn:uuid:6244353d-a3af-4ba2-9673-6bce159c9bab>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
Altitude where Earth's magnetic field will no longer affect a compass
lots of searching google tonite and quite difficult to find specific answers
but in this PDF file ....
and deep into the info I found this .....
As long as we are located on the earth’s surface, r=R and the quantity (R/r)
equals 1.
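Written out (with B_0 standing for the field strength at the surface, a symbol of my own rather than the PDF's), the falloff being described is

B(r) \approx B_0 \left(\frac{R}{r}\right)^{3}, \qquad B(2R) \approx \frac{B_0}{8} \approx 0.125\, B_0,

which is where the 12.5% figure below comes from.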
But if we travel away from the earth’s surface, r increases, and the dipole field decreases. The reduction behaves like the third power of the distance; i.e., for r=2R (R ≈ 6500km) the field is just
about 0.125 (12.5%) of the field at the earth’s surface. | {"url":"http://www.physicsforums.com/showthread.php?s=ecbe8148c9fa80ba6095032b3075a37f&p=4348773","timestamp":"2014-04-19T02:20:44Z","content_type":null,"content_length":"48800","record_id":"<urn:uuid:cab1fa6a-0cdb-46db-81f8-be54ae83ba76>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of completely normal space
In topology and related branches of mathematics, normal spaces, T[4] spaces, T[5] spaces, and T[6] spaces are particularly nice kinds of topological spaces. These conditions are examples of separation axioms.
Suppose that X is a topological space. X is a normal space if and only if, given any disjoint closed sets E and F, there are neighbourhoods U of E and V of F that are also disjoint. In fancier terms,
this condition says that E and F can be separated by neighbourhoods.
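In symbols (notation mine), the condition reads:

E, F \subseteq X \text{ closed},\; E \cap F = \varnothing \;\Longrightarrow\; \exists\, U, V \text{ open with } E \subseteq U,\; F \subseteq V,\; U \cap V = \varnothing.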
X is a T[4] space, if it is both normal and Hausdorff.
X is a completely normal space or a hereditarily normal space if every subspace of X is normal. It turns out that X is completely normal if and only if every two separated sets can be separated by
X is a T[5] space, or completely T[4] space, if it is both completely normal and Hausdorff, or equivalently, if every subspace of X is T[4].
X is a perfectly normal space if every two disjoint closed sets can be precisely separated by a function. That is, given disjoint closed sets E and F, there is a continuous function f from X to the
real line R such the preimages of {0} and {1} under f are E and F respectively. The real line can be replaced with the unit interval [0,1] in this definition; the result is the same. It turns out
that X is perfectly normal if and only if X is normal and every closed set is a G[δ] set. Equivalently, X is perfectly normal if and only if every closed set is a zero set. Every perfectly normal
space is automatically completely normal.
X is a T[6] space, or perfectly T[4] space if it is both perfectly normal and Hausdorff.
Note that some mathematical literature uses different definitions for the terms "normal" and "T[4]", and the terms containing those words. The definitions that we have given here are the ones usually
used today, and the ones used in Wikipedia. However, some authors switch the meanings of the two terms in a given pair, or use both terms synonymously for only one condition, and one should take care
to find out which definitions the author is using when reading mathematical literature. (But "T[5]" always means the same as "completely T[4]", whatever that may be.) For more on this issue, see
History of the separation axioms.
Terms like normal regular space and normal Hausdorff space also turn up in the literature; these simply mean that the space both is normal and satisfies the other condition mentioned. In particular,
a normal Hausdorff space is the same thing as a T[4] space. These phrases are useful, since they are less ambiguous given the historical confusion of the terms' meanings. In this encyclopedia, we
prefer these phrases when applicable; that is, "normal Hausdorff" instead of "T[4]", or "completely normal Hausdorff" instead of "T[5]".
Fully normal spaces and fully T[4] spaces are discussed elsewhere; they are related to paracompactness.
A locally normal space is a topological space where every point has an open neighbourhood that is normal. Every normal space is locally normal, but the converse is not true. A classical example of a
completely regular locally normal space that is not normal is the Niemitzki plane.
Examples of normal spaces
Most spaces encountered in mathematical analysis are normal Hausdorff spaces, or at least normal regular spaces:
Also, all fully normal spaces are normal (even if not regular). Sierpinski space is an example of a normal space that is not regular.
Examples of non-normal spaces
An important example of a non-normal topology is given by the Zariski topology on an algebraic variety or on the spectrum of a ring, which is used in algebraic geometry.
A non-normal space of some relevance to analysis is the topological vector space of all functions from the real line R to itself, with the topology of pointwise convergence. More generally, a theorem
of A. H. Stone states that the product of uncountably many non-compact Hausdorff spaces is never normal.
The main significance of normal spaces lies in the fact that they admit "enough" continuous real-valued functions, as expressed by the following theorems valid for any normal space X.
Urysohn's lemma: If A and B are two disjoint closed subsets of X, then there exists a continuous function f from X to the real line R such that f(x) = 0 for all x in A and f(x) = 1 for all x in B. In
fact, we can take the values of f to be entirely within the unit interval [0,1]. (In fancier terms, disjoint closed sets are not only separated by neighbourhoods, but also separated by a function.)
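Stated compactly (again in notation of my own choosing):

A, B \text{ closed},\; A \cap B = \varnothing \;\Longrightarrow\; \exists\, f \in C(X, [0,1]) \text{ with } f|_{A} \equiv 0 \text{ and } f|_{B} \equiv 1.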
More generally, the Tietze extension theorem: If A is a closed subset of X and f is a continuous function from A to R, then there exists a continuous function F: X → R which extends f in the sense
that F(x) = f(x) for all x in A.
If U is a locally finite open cover of a normal space X, then there is a partition of unity precisely subordinate to U. (This shows the relationship of normal spaces to paracompactness.)
In fact, any space that satisfies any one of these conditions must be normal.
A product of normal spaces is not necessarily normal. This fact was considered surprising when it was first proved by Robert Sorgenfrey. An example of this phenomenon is the Sorgenfrey plane. Also, a
subset of a normal space need not be normal (i.e. not every normal Hausdorff space is a completely normal Hausdorff space), since every Tychonoff space is a subset of its Stone-Cech compactification
(which is normal Hausdorff). A more explicit example is the Tychonoff plank.
Relationships to other separation axioms
If a normal space is R[0], then it is in fact completely regular. Thus, anything from "normal R[0]" to "normal completely regular" is the same as what we normally call normal regular. Taking
Kolmogorov quotients, we see that all normal T[1] spaces are Tychonoff. These are what we normally call normal Hausdorff spaces.
Counterexamples to some variations on these statements can be found in the lists above. Specifically, Sierpinski space is normal but not regular, while the space of functions from R to itself is
Tychonoff but not normal.
• Higher Separation Axioms. In Encyclopedia of General Topology (2004). Elsevier Science.
• Willard, Stephen (1970). General Topology. Reading, Massachusetts: Addison-Wesley. ISBN 0-486-43479-6 (Dover edition). | {"url":"http://www.reference.com/browse/completely+normal+space","timestamp":"2014-04-19T11:40:47Z","content_type":null,"content_length":"93972","record_id":"<urn:uuid:600fb1b0-348e-44b4-8b71-6dd024843822>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
optimization problem: a physical fitness room consists of a rectangular region with a semicircle on each end. if the perimeter of the room is to be a 200 meter running track, find the dimensions that
will make the area of the rectangular region as large as possible
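One standard way to set this up (the variable names are mine): let L be the length of the rectangular part and r the radius of the semicircular ends, so the rectangle measures L by 2r and the two semicircles together form one full circle.

\text{Track: } 2L + 2\pi r = 200 \;\Rightarrow\; L = 100 - \pi r
\text{Rectangle: } A(r) = L \cdot 2r = 2r(100 - \pi r) = 200r - 2\pi r^{2}
A'(r) = 200 - 4\pi r = 0 \;\Rightarrow\; r = \frac{50}{\pi}, \qquad L = 100 - \pi \cdot \frac{50}{\pi} = 50

Under this reading of the problem the rectangle comes out 50 m long and 2r = 100/\pi \approx 31.8 m wide, giving a maximum rectangular area of 5000/\pi \approx 1592 m^2 (and A''(r) = -4\pi < 0 confirms this is a maximum).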
• one year ago
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50b05df7e4b0e906b4a5f861","timestamp":"2014-04-20T10:58:35Z","content_type":null,"content_length":"77917","record_id":"<urn:uuid:4ced5506-719d-41ff-a17b-c7bd0dbab1c8>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
relative number fields
Annegret Weng on Wed, 8 Dec 1999 17:00:31 +0100 (MEZ)
I have a problem concerning pseudo-bases. I do not understand what the
ideal list means.
For example:
I have the number field K=Q(sqrt(3)) generated under gp
by nf=nfinit(x^2-3,0). Now I define an extension L=K(sqrt(a))
generated by the element sqrt(a) where a=7+2*sqrt(3). Since Q(sqrt(3)) has
class number one there must be a relative integral basis for L over K.
And in fact I get such a basis by using the function rnfbasis.
My questions:
Can I get the relative integral basis only by using the
function rnfinit?
Can I use the information from rnf[7] which is
for this example [[Mod(1, y^2 - 3), Mod(1, y^2 - 3)*x + Mod(1, y^2 - 3)],
[[1, 0; 0, 1], [1, 1/2; 0, 1/2]]] ?
(Note that the elements [Mod(1, y^2 -3), Mod(1, y^2 - 3)*x + Mod(1, y^2 - 3)] over O_K generate an order, but
not the maximal order O_L.)
How do I apply the ideal list to the pseudo-basis? I read the explanation
in the User`s Guide but I still don't know how to do it.
Thank you in advance to everyone who will help me. | {"url":"http://pari.math.u-bordeaux.fr/archives/pari-users-9912/msg00000.html","timestamp":"2014-04-18T08:04:44Z","content_type":null,"content_length":"4031","record_id":"<urn:uuid:10b2ddc5-cfe0-42ae-843b-3122be2809cc>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
Taking Away Sets
This lesson encourages students to explore another model for subtraction, the familiar set model. Reading one of the many books that feature subtraction set the stage for this lesson in which the
students write story problems, find differences using sets, and present results in a table. In the discussion of the table, they focus on the effects of subtracting all and subtracting 0.
To set the stage for this lesson, you may wish to read another of the counting books. Appropriate books include Ten, Nine, Eight, How Many Snails?, and Mean Machine. Now ask students to create
subtraction story problems that use sets bigger than 1. For example, for 6 – 2, a student could ask, "Jose had 6 marbles and lost 2 of them. How many does he have now?"
Encourage a few volunteers to share their problems with the class. Discuss whether these problems can be solved. If they provide too little or too much information, solicit student help to revise the
story problems.
Then post a large piece of chart paper displaying a Find the Difference chart where all the students can see it. Repeat one of the student’s story problems. Demonstrate how to fill in the columns
labeled "Number of Objects" and "Number Taken Away" with the information from the story. Then explain to the students that the column labeled "Number Left" is the difference and that you will find
the difference together using real objects. Display a chain that matches the story problem. For the example above, you will make a chain of 6 links and take away 2 links. Place the two links where
they are separate from the chain but still visible to students. Ask students to tell you the difference. Enter this information in the chart. Repeat this process with the other student-generated
story problems.
Find the Difference Activity Sheet
Give students the opportunity to practice writing and solving their own subtraction story problems individually or in pairs. After each student or pair of students writes a problem, have them record
the information from their problem in a Find the Difference chart. Provide links for the students to solve their story problems.
When they are ready, call students together to share their story problems and enter their findings on the class chart. Afterward, review the terms take away and difference. Then ask the students what
would be recorded if you started with 7 links and took 7 away. Repeat with a model for 7 – 0. Prompt them to add entries to the chart. At this point, you may choose to encourage children to also
notice rows in which the first column ("Number of Objects") shows the same number.
At the end of the lesson, ask students to choose one of the rows from the chart and draw a picture illustrating that number fact. You may allow students to display these in the classroom or in a more
public place.
• Workmats
• Plastic Colored Links or Connecting Cubes
1. Use the Questions for Students above to assist you in determining your students’ level of understanding. Other questions may suggest themselves as you talk with your children as well. Record your
observations on the Class Notes teacher resource sheet you began earlier in this unit.
2. Have students re-use their subtraction problem (or create a new subtraction problem) using a theme such as "At The Pond." Provide students with appropriate stamps to illustrate their problems.
Assess whether students are able to model subtraction in written and pictorial form.
Questions for Students
1. What happens when we subtract?
[We take something away, and the group usually gets smaller. (The group will not get smaller if the number we take away is 0.)]
2. Which difference on our chart was the greatest?
[Answers will vary.]
3. If we start with 10 links, what is the greatest difference we can get? How do you know?
[The biggest difference is 10. We get the biggest difference when we subtract 0.]
4. What would be the smallest difference we could get with 10 links? How would you get it?
[The smallest difference would be 0. We get 0 if we take all 10 links away.]
Teacher Reflection
• Can students explain the terms difference and take away?
• Were students able to create story problems for subtraction? If not, what activities can you use to give them additional experience?
• Which students need assistance to record what they know and need to find out from story problems?
• Can most of the children justify the difference when 0 is taken away? Can they justify a difference of 0?
• What other books would you use in this lesson?
This lesson, which focuses on the counting model for subtraction, begins with reading a counting book. The students model the numbers as the book is read. Then they make a chain of links and write in
vertical and horizontal format the differences suggested by adding and subtracting one link at a time from their chains. Finally, they draw a chain showing one link being taken away and write in two
formats the difference it represents.
In this lesson, students generate differences using a number line model. Because this model highlights the measurement aspect of subtraction, it is a distinctly different representation from the
models presented in the previous lessons of this unit. The order property for subtraction is investigated. At the end of the lesson, children are encouraged to predict differences and solve puzzles
involving subtraction.
Pre-K-2, 6-8
This lesson encourages students to explore another model of subtraction, the balance. Students will use real and virtual balances. Students also explore recording the modeled subtraction facts in
equation form.
In this lesson, students explore the relation of addition to subtraction with books and links. Then the children search for related addition and subtraction facts for a given number. They also
investigate fact families, including those where one addend is 0 and where the addends are alike.
During this lesson, students use what they know about fact families to play a concentration game. They will also identify subtraction facts they need to learn.
This final lesson reviews the work of the previous lessons and suggests a framework for summative assessment. Students will self-select a solution strategy for subtraction from the models introduced
in this unit. An extension activity is suggested in which students use the mathematical knowledge and skills developed in the previous lessons to demonstrate understanding and ability to apply that
knowledge to playing a new game.
Learning Objectives
Students will be able to:
• Create subtraction story problems
• Explore the results of subtracting sets
• Define the term difference
• Explore the effects of subtracting 0 and subtracting all
• Construct a table showing differences
Common Core State Standards – Mathematics
-Kindergarten, Counting & Cardinality
• CCSS.Math.Content.K.CC.A.2
Count forward beginning from a given number within the known sequence (instead of having to begin at 1).
-Kindergarten, Counting & Cardinality
• CCSS.Math.Content.K.CC.A.3
Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects).
-Kindergarten, Algebraic Thinking
• CCSS.Math.Content.K.OA.A.1
Represent addition and subtraction with objects, fingers, mental images, drawings, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations.
-Kindergarten, Algebraic Thinking
• CCSS.Math.Content.K.OA.A.2
Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.
-Kindergarten, Algebraic Thinking
• CCSS.Math.Content.K.OA.A.5
Fluently add and subtract within 5.
Grade 1, Algebraic Thinking
• CCSS.Math.Content.1.OA.B.4
Understand subtraction as an unknown-addend problem. For example, subtract 10 - 8 by finding the number that makes 10 when added to 8.
Grade 1, Algebraic Thinking
• CCSS.Math.Content.1.OA.C.5
Relate counting to addition and subtraction (e.g., by counting on 2 to add 2).
Grade 1, Algebraic Thinking
• CCSS.Math.Content.1.OA.C.6
Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a
number leading to a ten (e.g., 13 - 4 = 13 - 3 - 1 = 10 - 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 - 8 = 4); and creating
equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13).
Grade 2, Algebraic Thinking
• CCSS.Math.Content.2.OA.B.2
Fluently add and subtract within 20 using mental strategies. By end of Grade 2, know from memory all sums of two one-digit numbers.
Grade 2, Number & Operations
• CCSS.Math.Content.2.NBT.B.7
Add and subtract within 1000, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the
strategy to a written method. Understand that in adding or subtracting three-digit numbers, one adds or subtracts hundreds and hundreds, tens and tens, ones and ones; and sometimes it is
necessary to compose or decompose tens or hundreds.
Common Core State Standards – Practice
• CCSS.Math.Practice.MP4
Model with mathematics.
• CCSS.Math.Practice.MP5
Use appropriate tools strategically.
• CCSS.Math.Practice.MP6
Attend to precision. | {"url":"http://illuminations.nctm.org/Lesson.aspx?id=529","timestamp":"2014-04-18T18:11:27Z","content_type":null,"content_length":"84164","record_id":"<urn:uuid:5dff5842-d581-421a-9142-e5eb243bbb3a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sequential Synthesis Using S1S
- In: Proceedings of FACS 2005. ENTCS , 2006
"... We introduce the notion of functional stream derivative, generalising the notion of input derivative of rational expressions (Brzozowski 1964) to the case of stream functions over arbitrary
input and output alphabets. We show how to construct Mealy automata from algebraically specified stream functi ..."
Cited by 19 (7 self)
We introduce the notion of functional stream derivative, generalising the notion of input derivative of rational expressions (Brzozowski 1964) to the case of stream functions over arbitrary input and
output alphabets. We show how to construct Mealy automata from algebraically specified stream functions by the symbolic computation of functional stream derivatives. We illustrate this construction
in full detail for various bitstream functions specified in the algebraic calculus of the 2-adic numbers. This work is part of a larger ongoing effort to specify and model component connector
circuits in terms of (functions and relations on) streams.
- In The Proceedings of the International Workshop on Logic Synthesis , 2003
"... Consider the problem of designing a component that combined with a known part of a system, called the context, conforms to a given overall specification. This question arises in several
applications ranging from logic synthesis to the design of discrete controllers. We cast the problem as solving ab ..."
Cited by 6 (5 self)
Consider the problem of designing a component that combined with a known part of a system, called the context, conforms to a given overall specification. This question arises in several applications
ranging from logic synthesis to the design of discrete controllers. We cast the problem as solving abstract equations over languages and study the most general solutions under the synchronous and
parallel composition operators. We also specialize such language equations to languages associated with important classes of automata used for modeling systems, e.g., regular languages as
counterparts of finite automata, FSM languages as counterparts of FSMs. Thus we can operate algorithmically on those languages through their automata and study how to solve effectively their language
equations. We investigate the maximal subsets of solutions closed with respect to various language properties. In particular, we investigate classes of the largest compositional solutions (defined by
properties exhibited by the composition of the solution and of the context). We provide the first algorithm to compute the largest compositionally progressive solution of synchronous equations. This
approach unifies in a seamless frame previously reported techniques. As an application we solve the classical problem of synthesizing a converter between a mismatched pair of protocols, using their
specifications, as well as those of the channel and of the required service. 1
- In: Proceedings of Design Automation and Test in Europe, 2003
"... System design methodology is poised to become the next big enabler for highly sophisticated electronic products. Design verification continues to be a major challenge and simulation will remain
an important tool for making sure that implementations perform as they should. In this paper we present al ..."
Cited by 2 (0 self)
System design methodology is poised to become the next big enabler for highly sophisticated electronic products. Design verification continues to be a major challenge and simulation will remain an
important tool for making sure that implementations perform as they should. In this paper we present algorithms to automatically generate C++ checkers from any formula written in the formal
quantitative constraint language, Logic Of Constraints (LOC). The executable can then be used to analyze the simulation traces for constraint violation and output debugging information. Different
checkers can be generated for fast analysis under different memory limitations. LOC is particularly suitable for specification of system level quantitative constraints where relative coordination of
instances of events, not lower level interaction, is of paramount concern. We illustrate the usefulness and efficiency of our automatic trace analysis methodology with case studies on large
simulation traces from various system level designs. 1
"... The Alloy tool-set has been gaining popularity as an alternative to traditional manual testing and checking for design correctness. Alloy uses a first-order relational logic for modeling
designs. The Alloy Analyzer translates Alloy formulas for a given scope, i.e., a bound on the universe of discour ..."
Cited by 1 (1 self)
The Alloy tool-set has been gaining popularity as an alternative to traditional manual testing and checking for design correctness. Alloy uses a first-order relational logic for modeling designs. The
Alloy Analyzer translates Alloy formulas for a given scope, i.e., a bound on the universe of discourse, to Boolean formulas in conjunctive normal form (CNF), which are subsequently checked using
propositional satisfiability solvers. We present SERA, a novel algorithm that compiles a relational logic formula for a given scope to a sequential circuit. There are two key advantages of sequential
circuits: they form a more succinct representation than CNF formulas, sometimes by several orders of magnitude. Also sequential circuits are amenable to a range of powerful automatic analysis
techniques that have no counterparts for CNF formulas. Our experiments show that SERA, used in conjunction with a sequential circuit analyzer, can check formulas for scopes that are an order of
magnitude higher than those feasible with the Alloy Analyzer. 1
, 2003
"... System design methodology is poised to become the next big enabler for highly sophisticated electronic products. Design verification continues to be a major challenge and simulation will remain
an important tool for making sure that implementations perform as they should. In this paper we present a ..."
System design methodology is poised to become the next big enabler for highly sophisticated electronic products. Design verification continues to be a major challenge and simulation will remain an
important tool for making sure that implementations perform as they should. In this paper we present algorithms to automatically generate C++ checkers from any formula written in the formal
quantitative constraint language, Logic Of Constraints (LOC). The executable can then be used to analyze the simulation traces for constraint violation and output debugging information. Different
checkers can be generated for fast analysis under different memory limitations. LOC is particularly suitable for specification of system level quantitative constraints where relative coordination of
instances of events, not lower level interaction, is of paramount concern. We illustrate the usefulness and efficiency of our automatic trace verification methodology with case studies on large
simulation traces from various system level designs.
"... Abstract—Embedded systems typically consist of a composition of a set of hardware and software IP modules. Each module is heavily optimized by itself. However, when these modules are composed
together, significant additional opportunities for optimizations are introduced because only a subset of the ..."
Abstract—Embedded systems typically consist of a composition of a set of hardware and software IP modules. Each module is heavily optimized by itself. However, when these modules are composed
together, significant additional opportunities for optimizations are introduced because only a subset of the entire functionality is actually used. We propose COSE—a technique to jointly optimize
such designs. We use symbolic execution to compute invariants in each component of the design. We propagate these invariants as constraints to other modules using global flow analysis of the
composition of the design. This captures optimizations that go beyond, and are qualitatively different than, those achievable by compiler optimization techniques such as common subexpression
elimination, which are localized. We again employ static analysis techniques to perform optimizations subject to these constraints. We implemented COSE in the Metropolis platform and achieved
significant optimizations using reasonable computational resources. I. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1820366","timestamp":"2014-04-18T17:25:37Z","content_type":null,"content_length":"29690","record_id":"<urn:uuid:ef8984c6-f028-41ac-800e-36805471266f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
the absolute unit of force in the meter-kilogram-second (MKS) system of physical units. It is defined as that force necessary to provide a mass of one kilogram with an acceleration of one meter per
second per second. One newton is equal to a force of 100,000 dynes in the centimeter-gram-second (CGS) system, or a force of about 0.2248 pound in the foot-pound-second (English, or customary)
system. The newton was named for Sir Isaac Newton, whose second law of motion describes the changes that a force can produce in the motion of a body. | {"url":"http://everything2.com/title/NEWTON","timestamp":"2014-04-19T01:59:10Z","content_type":null,"content_length":"23620","record_id":"<urn:uuid:0bc57f4b-587e-4aa7-bfca-d1dd6d98bcd1>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00288-ip-10-147-4-33.ec2.internal.warc.gz"} |
3 x 4 1/4
So, do you know the "trick" for multiplying mixed numbers? Ie, the 4 1/4 ?
I think you have to divide 1 by 4 & add that to 4 then multiply that by 3.... But I have a huge feeling I'm wrong D; It's been a long time since I did these, good luck!
all i need is the answer...
Multiply the denominator by the whole number and add the numerator. In this instance, it would be 4 times 4 plus 1, which equals 5. You're basically taking a fourth and multiplying it four times.
So your answer is going to be in terms of fourths. So have 5 fourths. So 5/4. Now, you still have that 3 to deal with. You're multiplying the rest of the problem by the 3. When multiplying
fractions, all you have to do is multiply the numerators by the numerators and the denominators by the denominators. 3 is really 3/1. We've already got the 5/4. So we just multiply them together,
giving us 15/4. This is because 3 times 5 goes over 1 times 4.
Learn how to solve it.... I just told you how I think it's solved....
Ah nevermind. 4 times 4 is 16 plus one is 17
3 times 17/4 is ...
I must know the answer....
improper fraction?
Yes. If you want to write it in some other form, you can.
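For reference, the computation the thread is converging on, written out in one line (an editorial summary added for clarity, not part of the original exchange):

$3 \times 4\tfrac{1}{4} \;=\; 3 \times \tfrac{17}{4} \;=\; \tfrac{51}{4} \;=\; 12\tfrac{3}{4}$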
Best Response
You've already chosen the best response.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5078b22ce4b02f109be44ba9","timestamp":"2014-04-18T10:38:42Z","content_type":null,"content_length":"102700","record_id":"<urn:uuid:ee8f9de2-db0f-4e56-b9ef-4b23ec5fa8b4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00534-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rainbow Matchings in $r$-Partite $r$-Graphs
Given a collection of matchings ${\cal M} = (M_1, M_2, \ldots, M_q)$ (repetitions allowed), a matching $M$ contained in $\bigcup {\cal M}$ is said to be $s$-rainbow for ${\cal M}$ if it contains
representatives from $s$ matchings $M_i$ (where each edge is allowed to represent just one $M_i$). Formally, this means that there is a function $\phi: M \to [q]$ such that $e \in M_{\phi(e)}$ for
all $e \in M$, and $|Im(\phi)|\ge s$.
Let $f(r,s,t)$ be the maximal $k$ for which there exists a set of $k$ matchings of size $t$ in some $r$-partite hypergraph, such that there is no $s$-rainbow matching of size $t$.
We prove that $f(r,s,t)\ge 2^{r-1}(s-1)$, make the conjecture that equality holds for all values of $r,s$ and $t$ and prove the conjecture when $r=2$ or $s=t=2$.
In the case $r=3$, a stronger conjecture is that in a $3$-partite $3$-graph if all vertex degrees in one side (say $V_1$) are strictly larger than all vertex degrees in the other two sides, then
there exists a matching of $V_1$. This conjecture is at the same time also a strengthening of a famous conjecture, described below, of Ryser, Brualdi and Stein. We prove a weaker version, in which
the degrees in $V_1$ are at least twice as large as the degrees in the other sides. We also formulate a related conjecture on edge colorings of $3$-partite $3$-graphs and prove a similarly weakened version.
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v16i1r119","timestamp":"2014-04-17T15:35:23Z","content_type":null,"content_length":"16216","record_id":"<urn:uuid:19d34eed-f67f-4a74-b502-2908566a0974>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intersections -- Poetry with Mathematics
Exiled Romanian poet Nina Cassian (1924-2014) died last week in Manhattan. Cassian was an outspoken poet whom I admired for her political views; she also was connected to mathematics -- in her
subject matter and her friends. (See, for example, this posting from January 31, 2011.) Equality by Nina Cassian If I dress up like a peacock, you dress like a kangaroo. If I make myself into a
triangle, you acquire the shape of an egg. If I were to climb on water, you'd climb on mirrors.
All our gestures Belong to the solar system. "Equality" is in Cheerleaders for a Funeral (Forrest Books, 1992), translated by the author and Brenda Walker. | {"url":"http://poetrywithmathematics.blogspot.com/","timestamp":"2014-04-21T00:01:43Z","content_type":null,"content_length":"444248","record_id":"<urn:uuid:87c27b57-ef54-42fd-9fd8-be6d20ee3372>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00484-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebraic Rearrangement
Hi, I'm new to this site and I hope you guys can help. I'm not actually pre university but an engineer, yet I'm having some problems rearranging an equation and was hoping someone could help. The
equation is as follows:

$\left(\frac{A}{B}-1\right)\left(\frac{A^{2}}{C^{2}}-1\right)=\frac{2A\,DE}{CF}$

and the equation needs rearranging to find A. Can anyone help?
Re: Algebraic Rearrangement
I think that's going to be really hard, because I looked for it @ Wolfram Alpha:
solve ((A/B)-1)*((A^2/C^2)-1)=2(A/C)((D*E)/F) for A - Wolfram|Alpha
Re: Algebraic Rearrangement
Hi, I'm new to this site and I hope you guys can help. I'm not actually pre university but an engineer, yet I'm having some problems rearranging an equation and was hoping someone could help. The
equation is as follow:
and the equation needs rearranging to find A. Can anyone help?
I start first by distributing the exponents to get:
(B/A)*(C^2/A^2) = (2ADE)/(CF)
Next, combine like bases and simplify:
(BC^2)/(A^3) = (2ADE)/(CF)
Multiply both sides by A^3 to get:
BC^2 = (2A^4DE)/(CF)
Multiply both sides by (CF) to get:
BC^3F = 2A^4DE
Divide both sides by 2DE to get:
(BC^3F)/(2DE) = A^4
Take the fourth root of each side to get:
A = fourth_root[(BC^3F)/(2DE)]
Re: Algebraic Rearrangement
I assumed here that "-1" meant the inverse... if it's actually minus one, then obviously this is incorrect.
Re: Algebraic Rearrangement
Re: Algebraic Rearrangement
It's a cubic equation in A. Have fun !!!
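For readers who want to check this, the short sketch below sets the problem up symbolically. It assumes the equation has the form quoted in the Wolfram|Alpha query above (the originally posted equation did not survive, so that reading is an assumption), confirms that it is cubic in A, and solves it for sample numbers. This is an editorial illustration, not part of the original thread.

```python
import sympy as sp

A, B, C, D, E, F = sp.symbols('A B C D E F')

# Equation as read in the Wolfram|Alpha query above (assumed form):
# (A/B - 1) * (A^2/C^2 - 1) = 2*(A/C)*(D*E/F)
lhs = (A/B - 1) * (A**2 / C**2 - 1)
rhs = 2 * (A/C) * (D*E/F)

# Clearing denominators shows the equation is cubic in A, as the last reply notes.
numerator = sp.numer(sp.together(lhs - rhs))
print(sp.degree(numerator, A))   # prints 3

# There is no simple closed form in general, but with concrete numbers the
# three roots are easy to compute:
print(sp.solve((lhs - rhs).subs({B: 2, C: 3, D: 1, E: 1, F: 4}), A))
```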
Clarksville, ARk | {"url":"http://mathhelpforum.com/algebra/185864-algebraic-rearrangement.html","timestamp":"2014-04-16T19:16:39Z","content_type":null,"content_length":"43575","record_id":"<urn:uuid:f8fc7554-7cab-43a1-9442-f9ecb402c885>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Friendship, MD Math Tutor
Find a Friendship, MD Math Tutor
...Within my first two years of teaching, I taught Geometry to a hearing-impaired student and a visually-impaired student. During that time, in addition to teaching Mathematics, I taught one
class of Spanish 1. I have taught and tutored students in Pre-Algebra, Algebra 1, Algebra 2, Integrated Math, and Pre-Calculus.
4 Subjects: including algebra 1, prealgebra, geometry, Spanish
...I completed my degree in Mathematics as an adult in 2005. Since then I have tutored individuals, taught in a homeschool cooperative, and homeschooled my own four children. I recently worked
for two years in an Annapolis tutoring center with middle and high school students.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
Throughout my life, I have held a strong commitment to academics. After attending Middlebury College, I continued on to earn a master's of science at Dartmouth. I love to learn, and I love to
share my passion for learning with others.
13 Subjects: including SAT math, algebra 1, grammar, prealgebra
...I've taught all types of students from students very far behind to Honors students as well as summer school courses. I am an alumna of Teach for America and have high scores (greater than 90th
percentile) on the SAT, ACT, GMAT and GRE. Geometry is my favorite math subject!
12 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...Calculus remains one of my favorite subjects to this day. I have a Bachelor's degree in Classics/Latin. I also have experience teaching Latin (I - IV) in High School and Middle School.
18 Subjects: including geometry, music history, classics, algebra 1
Friendship, MD Trigonometry Tutors | {"url":"http://www.purplemath.com/Friendship_MD_Math_tutors.php","timestamp":"2014-04-20T04:23:26Z","content_type":null,"content_length":"23830","record_id":"<urn:uuid:d5db78f6-19ca-449f-9d7b-53edc0caf7cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Completing the square
Completing the square resources
Completing the Square 1
In this iPOD video we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas,
but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is
held by Skillbank Solutions Ltd.
Completing the Square 2
In this iPOD video we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas,
but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is
held by Skillbank Solutions Ltd.
Completing the Square 3
In this iPOD video we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas,
but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is
held by Skillbank Solutions Ltd.
Completing the Square 4
In this iPOD video we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas,
but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is
held by Skillbank Solutions Ltd.
Completing the Square 5
In this unit we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but
we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held
by Skillbank Solutions Ltd.
Completing the Square 6
In this unit we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but
we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held
by Skillbank Solutions Ltd. | {"url":"http://www.mathcentre.ac.uk/types/ipod-video/completingsquare/","timestamp":"2014-04-17T12:52:09Z","content_type":null,"content_length":"11178","record_id":"<urn:uuid:fbc0a436-3faf-4e71-ad97-a3385c69ccc2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: -graph twoway (function ...)- question
Re: st: -graph twoway (function ...)- question
From Nick Winter <nwinter@virginia.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: -graph twoway (function ...)- question
Date Thu, 24 Jul 2008 09:37:25 -0400
I don't have an answer to your specific question, but another way to go is to plot predicted probabilities. My -oprobpr- package might help for this. (-oprobpr- only works after -oprobit- not
-probit-, but oprobit will estimate the same model as probit if there are only two response categories; just use the -categories()- option to specify which response category you want plotted.
-Nick Winter
Andrea Bennett wrote:
Dear all,
I have estimated average marginal effects from a probit regression with -margeff- and would like to generate a graph which plots the marginal effects for 3 age categories (dummies) dependent on a
continuous variable called cont.
The model is as such:
y = b0 + b1*age2 + b2*age3 + b3*age2*cont +b4*age3*cont + b5*cont + controls
As age2 and age3 are dummies, would the following graph command be correct:
twoway (function age1 = _b[cont]*x, range(0 0.5)) (function age2 = _b[age2] + (_b[cont] + _b[cont_age2])*x, range(0 0.5)) (function age3 = _b[age3] + (_b[cont] + _b[cont_age3])*x, range(0 0.5))
The way I understand it, since I have now marginal effects I can work equivalently to linear models. Then the above graph should be correct, right? Further, this would also be correct when
introducing other interactions with -cont- as long as I am looking at the marginal effects of age groups when -cont- changes?
Many thanks for your considerations,
Nicholas Winter 434.924.6994 t
Assistant Professor 434.924.3359 f
Department of Politics nwinter@virginia.edu e
University of Virginia faculty.virginia.edu/nwinter w
PO Box 400787, 100 Cabell Hall
Charlottesville, VA 22904
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-07/msg00892.html","timestamp":"2014-04-18T00:42:02Z","content_type":null,"content_length":"7705","record_id":"<urn:uuid:d9161d5a-b073-41ac-986c-891d84b43726>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
Single electron transistor at high-temperature
Starting from
$$C_k=\frac{e^{-S_{cl}}\,\Gamma(N+1)\,\Gamma(N+1+u)/\Gamma(1+u)}{\Gamma(N+1+a-k)\,\Gamma(N+1+b-k)/\bigl[\Gamma(1+a-k)\,\Gamma(1+b-k)\bigr]}\,,$$
how does one prove
$$C_k=\frac{e^{-S_{cl}}\,\Gamma(1+a-k)\,\Gamma(1+b-k)}{\Gamma(1+u)}$$
in order to get
$$C_k=\frac{\Gamma(1+a)\,\Gamma(1+b)}{\Gamma^{2}(1+k)\,\Gamma(1+u)}\,e^{S_{cl}}\;?$$
Here $u=\dfrac{g\,\beta\,E_c}{2\pi^{2}}$, the correction of order $1/N$ may be ignored, and the definition of the gamma function is used: $\Gamma(N+1+a)=(N+a)(N+a-1)\cdots(1+a)\,a!$
Your post looks like nonsense or trolling to me. What does the body of your post have to do with the title of your thread? You need to provide a lot more explanatory details to make this thread make
sense, IMO. | {"url":"http://www.physicsforums.com/showthread.php?t=729916","timestamp":"2014-04-21T07:21:43Z","content_type":null,"content_length":"24352","record_id":"<urn:uuid:3098008f-5b29-4958-92fc-424ad53fff49>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Beiträge zur Numerischen Mathematik
12 (1984), 7-19
On improving approximate triangular factorizations
GÖTZ ALEFELD and JON G. ROKNE
Summary. Newton's method applied iteratively to the improvement of an approximate
triangular factorization of a matrix is discussed in detail. Particular consideration is given to
the effect of rounding errors on the convergence of the iteration. It is shown that on a computer
employing fixed length floating-point arithmetic Newton's method converges with an arbi-
trary starting value after 2n-1 steps to the same value as that obtained by Gaussian elimi-
nation. Finally, a new method is proposed for the iterative improvement of bounds for the
elements of the triangular factorization where the effects of the rounding errors are also
1. Introduction
It is frequently necessary to solve a system of linear equations Ax = b for a variety
of right-hand sides b. Since this is often done by factoring A as (I + L*) U* and
then solving the resulting simpler sets of equations, it is important to calculate L*
and U* as accurately as possible.
With this in mind J. W. SCHMIDT [3] recently proposed to apply Newton's method
in order to correct an approximate factorization of a non-singular matrix A. SCH:MIDT | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/592/3115987.html","timestamp":"2014-04-17T01:31:36Z","content_type":null,"content_length":"8483","record_id":"<urn:uuid:d64148c1-7d34-4c2a-a12b-eb7c3bbd294a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is the infimum of Salem numbers > 1?
A Salem number is an algebraic integer $\theta$ such that all the Galois conjugates of $\theta$ are $\leq 1$ in absolute value, and at least one of them lies on the unit circle. Their importance is
derived for example from the fact that the minimal polynomial of a Salem number, $$ P(x) = x^{10} + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1 $$ is conjectured to minimize the Mahler measure ($M(P) =
1.17...$) over all $P \in \mathbb{Z}[x]$ with $M(P) > 1$.
The closely related Pisot numbers are algebraic integers $\theta > 1$ such that all the Galois conjugates of $\theta$ are of absolute value $< 1$. Their set is closed and in particular there exists a
smallest Pisot number. This has been found by Siegel to be the plastic constant, $\theta_0 = 1.32471\ldots$, a root of $g(x) = x^3 - x - 1$. It is known that for any monic, non-reciprocal polynomial
$P$ we have $M(P) \geq M(g) = \theta_0$. This is Smyth's theorem.
I am currently reading ``Conjecture de Lehmer et petits nombres de Salem" by Bertin and Pathiaux-Delefosse. The book is from 1989 and it is stated in it that it is still not known if $\inf T > 1$
where $T$ is the set of all Salem numbers. Has there been any developments since then? Is this conjecture still open?
Precisely: Has it been proved or disproved that $\inf T> 1$?
nt.number-theory algebraic-number-theory diophantine-approximation
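The quoted value $M(P) = 1.17...$ is easy to verify numerically. The short script below is an editorial addition (not part of the original question); it computes the Mahler measure of Lehmer's polynomial as the product of the absolute values of its roots lying outside the unit circle.

```python
import numpy as np

# Coefficients of Lehmer's polynomial
# x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1, from degree 10 down to 0.
coeffs = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]

roots = np.roots(coeffs)

# Mahler measure: product of |root| over roots outside the unit circle
# (the leading coefficient is 1, so it contributes nothing).
M = np.prod([abs(r) for r in roots if abs(r) > 1])
print(M)   # approximately 1.17628, the value referred to above
```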
I am aware that the title of this question is not the most fortunate one, but I think that it captures the gist of the question rather succinctly. – blabler Jul 25 '13 at 21:29
Yes according to wikipedia en.wikipedia.org/wiki/Lehmer%27s_conjecture – Anthony Quas Jul 25 '13 at 21:34
Obviously it has not been disproved that $\inf T>1$. Otherwise Lehmer's conjecture would be known to be false. Further, if you look at the (reasonably authoritative wiki page) you will see that
there are bounds on the Mahler measure which converge to 0 in the degree. These bounds would be rendered moot if there were a lower bound independent of the degree (i.e. $\inf T > 1$). Hence,
assuming that the editors of the wiki page did not fall asleep for the last few years, one can reasonably conclude that it has neither been proved nor disproved that $\inf T>1$. – Anthony Quas Jul
25 '13 at 23:11
Historical note --- Lehmer would remind people that he had not published it as a conjecture, only a question, since he didn't feel he had enough evidence for it to call it a conjecture. – Gerry
Myerson Jul 25 '13 at 23:54
Some progress on the Salem number conjecture: ams.org/mathscinet-getitem?mr=1824892 ams.org/mathscinet-getitem?mr=1953192 ams.org/mathscinet-getitem?mr=2105815 – Ian Agol Jul 26 '13 at 3:49
2 Answers
I believe it is the general opinion, at least among those working in diophantine approximations, that the extreme case of Salem numbers (the question of the title) would be just as
difficult as the full Lehmer conjecture. It is not a coincidence that the smallest known Mahler measures are realized by Salem numbers, and I have not heard of any improvement on the
general Dobrowolski bound $\log{M(P)} > \big(\frac{9}{4} - o(1) \big) \Big( \frac{\log{\log{d}}}{\log{d}} \Big)^3$ under restricting to the Salem case.
[The constant $9/4$, due to Louboutin in 1983, is apparently the best that Dobrowolski's method can produce. Voutier has shown that the inequality holds without exception with the accepted constant $1/4$.]
According to [1(1992), 2(2011)] it is not known if $T$ is dense in $[1,\infty)$. Therefore it has not been proved that $\inf T>1$.
Since, according to wikipedia, Lehmer's problem is still open it has not been disproved either.
Your answer is just as good as the other answer. I had to make a pick which one to accept. – blabler Nov 23 '13 at 23:43
Not the answer you're looking for? Browse other questions tagged nt.number-theory algebraic-number-theory diophantine-approximation or ask your own question. | {"url":"http://mathoverflow.net/questions/137782/is-the-infimum-of-salem-numbers-1","timestamp":"2014-04-16T07:45:40Z","content_type":null,"content_length":"63806","record_id":"<urn:uuid:973a1459-ac2f-4e5b-b59e-4f75191999c0>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sequences and series
April 12th 2006, 05:26 AM
Sequences and series
The lengths of the radii of circles form an infinite geometric sequence. The length of the radius of the first circle is 6cm. The length of the radius of each of the circles is 4/5 of the length of
the radius of the previous circle. Show that the total area of all the circles formed in this way is 100pi cm2.
Please help here guys!! Don't know what formula to use and how.
Thanks for your time!
April 12th 2006, 06:01 AM
Originally Posted by FuNkY14
The length of the radii of circles form an infinite geameric sequence. The length of the radius of the first circle is 6cm. The length of the radius of each of the circles is 4/5 of the length of
the radius of the previous circle. Show that the total area of all the circles formed in this way is 100pi cm2.
Please help here guys!! Don't know what formula to use and how.
Thanks for your time!
The radius of the first circle is $6$ cm, the second $6\times \frac{4}{5}$ cm, and the radius of the $n$th circle
is $6 \times \left( \frac{4}{5} \right)^{n-1}$.
So the areas of these circles are:
$\pi\ 6^2$, $\pi\ 6^2\left(\frac{4}{5}\right)^2$ and $\pi\ 6^2 \left( \frac{4}{5} \right)^{2(n-1)}$.
So the total area of the circles:
$A=\sum_{n=1}^{\infty}\pi\ 6^2 \left( \frac{4}{5} \right)^{2(n-1)}=\pi\ 6^2 \sum_{n=1}^{\infty} \left( \left( \frac{4}{5} \right)^2\right) ^{n-1}$
Now the summation in the last expression above is a geometric series and its sum
$\sum_{n=1}^{\infty} \left( \left( \frac{4}{5} \right)^2\right) ^{n-1}=\frac{1}{1-(\frac{4}{5})^2}=\frac{25}{9}$,
and so:
$A=\pi 6^2 \frac{25}{9}=100 \pi$
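As a quick numerical sanity check on the result above (an editorial aside, not part of the original thread), summing a couple of hundred terms of the series gives the same value as $100\pi$:

```python
import math

# Partial sum of the areas pi * (6 * (4/5)^(n-1))^2; the tail is negligible.
total = sum(math.pi * (6 * (4/5) ** (n - 1)) ** 2 for n in range(1, 201))
print(total, 100 * math.pi)   # both are approximately 314.159
```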
April 12th 2006, 06:09 AM
Yo thank you very much! | {"url":"http://mathhelpforum.com/algebra/2554-sequences-series-print.html","timestamp":"2014-04-16T16:20:14Z","content_type":null,"content_length":"7521","record_id":"<urn:uuid:551ce1dc-928b-4918-a4f0-79cd5b0f4ebb>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
Processos de passeio na reta contínua
Abstract (Summary)
We study a property similar to ergodicity of a class of random processes with localinteraction with continuous space and discrete time. Our process is a sequence of random subsets Ut of a real line,
where t = 0, 1, 2, 3, . . . is called time. These sets are of a special kind: their intersections with any limited piece of the real line are linear combinations of a finite list of _-measures, each
concentrated in a set consisting of several closed mutually non-intersecting segments, which we call blocks. These sets are generated inductively. Initially, when t = 0, our set U0 is empty. At every
time step three operators are applied to Ut to obtain Ut+1. The first operator, W_, includes into our set some of the segments [i, i + 1], where i amp;#8712; Z, chosen at random: each segment is
included with a probability _ independently of others. The second operator, WD, includes into our set all small enough gaps between the blocks. The action of the third operator, Wpas, depends on two
discrete random variables L and R, each taking only a finite set of values. At one application of Wpas, left ends of all the blocks perform one step of random walk distributed as L independently from
each other. The right ends of all the blocks do the same, only using the random variable R instead of L. We say that our process fills the line if for any limited segment the probability that Ut
includes this segment tends to one when time tends to infinity. (This is analog of ergodicity.) We show that our process has two types of behavior: If E(L) lt; E(R) (where E means mathematical
expectation), our process fills the line for any _ gt; 0. If E(L) gt; E(R), our process does not fill the line if _ is small enough. This contrast has been shown for the discrete line and now we
generalize it to the continuous line. Our approach paves the way for a theory of processes with local interaction on a real line, which remains little developed till now
Bibliographical Information:
Advisor: Andrei Toom
School: Universidade Federal de Pernambuco
School Location: Brazil
Source Type: Master's Thesis
Keywords: probability, stochastic process, main theorem, filling and non-filling of the real line, statistics
Date of Publication:02/23/2006 | {"url":"http://www.openthesis.org/documents/Processos-de-passeio-na-reta-324381.html","timestamp":"2014-04-20T20:58:55Z","content_type":null,"content_length":"9984","record_id":"<urn:uuid:d3538a20-9ff6-4858-bbb9-c52ab8b7240b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Mathematics of Biodiversity
Posted by Tom Leinster
Interested in biological diversity? Want to know more about how diversity can be quantified? Maybe diversity comes up in your work. Maybe you’ve heard rumours that there’s serious mathematics
involved, and you want to know more. Or maybe you’re just curious.
If so, come to a meeting in Barcelona! It’s running 2-6 July, and there are grants to cover attendance expenses. (If you want one, please apply as soon as possible.) We also have free slots for
contributed talks.
We’ve assembled what is already a head-spinningly varied group of people, from livestock breeding experts to ostensibly pure mathematicians to evolutionary ecologists. Two or three of your Café hosts
will be there. Details follow.
• Exploratory Conference
• Centre de Recerca Matemàtica, Barcelona
• 2-6 July 2012
What is diversity? How do we measure it? This event brings together life scientists and mathematicians to advance our understanding of diversity and its measurement. We welcome everyone with an
interest in measuring diversity, from microbial biologists to pure mathematicians to conservation ecologists.
Grants for attendance expenses are available. The official deadline to apply is 30 April. If you want to apply but think you will miss the deadline (or have already missed it), all is not lost: let
us know at the addresses below.
We are also taking offers of contributed talks. If you would like to give one, please contact us as soon as possible.
Our current list of speakers is:
• Benjamin Allen (Program for Evolutionary Dynamics, Harvard)
• John Baez (Mathematics, Riverside)
• Michael Bonsall (Zoology, Oxford)
• Anne Chao (Statistics, National Tsing Hua Univ., Taiwan)
• Christina Cobbold (Mathematics, Glasgow)
• Glenn De’ath (Australian Inst. of Marine Science)
• Elizabeth Gillet (GWDG, Göttingen)
• Hans-Rolf Gregorius (GWDG, Göttingen)
• Lou Jost (Ecominga Foundation, Baños, Ecuador)
• Tom Leinster (Mathematics, Glasgow)
• Alison Mather (Sanger Institute, Cambridge)
• Louise Matthews (Infection, Immunity and Inflammation, Glasgow)
• Hans Metz (Mathematics/Biology, Leiden)
• Sandrine Pavoine (Muséum national d’Histoire naturelle, Paris)
• Richard Reeve (Biodiversity, Animal Health and Comparative Medicine, Glasgow)
• Carlo Ricotta (Plant Sciences, Rome)
• William Sherwin (Biological, Earth and Environmental Sciences, Univ. New South Wales)
• John Woolliams (Genetics and Genomics, Roslin Inst., Edinburgh)
Scientific enquiries: Tom,Leinster#glasgow,ac,uk. Administrative enquiries: NPortet#crm,cat (Ms Neus Portet).
Posted at April 26, 2012 6:48 PM UTC
Re: The Mathematics of Biodiversity
Your ‘Measuring diversity’ paper is referred to by Pavlovic here in a paper which prefers ‘proxets’, categories enriched over the multiplicative monoid $[0, 1]$, to the equivalent generalized metric spaces.
Posted by: David Corfield on April 27, 2012 10:57 AM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Thanks, David. I think a bit of your browsing history accidentally peeped through there (don’t worry, nothing embarrassing) — you link to the wrong paper. I think you must mean this one.
Dusko sent me a copy of that paper some months ago, but I’m afraid I didn’t get round to either reading it properly or replying to him, sad to say.
Posted by: Tom Leinster on April 29, 2012 11:13 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Dusko’s paper uses FCA ( = Formal Concept Analysis). There was a Dagstuhl meeting a few years ago at which various people working in FCA met people working in Domain theory and Chu spaces. (For
instance one of the papers there was Zhang, G.Q.: Chu spaces, concept lattices, and domains. In Brookes, S., Panan-
gaden, P., eds.: Electronic Notes in Theoretical Computer Science. Volume 83.,
Elsevier (2004).) Given the link between Chu, domains and logic, is there a `logic of bio-diversity’?
Posted by: Tim Porter on April 30, 2012 7:45 AM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
We’ve assembled what is already a head-spinningly varied group of people,
Can you quantify their diversity?
Posted by: Urs Schreiber on April 27, 2012 11:53 AM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Can you quantify their diversity?
Yes, he can, and in many different ways! First, you should specify what kind of similarity between these people is most relevant to your interest: academic, geographical, genetic, etc. Next, give
percentages to quantify that similarity — just how similar are experts on livestock breeding and category theory, or Cambridge and the other Cambridge? Finally, are you more interested in the total
number of different specialties/cities/genotypes/etc. represented, or do you find that only the most common ones are relevant, or do you want to graph the entire diversity profile?
Posted by: Mark Meckes on April 27, 2012 5:48 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Can you quantify their diversity?
Yes, he can, and in many different ways!
Not to say: in diverse ways. ;-)
Seriously, it is clear that there are many ways to assign numbers to natural phenomena and to declare that these numbers mean something. But I am imagening that the mathematics of diversity can
somehow give more universal answers?
Can you give me a rough idea of what “mathematics of diversity” can accomplish? What would be a typical theorem in diversity-theory? What do we learn from diversity theory? How is diversity theory
different from just statistics?
Posted by: Urs Schreiber on April 29, 2012 10:02 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
>>>Can you quantify their diversity?
>>Yes, he can, and in many different ways!
>Not to say: in diverse ways. ;-)
Actually, can we take that idea seriously? If we pick a probability distribution, its Renyi extropy gives a family of quantities measuring “diversity” depending on a continuous parameter alpha, which
(if we fix a distribution from which to draw alpha) we can interpret as another probability distribution on the set of possible diversities. Could we iterate this process, producing not just
effective numbers, but effective numbers of effective numbers, and so on?
For example, if a population is very evenly divided into n equal populations, then all of its Renyi extropies will be approximately n, and so the diversity of extropies would be only slightly more
than 1. On the other hand, if the Shannon extropy of an arbitrary distribution is equal to n, we would expect the Renyi extropies to be more varied, and the diversity of extropies would be larger.
Perhaps one could even recover, in some special circumstances, the original distribution from knowledge, say, the Shannon extropy, the Shannon extropy of the Renyi extropies, the Shannon extropy of
the Renyi extropies of the Renyi extropies, etc.?
Posted by: Owen Biesel on April 30, 2012 1:27 AM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Thanks for picking this up in a constructive way, Owen.
I don’t know about you (you all), but I found it amusingly self-referential to say of conference about biodiversity that there are going to be a
head-spinningly varied group of people
and then to exclaim that this can be made precise
in many different ways!
And, yes, while amusing, it immediately seems to raise serious questions for a “theory of diversity” to be.
Posted by: Urs Schreiber on May 2, 2012 8:26 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
I don’t know about Tom’s original post, but the self-referentiality in my comment was deliberate. I thought about writing “in many diverse ways”, but that seemed just a little too unsubtle.
More seriously, I haven’t had a chance to think carefully about it, but what Owen is suggesting seems closely related to the way entropies of different flavors appear as rate functions in large
deviations theory.
Posted by: Mark Meckes on May 3, 2012 1:45 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Interesting, Owen, interesting.
I don’t have a direct answer, but here are two things that feel related.
First, if you know all the Rényi extropies of a (finite) probability distribution then you can recover the distribution itself, up to permutation of the points (or ‘species’). In fact, you don’t even
have to know the Rényi extropies of all orders $q$: it’s enough to know them for some sequences of orders $q$ converging to $\infty$.
Second, there’s the thing known as the Giry monad. I don’t know how much category theory you know, but if the answer is “not much” then fear not: there is a direct intuitive explanation, as follows.
Suppose we have a space $X$ of some kind. There are many ways of choosing a point randomly from $X$. (Interpret this, if you like, as “there are many probability distributions on $X$”, though I don’t
want to be too precise or formal here.) Now, suppose you have a random way of choosing a random way of choosing points from $X$. For example, maybe $X = \mathbb{R}$, you choose a number $\sigma \in \
{1, 2, 3, 4, 5, 6\}$ by throwing a fair die, and then you choose a point from $\mathbb{R}$ according to the normal distribution $N(0, \sigma)$ with mean $0$ and standard deviation $\sigma$.
The point is that this extra layer of randomness doesn’t really make the process any more random: it still just reduces to a random way of choosing points from $X$. In other words:
a random way of choosing a random way of choosing points from $X$
gives rise canonically to
a random way of choosing points from $X$.
This is, in essence, the idea behind the Giry monad. As I said, I don’t know if it really has any relevance to your comment; it’s just a hunch.
Posted by: Tom Leinster on May 9, 2012 5:20 AM | Permalink | Reply to this
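A small simulation makes the "flattening" described in the comment above concrete: sampling in two stages (a fair die picks a standard deviation, then a normal draw is made) is indistinguishable from sampling once from the corresponding single mixture distribution. The die-and-normal example is taken from the comment; the code itself is an editorial illustration, not from the post.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 200_000

# Two-stage sampling: first choose a distribution at random (a fair die picks
# sigma), then choose a point from N(0, sigma).
sigmas = rng.integers(1, 7, size=n)
samples = rng.normal(0.0, sigmas)

# The "flattened" single distribution is the equal-weight mixture of
# N(0, 1), ..., N(0, 6); its CDF is the average of six normal CDFs.
def mixture_cdf(x):
    return np.mean([0.5 * (1 + erf(x / (s * sqrt(2)))) for s in range(1, 7)])

# The empirical CDF of the two-stage samples matches the mixture CDF.
for x in [-3.0, 0.0, 1.0, 4.0]:
    print(x, np.mean(samples <= x), mixture_cdf(x))
```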
Re: The Mathematics of Biodiversity
This is, in essence, the idea behind the Giry monad.
I haven’t yet got around to understanding the Giry monad, but I feel like this gives me a good place to start, because it a familiar idea to me which I already know how to think about in my own
terms. Topological subtleties aside, it just says that the space of probability measures on $X$ is convex.
Posted by: Mark Meckes on May 10, 2012 12:50 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Topological subtleties aside, it just says that the space of probability measures on $X$ is convex.
Yes, I suppose it does!
If I were to give a slightly more detailed sketch of the idea behind the Giry monad, I’d add the following two things:
• each point of $X$ gives rise canonically to a random way of choosing points from $X$ (namely, “always choose that point”)
• any map $X \to Y$ (whatever that means) gives rise canonically to a map from (random ways of choosing points from $X$) to (random ways of choosing points from $Y$).
But these are kind of trivial compared to what I said in my previous comment.
Posted by: Tom Leinster on May 10, 2012 5:41 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
You might want to take a look at the article I link to here – A Categorical Foundation for Bayesian Probability.
Posted by: David Corfield on May 11, 2012 9:19 AM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Thanks — I did catch that link in the other thread and put the paper high on my to-read list. (Unfortunately, my progress through that list is likely to be quite slow for the foreseeable future (but
then, isn’t everyone’s?).)
Posted by: Mark Meckes on May 11, 2012 1:51 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
If you literally have a to-read list, and actually use it, I think you’re already much more organized than most of us.
Posted by: Tom Leinster on May 11, 2012 4:46 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
I do literally have a to-read list, in fact two of them (papers and books). Whether I actually use them in a meaningful way is another matter.
Posted by: Mark Meckes on May 11, 2012 5:03 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Urs wrote…
But I am imagining that the mathematics of diversity can somehow give more universal answers?
I hope not. I view it as part of the general idea of decategorification.
Can you give me a rough idea of what “mathematics of diversity” can accomplish? What would be a typical theorem in diversity-theory? What do we learn from diversity theory? How is diversity
theory different from just statistics?
These are excellent questions that deserve long answers. I’ll try to restrain myself.
The first thing is that there are deep and difficult questions that have nothing to do with statistics. In fact, my own involvement in the mathematics of diversity is entirely non-statistical. There
are very serious statistical challenges in measuring diversity: for example, how can you possibly guess the number of species too rare to show up in your sample? But personally, my interests lie
Earlier this year, I gave a talk about diversity measurement at the (British) National Centre for Statistical Ecology. (I had to come clean and begin by admitting that I knew almost no statistics and
almost no ecology.) At some point in the talk I showed a slide saying something like “we assume that our community is fully censused”, causing laughter from the audience. But the point is that even
if you know about every last beetle in your community, producing meaningful invariants is still non-trivial.
So, if it’s not statistical, what is it?
Maybe I’ll say more another time, but for now I’ll just point out the information-theoretic connections. Urs, you’ve probably seen enough posts about this to know that there’s a close connection with
entropy. An ecological community of $n$ species can be crudely modelled as a probability distribution
$p = (p_1, \ldots, p_n),$
where $p_i$ represents the relative abundance (or proportion) of the $i$th species. Ecologists often measure the diversity of the community as the Shannon entropy of the distribution, namely
$H(p) = -\sum p_i \log(p_i),$
or (better) as its exponential,
$e^{H(p)} = p_1^{-p_1} p_2^{-p_2} \cdots p_n^{-p_n}.$
Then there are various related entropies, such as the Rényi and Tsallis entropies, appearing in the physics, information theory and statistics literature. Many of them have appeared in the ecology
literature too.
Are these quantities really relevant ecologically? Well, that’s been the subject of lots of debate. Ultimately what we want is some theorem saying: “any diversity measure satisfying conditions X, Y
and Z must be one of the following”. There’s a long tradition of such theorems in information theory, tapping into the theory of functional equations. Some of the conditions involved are clearly
well-motivated ecologically, and some are more tenuous.
In my opinion, the most incisive work on ecological diversity measurement rises high above ecology. It becomes about something far more general, something mathematically universal. That’s really why
I’m interested, attractive as the ecological applications may be.
Here’s a recent example. Take an ecological community partitioned into $m$ geographical areas or “subcommunities”. Ecologists have long asked: how much of the whole community’s diversity can be
attributed to the diversity within the individual subcommunities, and how much to the variation between the subcommunities? You can imagine this might affect decisions on how to allocate resources
for conservation.
The average diversity within each subcommunities is traditionally called the $\alpha$-diversity, the diversity between the subcommunities is called the $\beta$-diversity. Those are loose descriptions
only. To turn them into precise quantities is no mean feat, and in fact people did it wrongly for several decades.
It was shown in 2007 (by sometime Café contributor Lou Jost) that, in fact, if you want $\alpha$- and $\beta$-diversity to be independent in an intuitively obvious sense, then there’s only one
possible way to define them. It’s essentially a theorem about functional equations, but as far as I know it’s not one in the functional equation literature. And it’s a definitive answer; it’s the
canonical way of partitioning diversity.
(I should mention that a similar result had been obtained in the late 1970s, by the Canadian statistician Rick Routledge, unknown to Jost at the time. But either Routledge didn’t realize the
significance of his own work, or he was rather too quiet about it.)
I want to say much more, especially about how this links in to the theory of magnitude/Euler characteristic of enriched categories and lax colimits. But I think this comment is long enough already.
Posted by: Tom Leinster on April 30, 2012 3:53 AM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Thanks for the reply, Tom!
I wrote:
But I am imagining that the mathematics of diversity can somehow give more universal answers?
Your first reaction to this is to say…
I hope not.
…but I gather there is some misunderstanding between us at this point about which hopes are being discussed, because right afterwards you do allude to precisely such more universal answers, when you
write, explicitly:
the most incisive work on ecological diversity measurement rises high above ecology. It becomes about something far more general, something mathematically universal.
and before that
Ultimately what we want is some theorem saying: “any diversity measure satisfying conditions X, Y and Z must be one of the following”.
Concerning this last point: how would you describe to an information-theorist the difference between information theory and diversity theory?
(That’s probably the question I should have asked instead of “How is diversity theory different from just statistics?”)
Posted by: Urs Schreiber on May 2, 2012 8:22 PM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
I gather there is some misunderstanding
Ah, yes. I wasn’t clear. When I wrote “I hope not”, I meant that I hoped you weren’t merely imagining that the mathematics of diversity can somehow give more universal answers. I hope it really can.
Like, if Elvis walks into the room, you say “is that Elvis, or am I imagining it?”, and I reply, “no, you’re not imagining it!”
Concerning this last point: how would you describe to an information-theorist the difference between information theory and diversity theory?
One difference is that there isn’t really a known thing called “diversity theory” — yet.
Another is in the applications that shape the two areas. There is, of course, a lot of biological literature on diversity measures, with varying degrees of mathematical sensitivity and varying
degrees of biological relevance. (I don’t think those two things are in opposition; on the contrary, I think they pull in the same direction. But, of course, people have different backgrounds.)
There’s also related work in other fields, especially economics (which I know roughly zero about).
On the other hand, information theory grew out of communication theory, and has been nourished by its interactions with statistical mechanics.
Forgetting the applications and thinking about the pure mathematics of it, I have a certain vision of what “diversity theory” is/could be, though here’s probably almost no one else on earth who sees
it the same way. My personal view is that it’s a part of some general story about cardinality-like invariants. It’s the part concerning probability distributions.
I said a lot about how diversity fits into a general story about invariants of size in my two posts on “Entropy, diversity and cardinality”, back in 2008. A bit more recently, a theorem emerged
confirming that there’s a substantial connection here. To a zeroth approximation, it says that “magnitude is maximum diversity”: the magnitude of a metric space equals the maximum diversity of a
probability distribution on it. That’s not quite right, but it conveys the general flavour.
Thanks for asking!
Posted by: Tom Leinster on May 3, 2012 6:02 AM | Permalink | Reply to this
Re: The Mathematics of Biodiversity
Hello all,
It seems like there could be a few people here who might be interested in our paper “Hyperconvexity and Tight Span Theory for Diversities” which is available here:
I wrote it a couple of years ago with Paul Tupper (Simon Fraser), and it is slowly creeping through the journal acceptance process.
I’m afraid the term ‘diversity’ might be getting a bit overloaded. For us, a diversity is a pair $(X,\delta)$ where $X$ is a set and $\delta$ is a non-negative function defined on finite subsets
$\delta(A) = 0 \iff |A| \leq 1$
if $B eq \emptyset$ then $\delta(A \cup C) \leq \delta(A \cup B) + \delta(B \cup C)$.
You’ll see that restricting $\delta$ to 2-sets gives the standard metric axioms. We show that this is the ‘natural’ abstraction for {\em phylogenetic diversities} (hence the name).
The main contribution, however, is that the useful (and rather beautiful) theory of injective hulls for metric spaces generalises quite naturally to diversities. In phylogenetics (my ‘home turf’) the
injective hull has lead to all sorts of methods for analysing and visualising evolutionary data. Originally, the idea came from work trying to extend the Hahn-Banach theory to arbitrary metric
Incidentally, the injective hull is exactly the injective hull of category theory (and we introduce the category theory of diversities).
We balance up a fairly large chunk of theory with some applications and links to other areas.
Sorry to blow my (our) own trumpet, but there had been questions about combining diversity and category theory…..
Posted by: David Bryant on May 8, 2012 4:41 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2012/04/the_mathematics_of_biodiversit.html","timestamp":"2014-04-19T09:24:03Z","content_type":null,"content_length":"62662","record_id":"<urn:uuid:939fed50-a9f0-4089-a740-2a8fb694c8aa>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Matlab is a tool for doing numerical computations with matrices and vectors.
Colin J. Williams cjw at sympatico.ca
Thu Mar 10 17:54:20 CST 2005
Travis Oliphant wrote:
>>> I remember his work. I really liked many of his suggestions, though
>>> it took him a while to recognize that a Matrix class has been
>>> distributed with Numeric from very early on.
>> numpy.pdf dated 03-07-18 has
>> "For those users, the Matrix class provides a more intuitive
>> interface. We defer discussion of the Matrix class until later."
> [snip]
>> On the same page there is:
>> "Matrix.py
>> The Matrix.py python module defines a class Matrix which is a
>> subclass of UserArray. The only differences
>> between Matrix instances and UserArray instances is that the *
>> operator on Matrix performs a
>> matrix multiplication, as opposed to element-wise multiplication,
>> and that the power operator ** is disallowed
>> for Matrix instances."
>> In view of the above, I can understand why Huaiyu Zhu took a while.
>> His proposal was much more ambitious.
> There is always a lag between documentation and implementation. I
> would be interested to understand what "more ambitious" elements are
> still not in Numeric's Matrix object (besides the addition of a
> language operator of course).
>> Yes, I know that the power operator is implemented and that there is
>> a random matrix but I hope that some attention is given to the
>> functionality PyMatrix. I recognize that the implementation has some
>> weakneses.
> Which aspects are you most interested in? I would be happy if you
> would consider placing something like PyMatrix under scipy_core
> instead of developing it separately.
Yes, after the dust of the current activity settles, I would certainly
be interested in exploring this although I would see a closer
association with Numeric3 than with scipy.
>>> Yes, it needed work, and a few of his ideas were picked up on and
>>> included in Numeric's Matrix object.
>> I suggest that this overstates what was picked up.
> I disagree. I was the one who picked them up and I spent a bit of
> time doing it. I implemented the power method, the ability to build
> matrices in blocks, the string processing for building matrices, and a
> lot of the special attribute names for transpose, hermitian transpose,
> and so forth.
> There may be some attributes that weren't picked up, and a discussion
> of which attributes are most important is warranted.
>> Good, on both scores. I hope that the PEP will set out these ideas.
> You are probably in a better position time-wise to outline what you
> think belongs in a Matrix class. I look forward to borrowing your
> ideas for inclusion in scipy_core.
My thoughts are largely in the current implementation of PyMatrix.
Below is an extract from the most recent announcement.
I propose to explore the changes needed to use Numeric3 with the new
ufuncs. Do you have any feel for when Alpha binary versions will likely
be available?
Colin W.
Downloads in the form of a Windows Installer (Inno) and a zip file are
available at:
An /Introduction to PyMatrix/ is available:
Information on the functions and methods of the matrix module is given at:
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2005-March/016743.html","timestamp":"2014-04-21T10:12:53Z","content_type":null,"content_length":"7422","record_id":"<urn:uuid:c3874662-1569-49a8-9740-503c6ea59dc9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
One can't background-independently localize field operators in QG
...because the "basis" of coherent states is overcomplete...
Let me begin with something simple. John Preskill asked you "What's inside a black hole?" and offered you four options:
A. An unlimited amount of stuff.
B. Nothing at all.
C. A huge but finite amount of stuff, which is also outside the black hole.
D. None of the above.
Well, the option (D) may have been at the beginning and an obvious suboption of (D), "The black hole interior is a region just like any other region and independent from others", should have been
offered as a special choice (E). A surprising result is that (E) is almost certainly wrong. Instead, (C) is right – at least if we omit the very highly curved region near the singularity that could
justify (A) in a complicated way and if we allow the definition of a black hole to cover its rare microstates – if we only allowed the most generic black hole microstates, the answer would be (B):
the interior has to be empty.
Well, (B) may also be interpreted as a claim allowing a firewall, in which case it's wrong in general (the firewall isn't necessary or generic) but of course that there are rare black hole
microstates that contain something that burns you near the horizon much like there are rare black hole microstates with a bunny in the interior.
This point is simple but often misunderstood. A black hole is defined by its event horizon but it doesn't follow that the interior has to be empty. There can be a bunny in it. However, among
microstates of localized matter, a black hole with a bunny is an exponentially rarer class of microstates. Most of the mass \(M\pm \delta M/2\) black hole microstates look empty – that's why the
entropy-increasing evolution converges towards these states as the black hole keeps on devouring the surrounding matter to clean its interior (and vicinity). But don't make a mistake about it: a
bunny in a black hole (or a nonzero occupation number of freely falling field operator modes) is unlikely yet possible.
But let me switch to a more complicated question.
Suvrat Raju's talk
at a recent
Fuzz-Or-Fire workshop in Santa Barbara
, a core of the pro-firewall/anti-firewall conflict became rather visible. On one hand, the Papadodimas-Raju "state-dependence" of the definition of the black hole interior field operators seems
unacceptable to the firewall champions although
it looks pretty much inevitable to many of us
On the other hand, this disagreement may be described as a criticism of Polchinski's and pals' alternative: They believe that the bulk field operators and especially their location in a quantum
theory with a dynamical geometry may be defined by a recipe described operationally in a "background-independent way". For example, start at the AdS boundary that must be close to the empty AdS
geometry, pick a direction as if it were an empty AdS, and go in this direction for a certain proper time or proper length. Then you turn in the direction of the greatest curvature (defined in some
other way) and walk for 5 meters of proper distance or 2 microseconds of proper time, and so on. You get to a point and there is a scalar field at that point that you may call \(\phi(x,y,z,t)\) and
ask about its eigenvalues or what it does when it acts on a state \(\ket\chi\) etc.
The classical counterpart of such a prescription sounds totally OK in classical general relativity. You may imagine that one particular spacetime geometry is the "right one" and whatever the
spacetime geometry is, the operational procedure involving the proper times, proper distances, and angles may be followed and the right value of the field at the point we just found becomes a
well-defined \(c\)-number (let's talk about scalar functions of local tensor fields and their derivatives only).
Joe Polchinski and others believe that the same background-independent operational definition of field operators at various points may be used in quantum gravity, too. This belief is incorrect. There
are several ways to see why. They seem very different but ultimately they are rooted in the same general properties or at least "spirit" of quantum gravity that is imposed on us by consistency.
To maintain their belief that the background-independent localization of field operators (and therefore state-independence) is possible, the firewall advocates must assume that
• the metric tensor is a good and precisely and uniquely defined degree of freedom (quantum observable) at arbitrarily short distances
• every ket vector in a quantum gravity theory may be uniquely rewritten as a sum of ket vectors each of which comes with a well-defined classical geometry
Both of these assumptions are incorrect, however. In some sense, the second problem is more damning for the firewall advocates' plans than the first one.
The metric tensor isn't any good at (sub)Planckian distances
The first point has to be true because they want to determine proper distances. You need a metric tensor for that. Because the definition must work even in rather general, potentially extreme
environments near collapsing and other black holes where we often need an exponential precision to locate the events (note the coordinate singularity at the horizon etc.) while we have to resist high
matter densities etc., the definition of the metric tensor has to be really exact for Joe's and pals' background-independent operational definitions of the points in a general spacetime to make any
However, quantum gravity doesn't allow you things like that. The metric tensor is only good and well-defined in an effective description of quantum gravity. At shorter distances, it just ceases to be
a good observable. Well-defined observables in quantum gravity are different; the gauge fields in the \(\NNN=4\) Yang-Mills theory involved in the most famous example of the AdS/CFT correspondence
are an example. The matrices \(X,P,\Theta\) in Matrix Theory are another example.
Even if you had something like a "closed string field theory" that would apparently contain the metric tensor "everywhere", you would have to solve the problems of the mixing of its field modes with
some other modes of fields arising from heavy excited string states (with the same charges and spin). To make the procedure well-defined, you would have to overcome the problem that there are many
ways (related by field definitions involving all the massive string fields) how to define the metric tensor. They may be thought of as different "renormalization schemes". You may imagine that a
different "renormalization scheme" amounts to switching the metric from something like the string frame to something like the Einstein frame but the rescaling depends on the massive scalar fields \(h
\) in string/M-theory rather than the dilaton \(\phi_D\). Classically, \(h\) is constant so this rescaling doesn't change much. However, quantum mechanically, \(h\) is a dynamical, fluctuating field
so an \(h\)-dependent redefinition of the metric tensor does matter.
But even if your procedure directing someone to walk over some proper distances in a general spacetime etc. did specify a particular "renormalization scheme", it would still be no good because at
very short, near-Planckian distances, the geometry becomes brutally fluctuating and the proper distances and times, when accurately measured over the violent landscape of the quantum foam, are
probably divergent and/or ill-defined. So Joe's prescription would break down.
My point is that whatever "renormalization scheme" you pick, \(g_{\mu\nu}(x,y,z,t)\) is a fluctuating degree of freedom that has nonzero probability amplitudes to be nonzero and substantial even in
the vacuum state of the spacetime. By dimensional analysis, the magnitude of the contribution \(\delta L\) of these fluctuations to a proper distance \(L\) comparable to the Planck length is
comparable to the Planck length i.e. 100 percent; I believe that this dimensional analysis, assuming \(g_s=O(1)\), is OK even in string theory despite its ability to "calm down" the quantum foam. You
simply shouldn't assume that the flat and peaceful spacetime offers you good expectations about the behavior of proper distances, times, and angles near/below the Planck scale. Try to follow Joe's
algorithms on the quantum foam (the picture at the bottom):
It's pretty obvious that you get caught in the weird tunnels and valleys of this quantum foam whatever recipe you choose. What you actually need is a geometric prescription that is allowed to use the
smooth, nearly flat spacetime similar to the upper part of the figure. But using the proper distances and proper times calculated from the dynamic metric tensor just don't give you anything like a
flat space even in the vacuum-like ket vectors. The quantum foam picture at the bottom of the picture above is an eigenstate of \(g_{\mu\nu}(x,y,z,0)\) and even the Minkowski-like vacuum state in
quantum gravity is a superposition of states whose geometry looks like this. You won't really get anywhere with the background-independent protocols to isolate a location in the spacetime.
Non-uniqueness of a "geometry" associated with a ket vector in QG
But it's the second complaint against Joe's paradigm, if you allow me to call it in this way, that seems more damning and conceptual. You could imagine that for some unknown reasons, string theory
calms down the quantum foam so nicely that the sub-Planckian terrain may still be imagined as a smooth space rather than the quantum foam and the procedure could get through with a potentially
natural choice of the "renormalization scheme".
However, the procedure will still fail due to some facts that don't depend on the short-distance, Planckian physics. What are these general problems with the background-independent approach to the
location of points in a dynamically curved quantum spacetime?
For the sake of simplicity, let's assume that the procedure "go here for 5 meters, turn left etc." is only used to move through a slice of the spacetime at a fixed value of the coordinate \(t\),
whatever it is. If we considered trajectories deviating from the slice, we would open yet another can of worms because the metric tensor doesn't commute with its time derivatives (the uncertainty
principle!) so it's just downright impossible to imagine that these behave classically in any ket vector (this assumption is as wrong as the assumption that arbitrarily sharp trajectories in the
quantum phase space make sense).
Fine. Polchinski's procedure is meant to tell you what is the action of an operator \(\phi(P)\) on a general quantum gravity ket vector \(\ket\psi\). The point \(P\) is specified by an operational,
background-independent procedure of the type "go for 5 meters, turn left, do this and that". Now, Joe believes that the action\[
\] is another well-defined ket vector. We can see it can't be the case. Why? Well, the vector \(\ket\psi\) isn't an eigenstate of the metric tensor operators \(g_{\mu\nu}(Q)\) at the relevant points
\(Q\) that may appear along the trajectory. To avoid the immediate ill-definedness of a recipe based on proper distances, we must decompose \(\ket\psi\) into eigenstates of the metric tensor
variables \(g_{\mu\nu}(Q)\):\[
\ket\psi = \sum_j \ket{\gamma_j}
\] Well, the sum could actually be an integral and the normal people would tend to normalize \(\ket{\gamma_j}\) to unity and write the normalization factor as a special coefficient, and so on, but
the equation above is good enough. In the previous section, I discussed the problems resulting from the violent character of the geometry in the \(g_{\mu\nu}\)-eigenstate. But even if you forget
about these short-distance troubles and ambiguities and you assume that the proper distances through the apparent quantum foam behave just like your long-distance intuition suggests (up to a
universal renormalization coefficient for the distances), you face insurmountable problems, even at long distances. They're related to the short-distance problems discussed previously but the
arguments below hopefully make their independence on the UV physics more obvious.
Imagine that we want to apply the procedure to the most peaceful yet nontrivial state we can imagine, a smooth macroscopic gravitational wave in an otherwise empty spacetime. This state containing a
gravitational wave may be written as a coherent state\[
\ket\psi = \exp\left[\int d^d k\,\alpha(k) c^\dagger(k)\right] \ket 0.
\] It's the exponential of a superposition of creation operators for some graviton states. As a homework exercise ;-), add sums over the polarizations and other indices and everything else you like
or need. Now, additional particles may be created on top of the state \(\ket\psi\) and I think that Polchinski would say that the right way to apply his procedure is for the distances in the states
that contain a few particles on top of the curved spacetime \(\ket\psi\) to use the geometry of this curved spacetime when we try to follow the procedure to "find the location in a general
You should already feel uncomfortable at this point because the state \(\ket\psi\) is an excitation of the Minkowski vacuum state, too. Rewrite the exponential as a Taylor expansion if you want to
make the point more suggestive. Gravitons are particles, too. You might say that it couldn't be a hopeless idea to use the flat spacetime's metric when you try to locate points in the spacetime
except that it would also be obvious why the relationship between the local operators on top of the excited coherent, curved space \(\ket\psi\) with the local operators on top of the Minkowski space
\(\ket 0\) is extremely convoluted.
So let me assume that Polchinski et al. really want to use the curved geometry from the coherent state \(\ket\psi\) when they follow their background-independent procedure. It means that to find the
action of a local operator \(\phi(P)\) on \(\ket\chi\), they need to decompose \(\ket\chi\) into "matter-like" (and therefore geometry unchanging) excitations of coherent states of the type \(\ket\
psi\) above for which the metric tensor is known.
The trouble with this background-independent physics is that the "basis" of the harmonic oscillator Hilbert space consisting of the coherent states is overcomplete.
basic introductions
to coherent states if you have any doubt about the statement. So even if you restrict your calculations to ket vectors \(\ket\chi\) that only contain purely gravitational excitations, you will need
"the" decomposition of such states to coherent vectors to identify \(\phi(P)\) but "the" decomposition actually isn't unique.
This is a problem that makes your background-independent procedure break down even for states \(\ket\chi\) that are as simple as a low-energy, single-graviton excitation of the Minkowski vacuum
state. On one hand, you could consider this excitation to only change the background geometry infinitesimally and use the Minkowski geometry to follow the procedure. The first excited state of a
harmonic oscillator is proportional to a superposition of coherent operators weighted by \(\delta'(a)\) all of which are infinitesimally close to the origin of the phase space (interpreted as a flat
space in the Fock space of gravitons). On the other hand, you may rewrite this first excitation of the harmonic oscillator as some linear superposition of coherent states centered elsewhere, even
very far from the center at zero (effectively a linear superposition of highly curved spacetimes). It's clear that the point \(P\) where you get by following these spacetimes will depend on the way
how you decompose your states to the coherent states. This way isn't unique and the infinitely many choices differ by differences that are unbounded from above.
If the procedure doesn't work for single-graviton states, you may be sure that the problems become exponentially worse if you try to apply the procedure to a black hole spacetime with a significant
density of mass, coordinate singularities, and many other things. It's completely hopeless.
Incidentally, if you tried to replace the decomposition into coherent states by a decomposition into \(g_{\mu\nu}\)-eigenstates – in the harmonic oscillator analogy, \(x\)-eigenstates – discussed at
the beginning, you could cure the overcompleteness problem of the basis but you would also totally delocalize the vectors in the values of \(\partial_t g_{\mu\nu}\) which means that the time-like
geodesics of the recipe would probably become infinitely singular (the coherent states naturally balance the needs of the metric in the spatial and temporal directions); you wouldn't be guaranteed
that the proper distances are well-behaved and finite at short distances. At the end, any attempt to define the recipe will fail because what all of them actually contradict is the equivalence
principle: they are assuming that the spacetime geometry is classical enough so that the proper length/time of some generic trajectories going in many directions may be accurately measured which
isn't so.
An alternative for the background-independent operational localization protocols
Once I have shown that the background-independent way of identifying locations of operators isn't possible, it may seem polite for me to tell you what's a legitimate replacement of it. We could be
saying that no calculations based on strictly local operators attached to "points" are possible in quantum gravity. Except that I think that they are possible. However, you have to assume (manually
and, whenever possible, cleverly choose) a background – a particular "curved space" vacuum-like state of the quantum gravitational theory which may also be obtained as a coherent state built from
other vacuum-like states – and construct many other microstates out of this vacuum-like state by the action of a "finite" (not scaling with various parameters called \(N\) that would be increasing
functions of the curvature radius etc.) number of field operators where these field operators are behaving much like they are behaving in the flat space, at least locally in regions where the
curvature may be neglected.
Papadodimas and Raju
explain these conditions more quantitatively. In some sense, I believe that the ER-EPR correspondence with its ER bridges is a special visualizable "Ansatz" for solutions of such constraints.
Here I must say that people like Lee Smolin have been saying totally idiotic things about "
background independence
for years
. They would even criticize string theory for being able to write the Hilbert space of quantum gravity as a de facto Fock space built upon a particular background. Remember all the silly demagogy
that no backgrounds can ever be talked about because GR imposes a democracy between all of them, and all this rubbish.
Feel free to impose a ban on talking about backgrounds but then you will be unable to make any calculations that may be compared with the experiments, too. The adjective "background-independent" may
be given many meanings and some of them are respectable, at least in some contexts, but be sure that if your interpretation is that "we can't use any backgrounds in calculations at all", then you are
throwing the baby out with the bath water.
Because I properly learned many of the computational techniques that existentially depend on the choice of a background (in the spacetime or the world sheet) from Joe Polchinski, I wouldn't have
believed 14 years ago that he would ever be saying things "remotely similar" to the Smolinian rubbish on the background independence.
If we want to organize a Hilbert space (or, more typically, its subspace) as some collection of states with a spatial interpretation (states that tell us what is being observed here or there), then
we simply need to associate the microstates with a background. We also need to gauge-fix the diffeomorphism gauge symmetry or redundancy, if you wish. Only when it's done, it's possible to define how
local field operators act in between the states in this subspace of the Hilbert space. It's clear that if you create too many things in your background, or if you deform the geometry by too many
gravitons, to be more specific, the added gravitons or the backreaction to the added matter make the original background's geometry an unnatural (or perhaps more accurately, practically not too
useful) way to measure distances and times. You should better pick a different background to parameterize the relevant portion of the Hilbert space if you consider states whose geometry is too
different from the original background. But you must choose
background because trying to leave the "job to measure the geometry" on the microstates without a choice of background requires a decomposition of the gravitons' Fock space states to coherent states
which isn't unique.
ER-EPR's definitions of operators are clearly background-dependent, too
The state-dependence – well, really background-dependence – of the definitions of the black hole interior (and perhaps all other) local field operators is something most tightly associated with the
insights by Papadodimas and Raju. But I believe that the
Maldacena-Susskind ER-EPR correspondence
makes this inevitable background dependence equally if not more self-evident.
It's simple. They say that the Hilbert space of one Einstein-Rosen bridge (a pair of black holes geometrically connected by a non-traversable wormhole) is the same Hilbert space as the Hilbert space
of two faraway black holes (that are allowed to be entangled). Clearly, these two pictures of the same Hilbert space envision completely different background spacetimes – the spacetimes have
different topologies, in fact. So the definitions of field operators in the black hole interior(s) are clearly different in these two pictures. In other words, the definition of local field operators
depends on whether you describe the same Hilbert space as two black holes that can get entangled later (but you're "expanding" around the microstates for which the entanglement is low and the black
holes are assumed to be independent to start with) or the Hilbert space of a single Einstein-Rosen bridge with just "one interior" (you're expanding around a particular microstate for which the
entanglement entropy is maximized; note that there can be many such maximally entangled microstates for which the bridge is correspondingly "twisted"). In other words, the definition of the local
field operators is background-dependent, i.e. dependent on the choice of the spacetime background you have to make manually and subjectively before you start your calculations. It's clear because the
local operators depend even the topology which is totally different in the two choices. The two black holes have two interiors while the Einstein-Rosen bridge only has one component of the interior.
For various situations or classes of microstates, one of the two descriptions is more convenient or practical than the other description, but there can't be a universal law that would make one
description more correct than the other one a priori. You must predecide how many components the interior(s) has (have) before you start to talk about the field operators in the interior(s).
Finally, I must say that I believe that most of what I wrote above aren't my exclusive original insights but just a reinterpretation of some insights made by Papadodimas and Raju which uses different
words. If this is not a legit way to describe what they concluded, they will tell me and I will inform you, too.
I like to think about the ER-EPR correspondence but again, I believe it is just a more specific, visualizable "Ansatz" how to write the field operators at different places and the general,
non-visualized principles for the operators were already found by Raju and Papadodimas (and perhaps others whom I may have slightly overlooked). The Raju-Papadodimas conditions for the mutual
relations between the field operators start to break down once you arrive to short enough distances where the Einstein-Rosen bridges with the Hawking radiation become visible.
snail feedback (8) :
like! I will comment on this! With LM's permission, I might have some questions in 1-2 days... :)
This is a nice inspiring post that gives me something to think about, I'll certainly have to reread it another day before midnight :-)
In particular the renormalization analogy of how Polchinsky and colleagues try to obtain background independent definition of local oparators inside a black hole picks me. So could one also see
that what they try does not work from the fact, that gravity is not renormalizable?
Sorry if I am off the mark ... :-/
I think you must be right - this procedure must be provable to be impossible from GR's non-renormalizability, too, although my attempted proof would probably sound chaotic and imcomplete at this
point. But the non-renormalizability is a technical way to see that the metric tensor can't be a good variable up to arbitrarily short distances, one of the requirements of Polchinski's
prescription to be doable with the fine precision that I pointed out.
Hi Lubos,
Kind off topic: do you find the following article in Nature
balanced, reflecting the current status of ideas? Personally I don’t. Theories in the fringe of physics research, attracting little attention, are overrepresented.
Dear Giotis, I read it yesterday and found it much better than the average. It's not about all of theoretical physics, of course, but I think that fringe theories and a core of the respectable
current research are given about 50% each which is a much higher percentage for the credible physics research than average articles about similar topics. Mark van Raamsdonk may smile as loop
quantum gravity and CDT - which manifestly have nothing to say about the entanglement in QG or thermodynamics in QG - were placed in the middle of his topics as if they were solving some problem
in the heart of his thinking.
Now, I don't think that Mark is the only or unchallenged researcher in similar matters but I surely do find - and have found for years - his papers sensible, original, and careful
and I think he's pretty much the forefather of most recent papers in the whole field that talk about the "origin of spacetime" and its relationships with quantum information (those papers) that
are not obviously wrong.
hear hear! :D
Ok, my comments:
First, let me note that I think analytic continuation has a nice collateral effect when analyzing the above problems. Mathematically speaking the definition of a metric is somehow more
restrictive when dealing with topological spaces. Not the same is true for the definition of continuity which can be constructed easily without the use of notions like distance or "subtraction of
positions". In this sense the use of continuity and continuous mappings is essential. Next, of course, the holographic principle states relatively clearly that field theories over-count the
degrees of freedom. I may wonder if other dualities may have some interesting effects on the problems described... This being stated I understand the BH-horizon as a surface that encodes an
amount of information (the whole of it, by that means, but in a different, more compact way). The holographic principles assures us (more or less) that information must be representable in that
way but it doesn't say it is the only way one can represent it. One can, in the end, represent it quite "inefficiently" giving the "image" of a N dimensional world as we see it around us or as we
may see it inside a BH.
Of course I appreciate the fact that you finally understood what I mean by "geometrical uncertainty principles" and used it in the argument. Of course I agree with my idea ;)
I also have some inclination for the beauty of the arguments related to how one could infer fundamental restrictions on knowledge from apparent engineering type problems...
I would like to understand more about this whole idea of "background independence". If I get it right it cannot have the interpretation that one "cannot use a background"... Of course one can and
the problem appears to me analogous to the connections between geometry and topology. Some aspects of differential geometry can be related to topology, others not (but my analogy may be
Excellent post! I especcially liked the part about changing the background by perturbing the metric. | {"url":"http://motls.blogspot.com/2013/08/one-cant-background-independently.html","timestamp":"2014-04-18T06:09:02Z","content_type":null,"content_length":"227267","record_id":"<urn:uuid:68c04aed-a498-44e5-b694-7ab73cc3fa6b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: how can i make my loop run faster?
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: how can i make my loop run faster?
From Partho Sarkar <partho.ss+lists@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: how can i make my loop run faster?
Date Mon, 19 Sep 2011 10:36:09 +0530
You don't seem to be actually making any use of the panel structure of
the data. Stata has very neat built-in procedures for dealing with
such data.
Very briefly, 2 pointers (I am ignoring the special wrinkle in your
problem that you want to run 20 seoarate regressions for each "firm
i-period t" pair- you would have to adapt the procedure accordingly):
A. I would use -tsfill, full- to fill in the time values and balance the panel.
B. If you use tsset panelvar datavar (or xtset), where panelvar is
your panel identifier, and datevar the date variable, you can use:
statsby _b _se, by(panelvar): regress y x
to do all the regressions in one go (assuming a single regression for
each "firm i-period t" pair), rather than separately within a long
loop. You can collect the results saved in r-class macros, as with
_b & _se above. See -help statsby-
Having said all that, I have never tried to run a set of regressions
with 30,000 firms & 200 time periods in a single run of a program!!!
I suspect this will be painfully slow no matter how efficient your
code. An obvious alternative would be to split the firms into, say, 10
subsets, do the regression for each subset, and put all the results
Hope this helps
Partho Sarkar
Consultant Econometrician
Indicus Analytics
New Delhi, India
On Mon, Sep 19, 2011 at 5:22 AM, Stefano Rossi <sr525@cornell.edu> wrote:
> Dear Statalist Users,
> I wonder if you can help me make a faster loop?
> I have an unbalanced panel of about 30,000 firms and 200 periods, and for each "firm i-period t" pair I need to run 10 regressions on the 12 observations from t-1 to t-12 of the same firm i, and another 10 regressions on the observations from t+1 to t+12 of the same firm i. I have come up with the following program, which works well as it does what it should do, but it is very slow (due to the many ifs I suspect) - here's a simplified version of it with just two regressions:
> forval z = 1/30000 {
> levelsof period if firm==`z', local(sample)
> foreach j of local sample {
> local k = `j' - 13
> capture reg y x if firm ==`z' & period<`j' & period>`k' & indicator==1
> if _rc==0 {
> predict y_hat, xb
> replace before = y_hat[_n-1] if firm == `z' & period == `j'
> drop y_hat
> }
> local w = `j' + 13
> capture reg y x if firm ==`z' & period>`j' & period<`w' & indicator==1
> if _rc==0 {
> predict y_hat, xb
> replace after = y_hat[_n+1] if firm == `z' & period == `j'
> drop y_hat
> }
> }
> }
> Right now, it takes several minutes for each firm, so if I run it for the whole sample it would take weeks.
> Is there any way to make it (a lot) faster?
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2011-09/msg00760.html","timestamp":"2014-04-19T05:04:59Z","content_type":null,"content_length":"11096","record_id":"<urn:uuid:c164a811-3aa2-4c72-83e1-964e66a67fe2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
write the expression in radical notation. with explanations please. x^2/3y^1/5
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f74df57e4b0f07ddab14acb","timestamp":"2014-04-16T10:22:44Z","content_type":null,"content_length":"56479","record_id":"<urn:uuid:37860387-3645-4064-9f69-dc2fd4f2f387>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
Obtaining fit indices (CFI, RMSEA, etc.) with FIML
Fri, 12/30/2011 - 21:43
I am trying to adapt the sample script for FIML estimation
Sorry for posting this in the wrong place. I couldn't figure out how to open a new topic. I am trying to adapt the sample script on FIML estimation for missing data. In the script below how do I
change the 2 in log2pi %*% 2 in the "firstHalfCalc" expression to represent the number of variables that have nonmissing data for that observation? In the example shown, 2 worked as the right number for
all observations for likelihood calculation since there was no missing data.
Sorry if this is a silly question. I am quite new to OpenMx.
# row objective specification
bivCorModel <- mxModel(model=bivCorModel,
    mxMatrix("Full", 1, 1, values = log(2*pi), name = "log2pi"),
    mxAlgebra(expression=log2pi %*% 2 + log(det(filteredExpCov)),
              name = "firstHalfCalc"),
    mxAlgebra(expression=(filteredDataRow - filteredExpMean) %&% solve(filteredExpCov),
              name = "secondHalfCalc"),
    mxAlgebra(expression=(firstHalfCalc + secondHalfCalc),
              name = "reduceAlgebra")
    # ... plus the remaining row-objective pieces from the sample script
)
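In other words, I think the constant needs to become k, the number of non-missing variables in that particular row, since each row's contribution to -2 log L is k*log(2*pi) + log(det(filteredExpCov)) + (filteredDataRow - filteredExpMean) %&% solve(filteredExpCov). Something like the sketch below is what I have in mind, though I have not verified that the row objective really exposes an existenceVector, so treat that name as an assumption on my part:
mxAlgebra(expression=log2pi %*% sum(existenceVector) + log(det(filteredExpCov)),
          name = "firstHalfCalc"),
# existenceVector is assumed to be the 0/1 indicator of which variables are observed in
# the current row, so sum(existenceVector) counts the non-missing variables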
Mon, 10/11/2010 - 10:19
It is supposed to be possible
It is supposed to be possible to obtain fit statistics with the summary() command. This involves running a saturated model (e.g., saturatedModel) so that it can be compared to a test model (e.g.,
testModel). Then use summary:
summary(testModel, SaturatedLikelihood=saturatedModel)
or summary(testModel, SaturatedLikelihood=132), where 132 is the numeric value of the -2 log likelihood from the saturated model.
However, when I used this for a FIML model, the summary() command returned a chi-square difference test, but not adjusted BIC or RMSEA (AIC and BIC values were from the testModel). It may be that
RMSEA is not computed for models with missing data. Does anyone know?
~ Angela
Mon, 10/11/2010 - 10:42
Hmm. Adjusted BIC shouldn't
Adjusted BIC shouldn't depend on the saturated model. Looks like we just plain forgot to include that one.
I don't know if not having RMSEA computed under FIML was a conscious choice. RMSEA technically isn't defined with missing data, though that doesn't stop people from using it.
Mon, 10/11/2010 - 15:56
Adjusted BIC has not been
Adjusted BIC has not been implemented. I think we were looking for volunteers to implement more fit statistics. This would be a good project for someone to start their involvement with the OpenMx
development. I think RMSEA is calculated as:
sqrt((chi / DoF - 1) / retval[['numObs']])
If the value inside of the sqrt() is NaN or negative, then RMSEA is assigned a value of NA. Could that be what's happening for your model?
Mon, 10/11/2010 - 16:04
That looks right, with the
That looks right, with the caveat that if (chi/Dof-1) is less than zero, RMSEA "should" return zero rather than NA. You'll sometimes see the formula stated as one of the following, invoking some type
of "maximum of" function to keep chi/df-1 (or equivalently, chi-df) non-negative.
sqrt(max(chi/df-1, 0)/n)
sqrt(max(chi-df, 0)/df/n)
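As a quick R sketch of that clamped version (here chi is the difference in -2 log likelihood between the fitted and saturated models, df the difference in their free parameters, and n the sample size):
rmsea <- function(chi, df, n) {
  # report 0 rather than NA when the model fits better than its df would predict
  sqrt(max(chi / df - 1, 0) / n)
}
rmsea(chi = 72.6, df = 8, n = 500)  # made-up numbers; roughly 0.13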
Mon, 10/04/2010 - 15:02
Yes, you are correct
Yes, you are correct regarding saturated model estimation. With missing data under FIML, we have to estimate the saturated model, where we could simply calculate it without missing data. This
estimation can get time-consuming, so we don't do it by default.
The process for you:
-run your model as usual,
-create a saturated model, allowing all variances, covariances and means to be freely estimated
-compare the two
The mxCompare function allows for the use of likelihood ratio tests and comparisons on AIC and BIC. RMSEA, CFI, etc must be calculated manually.
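To make the manual part concrete, here is a sketch with made-up numbers standing in for the -2 log likelihoods and free-parameter counts read off the summaries of the fitted model, the saturated model and, for CFI, an independence model with means and variances only (the caveats about these indices under FIML discussed elsewhere in this thread still apply):
m2LL   <- c(model = 13471.2, sat = 13398.6, indep = 14144.0)  # -2 log likelihoods (made up)
params <- c(model = 12,      sat = 20,      indep = 10)       # free parameter counts (made up)
n      <- 500
chi  <- m2LL["model"] - m2LL["sat"];  df  <- params["sat"] - params["model"]
chi0 <- m2LL["indep"] - m2LL["sat"];  df0 <- params["sat"] - params["indep"]
rmsea <- sqrt(max(chi / df - 1, 0) / n)
cfi   <- 1 - max(chi - df, 0) / max(chi0 - df0, chi - df, 0)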
Mon, 10/17/2011 - 01:21
Is there anything like a
Is there anything like a generic fit index helper function? All the fit indices I have seen rely on: chi^2/N/DF from the model, saturated, or null models. Is that oversimplified? Would there be
interest in such a function? If there is interest and I can rely on relatively consistent slots on the model object holding the likelihoods, DF, and N, I could write something and post it.
I appreciate that all fit indices may not make sense under all circumstances, particularly when using FIML. If the goal is to discourage use by requiring manual calculation, perhaps a succinct
statement indicating this with support could be written?
I am trying to push for change, but I would be interested in some discussion. I have had two clients now come into our universities consulting center for help using OpenMx, and they expect fit
indices to be computed. I am a little torn between computing them for them, or trying to explain why they may not make sense, to which I have gotten the response that Mplus, Amos (insert other SEM
package), compute it and taht reviewers expect it.
If such a generic way to compute fit indices were added, it seems like it may make sense to be an option in either summary or mxCompare. For example:
cor(dat <- prcomp(matrix(rnorm(1000), 200))$x %*%
chol(matrix(c(1, rep(c(rep(.5, 5), 1), 4)), 5, 5,
dimnames = list(NULL, mvars <- paste("X", 1:5, sep = '')))))
dat[1:20, 1:2] <- NA
mSat <- mxRun(mxModel(model = "Saturated Model",
type = "RAM", manifestVars = mvars,
mxData(dat, "raw"), mxPath(mvars, arrows = 2), mxPath("one", mvars),
mxPath(combn(mvars, 2)[1, ], combn(mvars, 2)[2, ])))
mFactor <- mxRun(mxModel(model = "One Factor Model",
type = "RAM", manifestVars = mvars, latentVars = "g",
mxData(dat, "raw"), mxPath("g", mvars), mxPath("one", mvars),
mxPath(c("g", mvars), arrows = 2, free = c(FALSE, rep(TRUE, 5)), values = 1)))
## both of these functions access necessary components
## if something like a NullLikelihood were added for comparative fit indices
## it seems like they could readily be adapted
summary(mFactor, SaturatedLikelihood = mSat)
mxCompare(mSat, mFactor)
I am also a little curious as to why the p differs between summary and mxCompare? Is summary not using deltaDF? findMethods("summary", classes = "MxModel") led me to try SaturatedDoF, but this did
not seem like a fruitful approach.
Apologies if I should have started a new thread---it seemed to fit in well with this thread so I left it here so people searching for information would find it all in one place. Thoughts and comments
welcomed, and thanks to the developers for a wonderful, flexible matrix optimizer!
Mon, 10/17/2011 - 10:19
Thanks for the interest, and
Thanks for the interest, and a huge thank you for "push[ing] for change" in the program! Apologizes in advance for the wall o' text that's coming. I'd first like to point people towards two relevant
recent threads. The first thread discusses a variety of fit functions that could in theory be implemented, a decent proportion of which are actually in the OpenMx version of summary: http://
openmx.psyc.virginia.edu/thread/765 . Second, I recently fixed a bug in saturated and independence model degrees of freedom that can affect p-values. This is currently available in our source builds,
and will be released as a bux fix release soon: http://openmx.psyc.virginia.edu/thread/1104 .
Yes, there is interest in having helper functions to assist with fit index calculation. I know that we've discussed in the past (we=developer's team) having functions to create saturated and
independence models to be fit, which would allow users to supply these models as arguments to summary() and have the current slate of fit indices calculated (CFI, TLI, RMSEA, in addition to AIC & BIC
variants that don't depend on these models). Descriptions of this functionality should be in the R help file for summary(). This also allows users to estimate the saturated model once, then use it in
a whole host of model comparisons rather than have it be re-estimated every time a model is run. As we have users with very large models (taking hours or days to terminate), we do our best not to add
extra things to the initial model estimation.
We also (ok, I) think that the fact that the common fit indices aren't appropriate in all circumstances is a great reason not to calculate them. The last thing we (I) want is for users to avail
themselves of inappropriate fit indices and make bad modeling decisions because of a convenience feature. When people use definition variables or have clustered/twin data, there can be many
definitions of saturated models, and OpenMx tries to make as few assumptions about your desired model as possible. There are certainly a number of fit indices (things like SMRS) that depend on
residual correlation matrices that cannot be calculated using FIML (each matrix cell would have a different n), and RMSEA has shown a few problems that are documented in the literature. If users
think they're important, they have the tools to add saturated and independence models to OpenMx and get the appropriate indices.
I recognize that this is a different approach than other programs, and that users want things to be easy to get ready for publication. I view it as an extension of the OpenMx philosophy (which every
developer has a slightly different version of), that I state as "we only do what you tell us to." OpenMx doesn't do any more than it is instructed to, has as few defaults as possible, and doesn't do
anything automatically that could potentially be wrong. This philosophy will undoubtedly not be embraced by all, and some people will still want to use other programs because they can just click and
get an answer out, and that's fine. We can and should be better about telling OpenMx to compute fit indices, but that will likely take the form of functions to estimate the relevant models rather
than an option like "computeRMSEA=TRUE".
I also encourage you, in your consulting and other publication dealings, to not simply calculate whatever inappropriate statistic people want and encourage them to use whatever method is correct for
their particular model. People don't always like being told that they're wrong, or that their software package shouldn't give them a particular stat, or that they need to add a paragraph to their
review letter about how RMSEA is completely useless in a particular circumstance. If people are coming to your for your expertise, then you should share it with them.
We should definitely keep talking about this. What other functions are people looking for that we aren't or are undersupporting?
Wed, 12/07/2011 - 07:08
Hi, I have estimated some
I have estimated some models using FIML and want to use the results for a paper. But reading back all threads on fit indices and FIML and particularly this reply, I'm left wondering if it makes sense
to include any fit indice at all using FIML? If somebody has good references in the literature, I would be very thankful as well.
Regards Rob
Thu, 12/08/2011 - 19:06
Fit indices are relative
For the most part, we need to compare the fit of two or models of interest. Some of the parsimony-based indices, such as AIC and BIC should do this as well for FIML as for, e.g., covariance matrix
analysis, as of course does the likelihood-ratio test. I personally find these indices most useful. Other indices are often based around the idea of absolute fit, by comparing the fit of your model
against a saturated model. It is possible to use these indices with FIML also, but it is necessary to fit the saturated model, and this is computationally effortful so one doesn't want to do it every
time one fits a hypothesized model. Yet others are based around comparison with a very simple model - that there are variances and means, but no covariances. Such models are quite easy to fit with
FIML; indeed they don't require estimation, so they are going to appear in later versions of OpenMx.
Finally, note that for some models - particularly mixture distributions, e.g., latent class models - some alternative model fit comparisons must be used. The computationally intensive bootstrap
likelihood ratio test (where one simulates data based on the simpler model parameter estimates, then fits both the simple and the more complex model, and obtains an empirical distribution of
difference in fit for such models) can be very useful in such situations. It is also the sort of thing that would be straightforward to implement as an R function, although some thought would be
needed about the simulation step. Generic LCA models would be easy enough, but the class of models that OpenMx can specify goes well beyond the usual clearly defined sets (SEM, LCA etc), and
simulating data for some class members would need to be hand-tailored.
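For what it's worth, a bare-bones sketch of that bootstrap in R, with the simulation step left as a user-supplied function precisely because it has to be tailored to the model class; I am also assuming the -2 log likelihood can be pulled out of summary() as Minus2LogLikelihood, so check that name on your installation:
bootLRT <- function(simpleModel, complexModel, simulateData, reps = 200) {
  m2ll <- function(fit) summary(fit)$Minus2LogLikelihood
  obs  <- m2ll(simpleModel) - m2ll(complexModel)    # observed LR statistic
  null <- replicate(reps, {
    d    <- simulateData(simpleModel)               # simulate from the simpler model's estimates
    fitS <- mxRun(mxModel(simpleModel,  mxData(d, type = "raw")), silent = TRUE)
    fitC <- mxRun(mxModel(complexModel, mxData(d, type = "raw")), silent = TRUE)
    m2ll(fitS) - m2ll(fitC)
  })
  mean(null >= obs)                                 # empirical p-value for the observed difference
}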
Mon, 10/17/2011 - 11:15
Oh wow, I meant to say, "I am
Oh wow, I meant to say, "I am ****NOT**** trying to push for change", but I would be interested in some discussion." Talk about a bad typo (surprised I didn't get my head bit off for that one). I
enjoyed your wall of text---thanks for taking the time to write it!
I really like the OpenMx philosophy of not computing saturated and independence models, both in terms of "I have to know what they are" and in terms of not having the computer do extra work I did not
ask for. I think you are right that it is best to educate people and help them craft a paragraph why particular indices are not appropriate in their situation, but I get lazy sometimes.
I like the idea that *if* I have fit and stored a saturated and independent model, I could pass those objects to summary along with the model I am testing and then get more output by default (or with
a simple argument). One of the threads you referenced has a function that looks like it does this already.
In terms of other functions, what about something to estimate indirect effects between two variables (with standard errors or confidence intervals)?
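At the moment I cobble that together by hand, roughly as in the sketch below; the A-matrix indices are placeholders for wherever the two paths of interest sit in a RAM-type model, so they are assumptions rather than anything general:
model <- mxModel(model,
    mxAlgebra(A[3, 2] * A[2, 1], name = "indirect"),  # product of the two path coefficients
    mxCI("indirect"))
model <- mxRun(model, intervals = TRUE)  # profile-likelihood interval for the product
summary(model)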
Mon, 10/17/2011 - 08:13
Calculate fit indices automatically
I completely agree. As a user, I think there should be an option to calculate fit various fit indices that reviewers expect. If I recall, the reason that the developers didn't want to calculate some
fit statistics automatically was that it had to be compared to a saturated model, which can take a long time to fit. My suggestion would be to give the user the option to calculate the saturated
model along with the fitted model and to calculate fit indices automatically in those instances. It could be a statement added to mxModel, where saturated = TRUE (set to FALSE by default). That way,
the user isn't fitting the saturated model by default, but is fitting it in cases where he or she needs the fit indices.
I think this is an important function for any SEM program. I love OpenMx, but I think that it needs a way to calculate fit indices automatically in order to compete with the other alternatives.
Wed, 10/06/2010 - 16:52
There are, however, some
There are, however, some pretty heavy caveats with the use of such indices with missing data structures. One is that RMSEA would not notice differences in effective sample size associated with
different statistics. Thus a degenerate dataset such as:
X Y
1.2 NA
3.4 NA
NA 5.6
NA 7.8
NA 9.0
would apparently contain bivariate data, but in fact provide no information whatsoever about the covariance between X and Y. In other less extreme cases, it may provide some but less than for other
statistics. Taking RMSEA=sqrt[(Chisq-df)/(df(N-1))] then the problem becomes obvious because N is no longer consistent across variables. Similar issues exist with all indices that use N, which is one
reason why I tend to prefer the likelihood ratio test and AIC for model comparisons. Not that these are flawless, mind!
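For reference, the absolute-fit formula quoted above is a one-liner in R; the caveat about which N to use when missingness differs across variables still applies.

# RMSEA from a chi-square statistic, its degrees of freedom and sample size,
# following the formula quoted above; max() guards against a negative value
# when chi-square is smaller than df.
rmsea <- function(chisq, df, N) {
  sqrt(max((chisq - df) / (df * (N - 1)), 0))
}

rmsea(chisq = 35, df = 20, N = 200)  # example values, roughly 0.061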
Thu, 10/07/2010 - 09:33
That's very helpful. I'm
That's very helpful. I'm mostly looking to see how well my original model fits the data (not to do nested model comparisons to improve fit), so that's why I was looking to calculate RMSEA & CFI, but
I see the problem with calculating RMSEA when using datasets with different amounts of missingness across variables. Given my aim, would you suggest doing a likelihood ratio test against a saturated
model or should I stick with RMSEA?
Also.. In my searches, I've noticed conflicting equations for the calculation of RMSEA:
sqrt[(Chisq-df)/(df(N-1))] -- as you describe, where I assume Chisq represents the chi-square value of the proposed model?
sqrt[((chi/df)-1)/N], where chi is the difference between that model's -2LL and a fully saturated model's -2LL, and df is degrees of freedom (http://openmx.psyc.virginia.edu/thread/
sqrt[((Chisq/df)-1)/(N-1)] (http://davidakenny.net/cm/fit.htm)
These appear to be non-equivalent formulas, so I'm not sure which one to use, or if there is even a generally accepted one.
If I need to estimate the saturated model to calculate RMSEA or the likelihood ratio test, it doesn't appear that I am correctly specifying the saturated model. If my understanding is correct, a
saturated model is one where all variances, covariances, and means are allowed to be freely estimated so that there are as many parameter estimates as there are degrees of freedom. I don't think my
model specification is correct, however, as this is not what I get (I obtain fewer estimated parameters than df). Given my code below, can you see what else I would have to specify to obtain a
saturated model?
### Saturated Model (allowing all variances, covariances, and means to be freely estimated)
# where 'modeldata' includes all variables in the model
# where 'manifests' includes the names of all variables in the model (in 'modeldata') -- it is a manifest-only model (no latent variables)
saturated_model <- mxModel(paste("Saturated Model"),
mxData(observed=modeldata, type="raw")
observed statistics: 276
estimated parameters: 12
degrees of freedom: 264
-2 log likelihood: 777.1855
saturated -2 log likelihood: NA
number of observations: 67
chi-square: NA
p: NA
AIC (Mx): 249.1855
BIC (Mx): -166.4267
adjusted BIC:
RMSEA: NA
Thanks so much for your help guys!
Fri, 10/08/2010 - 09:00
Hi dadivr From the ?mxPath
Hi dadivr
From the ?mxPath documentation:
mxPath(from, to = NA, all = FALSE, arrows = 1,
free = TRUE, values = NA, labels = NA,
lbound = NA, ubound = NA)
from character vector. these are the sources of the new paths.
to character vector. these are the sinks of the new paths.
all boolean value. If TRUE, then connect all sources to all sinks.
So you need to put all=TRUE as an argument to your first mxPath() call. The second one is then redundant. However, I would not use the same value as a starting value throughout the variances and
covariances. You might try building a matrix of starting values with something like:
mxPath(from=manifests, to=manifests, arrows=2,values=as.vector(diag(length(manifests))),free=TRUE,all=TRUE)
or use the sd() function (and square it) to get the variances about right (as opposed to using an identity matrix returned by the hopelessly overloaded diag() function).
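Putting those pieces together, a saturated model for raw data might be specified roughly as follows. This is a sketch in the path-style syntax of that era, using all=TRUE as suggested above; the starting values and the means path are illustrative choices rather than tested code.

# Saturated model: free variances and covariances for every pair of manifest
# variables, plus free means, estimated from raw data with FIML.
saturated_model <- mxModel("Saturated Model",
    type = "RAM",
    manifestVars = manifests,
    # all variances and covariances, free, identity matrix as starting values
    mxPath(from = manifests, to = manifests, arrows = 2, all = TRUE,
           free = TRUE,
           values = as.vector(diag(length(manifests)))),
    # free means
    mxPath(from = "one", to = manifests, arrows = 1, free = TRUE, values = 0),
    mxData(observed = modeldata, type = "raw")
)
saturated_fit <- mxRun(saturated_model)
summary(saturated_fit)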
Head & Tail problem
September 9th 2005, 12:37 PM #1
Sep 2005
Head & Tail problem
Say I have an unfair coin. When I toss it, the probability of a head is 0.1 and the probability of a tail is 0.9.
I toss this coin 10 times. What is the probability I get 7 heads and 3 tails?
I tried to draw a tree, but after 10 levels I would have 1024 branches. My calculator has Pr and Cr functions, but I don't even know whether it's a Pr or a Cr problem.
Sorry, I'm not very good at math.
You can solve this very easily if you know what the Bernoulli law is.
You have two opposite results:
Let's call the probability to get head: p = 0.1
Let's call the probability to get tail: q = 0.9
Since 1-0.9=0.1 the results are opposite. (First condition to apply Bernoulli Law)
The number of times we toss the coin is called: n = 10
Each toss is independent of the previous one. (Second condition to apply Bernoulli Law)
Finally we call the number of times we get head: x
In that case:
$P(x=7) = C^{7}_{10} \cdot 0.1^7 \cdot 0.9^3 = 8.748 \cdot 10^{-6}$
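The same value can be checked in one line with any statistics package, for example in R:

# P(exactly 7 heads in 10 tosses) for a coin with P(head) = 0.1
choose(10, 7) * 0.1^7 * 0.9^3     # 8.748e-06
dbinom(7, size = 10, prob = 0.1)  # same value via the binomial pmf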
Awareness Applet
Note: The applet requires sun-java6-jre and sun-java6-plugin, or openjdk-6-jre and icedtea6-plugin, under Linux, or the Java JRE and Java browser plug-in under Windows.
Awareness applet provides a convenient way to evaluate network conditions and resulting awareness ranges in VANET environment where communication is based on the cooperative awareness messages. With
the help of this applet one can also choose optimum network configurations (minimizing the channel load) to achieve desired awareness range.
Awareness is defined as a probability of receiving at least n packets in a time window T. The awareness range is then a maximum range at which awareness probability is greater than or equal to a
desired awareness probability P[A]. Probability of reception of each individual packet (PPR) is calculated based on the empirical model, packets are sent with the same frequency (same transmission
rate) and have the same reception probability during the time window T.
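If one additionally assumes that packet receptions within the window are independent, the awareness probability is simply a binomial tail probability; a minimal sketch in R (the parameter names are illustrative):

# Awareness probability: chance of receiving at least n of the m packets sent
# in a window of T_win seconds at rate f (m = f * T_win), when each packet is
# received independently with probability ppr. Independence of receptions is
# an assumption made here for the sketch.
awareness_prob <- function(n, f, T_win, ppr) {
  m <- f * T_win
  pbinom(n - 1, size = m, prob = ppr, lower.tail = FALSE)  # P(X >= n)
}

awareness_prob(n = 1, f = 10, T_win = 1, ppr = 0.3)  # example: about 0.972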
On the left-hand side one can specify the required network and awareness parameters; the results are plotted and summarized in the table on the right-hand side.
Three options are provided:
• "calculate maximum Awareness Range" - calculates maximum awareness range that corresponds to the specified network and awareness parameters. Awareness probability P[A] is plotted together with
corresponding packet reception probability PPR. The table summarizes transmission rate, achieved awareness range and corresponding load on the channel.
• "calculate minimum Tx Range" - calculates minimum transmission range that is needed to achieve indicated awareness range. Transmission range values are provided for a specified awareness
parameters, given vehicular density and all possible transmission rates (normally 1-10Hz). Awareness probability P[A] is plotted together with corresponding packet reception probability PPR. The
table summarizes the combinations of transmission rate and transmission range as well as corresponding load on the channel to achieve desired awareness range.
• "calculate minimum Tx Rate" - calculates minimum transmission rate for specified awareness parameters, specified vehicular density and transmission range. Resulting awareness ranges are plotted
additionally for other possible transmission rates and summarized in the table along with corresponding channel load.
The concept of this awareness applet can be used to evaluate vehicle-2-X applications as described in the publication | {"url":"http://dsn.tm.kit.edu/misc_awareness-applet.php","timestamp":"2014-04-20T21:25:22Z","content_type":null,"content_length":"13298","record_id":"<urn:uuid:3ea97324-8ac4-40e2-91fe-9bc177aff449>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cryptology ePrint Archive: Report 2002/179
Parallel Algorithm for Multiplication on Elliptic Curves
Juan Manuel Garcia Garcia and Rolando Menchaca Garcia

Abstract: Given a positive integer $n$ and a point $P$ on an elliptic curve $E$, the computation of $nP$, that is, the result of adding $n$ times the point $P$ to itself, called the \emph{scalar multiplication}, is the central operation of elliptic curve cryptosystems. We present an algorithm that, using $p$ processors, can compute $nP$ in time $O(\log n+H(n)/p+\log p)$, where $H(n)$ is the Hamming weight of $n$. Furthermore, if this algorithm is applied to Koblitz curves, the running time can be reduced to $O(H(n)/p+\log p)$.

Category / Keywords: public-key cryptography / elliptic curve cryptosystem
Publication Info: Published in Proceedings of the ENC'01
Date: received 18 Nov 2002, last revised 18 Nov 2002
Contact author: jmgarcia at sekureit com
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20021121:235941
Please explain this word problem.
October 22nd 2012, 05:19 AM
Please explain this word problem.
A notebook is made up of sheets folded in the middle and stapled; each sheet forms 2 leaves (i.e., 4 pages). On removing some papers from the first half and from the second half of the book, Joe found the number of leaves to be odd in the first case and even in the second case. If the sum of the numbers on the pages of the last leaf of the book is 63, then what could be the maximum possible sum of the numbers on the pages of the leaves that were left in the book?
(1) 435 (2) 420 (3) 451 (4) 471..
Please explain clearly..(Crying) | {"url":"http://mathhelpforum.com/math-topics/205874-please-explain-word-problem-print.html","timestamp":"2014-04-20T04:44:22Z","content_type":null,"content_length":"3572","record_id":"<urn:uuid:59a715e9-1af3-4a15-9a51-59874a17f172>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Large Hadron Collider, receiving the infrastructural improvements that have kept it offline since February 2013 (CERN)
This year's Nobel Prize in Physics has been awarded to François Englert and Peter W. Higgs for the prediction of the Higgs boson, which was experimentally confirmed 50 years later with the help of
the Large Hadron Collider (LHC). But how did Englert and Higgs theorize their particle, so long before the evidence was in hand? With math.
Even among scientists, it is assumed that mathematics plays a secondary role: It is thought of as a toolkit, not the research itself. A biologist, say, would collect data and then try to build a
mathematical model fitting these data, perhaps with some help from a mathematician. While this is an important mode of operation, mathematics actually plays a much bigger role in science: It enables
us to make groundbreaking leaps that we couldn't make otherwise.
For example, Albert Einstein wasn’t trying to fit any data into a mathematical model when he realized that gravity causes our space-time to curve. In fact, there was no such data. No one could even
imagine that our space is curved; everyone “knew” that our world was flat! (Note that I am not talking here about the Earth being curved which of course had been known for centuries; I am talking
about the four-dimensional space-time we inhabit being curved.) How did Einstein come up with this far-out idea? He tried to generalize his special relativity theory to allow acceleration, using his
insight that gravity and acceleration have the same effect. And Einstein followed in the footsteps of a mathematician, Bernhard Riemann, who laid the foundations of the theory of curved spaces 50
years earlier. It was math that gave the answer.
The human brain is wired in such a way that we simply cannot imagine curved shapes of dimension greater than two. Like all of us, Einstein could not possibly visualize a curved universe. He could
describe it only using the language of mathematics. A subsequent experiment proved Einstein right. It turned out that our universe was indeed curved: A ray of light does not travel along a straight
line, but bends passing near a star, as if pulled by an invisible force—a startling revelation.
The prediction of the Higgs boson is another beautiful example of mathematics driving progress in natural science. In the 1960s, physicists struggled with the fact that an attractive mathematical
theory governing the behavior of elementary particles gave a nonsensical answer: It predicted massless particles that no one had seen. What we now know as the Higgs boson solved this problem.
Inserted into the equations in just the right way, it gives particles their masses. The rest is history.
The Higgs boson was the last missing piece of the Standard Model, and its experimental discovery was the end of an era. Most physicists agree that the Standard Model is not the ultimate theory of the
quantum world: For one thing, it gives us no clue about the mysterious dark matter that takes up over 80 percent of the total matter in the universe. We need new ideas to go beyond the Standard
Model, but it’s quite possible that no new particles or phenomena will be discovered at the LHC within its energy range. Should we build another accelerator to reach energies even higher? A bigger
accelerator would cost much more than the 10 billion dollars the LHC had cost and take a long time to build, with no guarantee that any new physics will be found from it. It may well be that the next
breakthrough in quantum physics will again come from mathematics, just as it did through the work of Englert, Higgs, and others that we celebrate this week.
Upon hearing that a telescope at the Mount Wilson Observatory was needed to study the cosmos, Albert Einstein's wife Elsa remarked: “Well, my husband does that on the back of his envelope.”
Experiment is the ultimate judge of a theory, and that’s why we do need expensive and sophisticated machines. But the amazing fact is that scientists like Einstein and Higgs have used the most
abstract mathematical knowledge to unlock the deepest secrets of the universe.
Charles Darwin wrote in his autobiography: “I have deeply regretted that I did not proceed far enough at least to understand something of the great leading principles of mathematics, for men thus
endowed seem to have an extra sense.” Mathematics is not about studying boring and useless equations: It is about accessing a new way of thinking and understanding reality at a deeper level. It
endows us with an extra sense and enables humanity to keep pushing the boundaries of the unknown.
05-30-2004, 01:42 PM
Refrigerant Question
If you have a fixed volume of refrigerant at a fixed temperature how do you find the percentage of liquid and the percentage of vapor? For example if I have a 1 ft³ of R-22 at 75°F (132.2psig),
what volume is liquid and what volume is vapor? I know it's kind of an off the wall question, but it's something that's been bothering me, and I can't figure out how to calculate it. Vapor and
liquid have different densities (right?) so that's my stumbling block. Thanks, Adam.
05-30-2004, 04:22 PM
Use an enthalpy chart for R-22.
05-30-2004, 04:36 PM
Assuming first that this is a stable, static condition at equilibrium, given the parameters you stated there is no way to determine the amount of vapor or liquid that is present.
Consider a container of known volume that's been sitting quietly in an air conditioned space at 75 Deg F for a long time, you know that it contains R22 and you measure the pressure as 132 psig.
Without also knowing the total weight, there is no way to calculate what you're looking for.
05-30-2004, 07:04 PM
'If you have a fixed volume of refrigerant at a fixed temperature how do you find the percentage of liquid and the percentage of vapor? For example if I have a 1 ft³ of R-22 at 75°F (132.2psig),
what volume is liquid and what volume is vapor?'
ME: Dang..i had this same dilemna
just last week. I was on my last job of the day, and i knew i was getting close to being out of R22 in my last 30 lb cylinder and was wondering how much was still in there in liquid form. So...as
i opened the side doors to my 1998 Chevy Cargo Van with the 305 cid motor in it....i fixed my eyes on the R22 cylinder as i contemplated if i would have enough to recharge this customers a/c
unit. So...with great apprehension and determination, i grasped the cylinder with my right hand on the top while i placed my left hand on the center of the cylinder. This allowed me a good grasp
of the cylinder as i was hovering over it. Then, my brain signaled my hands as i coordinated a steady yet swift action which resulted in me picking up that cylinder and shaking it simular to
shaking a blender of Margaritas. I did this for perhaps as long as 3 full seconds at which time i put the R22 back down and concluded that there was about 5 lbs of liquid monochlordifloromethane
laying on the bottom of the cylinder.
Hope that helps in some way.
05-30-2004, 07:44 PM
Some of you, not all but some, think way too much.
05-30-2004, 09:56 PM
Let me know when you find out.
05-31-2004, 11:03 PM
Ok, I was going through one of my textbooks (Modern Refrigeration and Air Conditioning, Althouse-Turnquist-Bracciano) and I found this chart on page 343. It lists Volume Vapor of R-22 at 75°F at
.37 ft³/lb and Density Liquid at 74.8 lb/ft³. So if I understand this correctly (and I don't think I do), if I had 1 lb of R-22 there would be .37 ft³ liquid and since there are 7.481 gal to 1
ft³ that means there are 2.77 gal of liquid?
06-05-2004, 04:57 PM
Tom R
I was hoping that someone who could explain this better would jump in here with a procedure you could use. Since that hasn't happened I will try to outline a procedure that may be used to
calculate liquid to vapor volume ratios.
From a practical standpoint, what you want to know may not be of much value to you as a Tech. However I don't presume to know what is important to you and what isn't. So I will give you a
procedure and you can spend the time to work through a few different scenarios, then you can makeup your own mind if it is something you need to remember or if it is a total waste of time.
If you work the examples and can follow through the process it may clarify the saturation behavior of refrigerants and show the relationships of Temperature/Pressure, Vapor and Liquid densities
and Volumes. So even though the actual ratio of liquid to vapor volumes may not be of significant interest to you in the future, just understanding the saturated state behavior of the refrigerant
will be a big benefit to you.
If you know the actual volume capacity of the system (or container) and the weight of refrigerant charge you can use a table of saturation properties for that refrigerant and calculate how much
of the refrigerant is vapor and how much is liquid (under static saturated conditions). There are several ways to do it (that I know of, there may be more?). The method I will type up is
laborious and takes a lot of steps but just uses simple math and takes a number of logical steps through the phase changing process.
A couple of things to keep in mind as you follow through these examples are---
1. When liquid and vapor refrigerant are in contact with one another (in a confined space) they will behave as predicted by the saturation properties.
2. Under static conditions a saturated refrigerant is controlled by the temperature of the container, so the temperature will be the controlling variable and the pressure will be the controlled variable.
The following explanation is compiled from some of the notes I have collected over the years: It is impossible to cover even just the basics of this subject without ending up with a very lengthy
discourse but in the interest of helping out a student I will do the best I can.
WARNING!! All information beyond this point is technogeek so if you are offended by technogeek do not proceed beyond this point. ;)
The following saturation properties have been taken from a list of R22 properties
For the conditions you listed Temp @75°F
R22 Saturation Properties
Liquid Den. 74.67002 #/FT^3 Most Charts will list liquid properties as density. The reciprocal of density will give the volume of the liquid. .
(Volume = 1/ density) = (1/74.67002)= 0.013392256 FT^3/#
From the saturation properties chart, Vapor Volume (Sat) = 0.37513 FT^3/#
You can convert the values to whatever units you are most comfortable working with and you can round off values to whatever precision level you feel is close enough for your purpose.
Below is the Liquid volume of R22 @ 75°F expressed in Cubic ft per Pound, Cubic inch per pound, and Cubic inch per ounce
Liquid Volume (sat). = 0.013392256 FT^3/#
Liquid Volume (sat) = 23.14181783 In^3/#
Liquid Volume (sat). = 1.446363614 In^3/Oz.
Below is the Vapor Volume of R22 @ 75°F expressed in Cubic ft per Pound, Cubic inch per pound, and Cubic inch per ounce
Vapor Volume (sat) = 0.37513 FT^3/#
Vapor Volume (sat) = 648.22464 In^3/#
Vapor Volume (sat) = 40.51404 In^3/Oz.
Comment: I have a boatload of reference material on the properties of R22 and I don't think that any two sources are in complete agreement on the values; but they are all reasonably close to one another.
In these examples I will try to use (sat) to denote when the value is to come from the saturated properties table and (act) when you should be using the actual quantities that are present in the
system. I addition I will use the OZ/In^3 units so that we don't end up with such small decimal values.
If you were to begin with an evacuated container or vessel that had 0.13368 FT^3 or 231 Cubic inches of displacement (1 gallon) and would carefully weigh in 16 Oz. (1 lb.) charge of R22 while the
container was exposed to 75° ambient conditions the liquid would take up only 23.142 cubic inches of that container (if you could somehow keep it from vaporizing). The remaining void of the
container would be (Vessel vol. - liquid volume (act). Or 231 - 23.142) = 207.858 Cubic Inches. In actual practice this void is filled with vapor as the liquid "boils off". Once sufficient
vaporizing of the liquid had occurred the vapor expanding into the void would raise the pressure to the Saturated Pressure Temperature value of 75° and 132.2 psig and the P/T would stabilize here
at the saturated properties equilibrium.
We use this Saturate Liquid Volume of the complete charge's total mass compared to Vessel's Volume (act) to find the mass that needs to vaporize to fill that void.
Void Volume = Vessel Vol. - (Liquid Volume (sat) * charge in oz); use the total charge weight here
(231-((1.446363614 * 16)) = 207.8581822 Cubic inch
When a given amount of liquid "boils" off, the vapor it creates will greatly expand (see the ratio of Vapor vol. compared with Liquid vol.)
However the liquid that boils off will decrease the amount of liquid in the system so the total effective volume "gain" Is found by subtracting the Liquid volume of a given weight from the Vapor
volume of the same given weight. Vaporizing 1 oz of R22 at 75°F saturated conditions would create 40.51404 IN^3 of vapor but at the same time decrease the Liquid volume by 1.4463 IN^3. Since the
first 1.4463 IN^3 of vapor is used to replace the void left behind when vaporizing the ounce of liquid only 39.0677 IN^3 of vapor is left to help fill up the containers initial void volume.
The table below will give the Volume Gain in Cu.ft/#, Cu.in/#, and Cu.in/Oz. For the 75°F saturated condition.
Vol. Gain =(Vapor Volume (sat) - Liquid Volume (sat)
Vol. Gain = 0.361737744 FT^3/# (0.37513 - 0.013392256)
Vol. Gain = 625.0828222 In^3/# (648.22464 - 23.14181783)
Vol. Gain = 39.06767639 In^3/Oz. (40.51404 - 1.446363614)
The next step would be to calculate how much of the original mass would need to be vaporized to fill the void.
Refrigerant mass in Vapor = Void Volume/Volume. Gain
( 207.8581822 / 39.06767639) = 5.32046442 Oz
Which of course would leave 10.67953558 OZ in Liquid mass
To find the area the actual vapor occupies = Vapor mass (act) * Vapor Volume (sat)
(5.32046442 * 40.51404) = 215.5535083 Cu in of the container is used for Vapor.
Vapor Volume (act) / total vessel volume = Vapor fill
215.5535083/ 231= .9331 or 93.31% of the containers area is filled with vapor
To find the area the actual liquid occupies = Liquid mass (act) * Liquid Volume (sat)
(10.67953558 * 1.446363614)= 15.44649168 Cu in of the container is used for Liquid.
15.44649168/231= .0669 or 6.69% of the container is Liquid filled.
All of the above examples assumed working under saturated static conditions where the saturated refrigerant is controlled by the temperature of the container so the temperature will be the
controlling variable and the pressure will be the controlled variable.
Try changing some of the values like the amount of # of charge and see what effect it has, then change the volume of the container and watch the effect there. If you don't have enough container
volume the charge will be all liquid with no vapor, or not in the saturated range, with the math the void volume will show up as a negative value (in the real world something would blow up when
the temperature increased), if the charge mass doesn't have enough volume to fill the cylinder it becomes all vapor and once again not in the saturated range so you would end up with a Liquid
Volume (act) that was negative (in the real world the charge would be all vapor so the pressure would be under the value listed for the given ambient temperature on the Chart). We have to have
both liquid and vapor in the cylinder to have saturated conditions. . If you do spreadsheets set it up in a spreadsheet and by just changing the necessary data you can instantly see what effect
your changes have made.
If your school still has some of the old calibrated charging cylinders like the "Dial-A-Charge" units you can actually perform the experiment from this example by plugging in the actual volume of
your cylinder into these equations. The neat thing about using the charging cylinder is that you will be able to watch the liquid level change with different charge amounts and different
Notice that under static conditions the pressure verses temperature readings only tell you that you are at a saturated condition (both liquid and vapor in the vessel) they do not tell you how
much refrigerant is there or anything about the vapor to liquid ratio. A 30# cylinder of R22 with a few pounds of refrigerant left in it will give a pressure reading that corresponds with the
saturated temperature from the chart just as it did when the cylinder was completely full. Even though we know that the nearly empty cylinder will be nearly full of vapor and contain very little
Another point to keep in mind: Under dynamic conditions (refrigerant in an operating vapor compression system) the pressures will be the controlling variable and the temperature will be the
controlled variable in areas operating in the saturated region. Also under dynamic operating conditions some areas of the system may be operating with saturated refrigerant (both vapor and liquid
in that area) while other areas containing liquid only may be sub-cooled (temp=less than Saturated value). At the same time there may be other areas where the refrigerant is 100% vapor so
superheating(temp. = greater than Saturated value) may be present in that area.
Another thing to be aware of is that contamination such as non-condensables, a blend that has fractionated, or mixed refrigerants can also cause readings that are not consistent with the saturation chart.
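For anyone who would rather not redo the arithmetic by hand, the whole procedure above condenses to a few lines. Here is a sketch in R using the 75°F R-22 saturation values quoted earlier, assuming static saturated conditions exactly as in the worked example.

# Liquid/vapor split of a saturated refrigerant charge in a closed vessel.
# Inputs: vessel volume (in^3), total charge (oz), and the saturation
# properties at the vessel temperature: liquid density (lb/ft^3) and
# saturated vapor volume (ft^3/lb). Defaults are the 75 F R-22 figures above.
saturated_split <- function(vessel_in3, charge_oz,
                            liq_density_lb_ft3 = 74.67002,
                            vap_volume_ft3_lb  = 0.37513) {
  in3_per_ft3 <- 1728
  liq_in3_oz <- in3_per_ft3 / liq_density_lb_ft3 / 16  # liquid volume, in^3/oz
  vap_in3_oz <- vap_volume_ft3_lb * in3_per_ft3 / 16   # vapor volume, in^3/oz

  void_in3  <- vessel_in3 - charge_oz * liq_in3_oz     # space left if all liquid
  gain_in3  <- vap_in3_oz - liq_in3_oz                 # net volume gained per oz vaporized
  vapor_oz  <- void_in3 / gain_in3                     # mass that must vaporize
  liquid_oz <- charge_oz - vapor_oz

  c(vapor_oz    = vapor_oz,
    liquid_oz   = liquid_oz,
    vapor_frac  = vapor_oz * vap_in3_oz / vessel_in3,
    liquid_frac = liquid_oz * liq_in3_oz / vessel_in3)
}

# One-gallon vessel (231 in^3) holding a 16 oz charge, as in the example:
round(saturated_split(vessel_in3 = 231, charge_oz = 16), 3)
# vapor_oz ~ 5.320, liquid_oz ~ 10.680, vapor_frac ~ 0.933, liquid_frac ~ 0.067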
06-05-2004, 05:27 PM
Open the can for a bit, soon you will see the frost line on it, that's how much liquid ya got left:)
06-05-2004, 11:03 PM
What is the purpose of someone trying to mathematically or otherwise , figure out how much liquid vs. vapor is in a refrigerant drum ??? It reminds me of when i went thru trade school..the
instructor had us calculate using a very long formula how many pounds per hour a compressor would pump under certain conditions --- totally useless in real life of an HVAC Tech. Same for 80% of
what was learned (and now forgotton)-- from High School.
Your brain can only remember so much , so, why not fill it with useful info that can be useful to you making a living or....info that can be used regularly or...at least once in a while ?! When
youre on a job and your freon tank runs out when your charging a unit... you can figure that its empty . Then, you go get another one from your truck. Make sense ???! Didnt mean to get snotty
with you....just trying to be practical .
06-06-2004, 12:12 AM
I don't see the point of knowing this. Unless you are looking for an exercise, of your mathmatics skills.
I have a container of r-22 that weighs ten pounds. The can weighs six pounds. How much refrigerant do I have.
06-06-2004, 01:25 AM
Originally posted by Diceman
Open the can for a bit, soon you will see the frost line on it, that's how much liquid ya got left:)
This is the easiest method.:p
06-06-2004, 01:27 AM
Originally posted by frozensolid
I don't see the point of knowing this. Unless you are looking for an exercise, of your mathmatics skills.
I have a container of r-22 that weighs ten pounds. The can weighs six pounds. How much refrigerant do I have.
Like TomR stated, it's really none of our concern whether or not it's important to him.
We needed a question like this. It had been waaaay too long.
BTW, you don't have enough.
AAMOF, you don't have any. The container is holding four pounds. | {"url":"http://hvac-talk.com/vbb/printthread.php?t=53147&pp=13&page=1","timestamp":"2014-04-16T11:57:23Z","content_type":null,"content_length":"26654","record_id":"<urn:uuid:51489ea7-44b2-418a-8439-a690b3f22e67>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ring theory

Ring theory is the study of algebraic structures in which addition and multiplication are defined and have similar properties to those familiar from the integers. Ring theory studies the structure of rings, their representations, or, in different language, modules, special classes of rings (group rings, division rings, universal enveloping algebras), as well as an array of properties that proved to be of interest both within the theory itself and for its applications, such as homological properties and polynomial identities.
Commutative rings are much better understood than noncommutative ones. Due to its intimate connections with algebraic geometry and algebraic number theory, which provide many natural examples of
commutative rings, their theory, which is considered to be part of commutative algebra and field theory rather than of general ring theory, is quite different in flavour from the theory of their
noncommutative counterparts. A fairly recent trend, started in the 1980s with the development of noncommutative geometry and with the discovery of quantum groups, attempts to turn the situation
around and build the theory of certain classes of noncommutative rings in a geometric fashion as if they were rings of functions on (non-existent) 'noncommutative spaces'.
Please refer to the glossary of ring theory for the definitions of terms used throughout ring theory.
The study of rings originated from the theory of polynomial rings and the theory of algebraic integers. Furthermore, the appearance of hypercomplex numbers in the mid-nineteenth century undercut the pre-eminence of fields in mathematical analysis.
Richard Dedekind introduced the concept of a ring.
The term ring (Zahlring) was coined by David Hilbert in the article Die Theorie der algebraischen Zahlkörper, Jahresbericht der Deutschen Mathematiker Vereinigung, Vol. 4, 1897.
The first axiomatic definition of a ring was given by Adolf Fraenkel in an essay in Journal für die reine und angewandte Mathematik (A. L. Crelle), vol. 145, 1914.
In 1921, Emmy Noether gave the first axiomatic foundation of the theory of commutative rings in her monumental paper Ideal Theory in Rings.
Elementary introduction
Formally, a ring is an Abelian group (R, +), together with a second binary operation * such that for all a, b and c in R,
$a * \left(b*c\right) = \left(a*b\right) * c$
$a * \left(b+c\right) = \left(a*b\right) + \left(a*c\right)$
$\left(a+b\right) * c = \left(a*c\right) + \left(b*c\right)$
also, if there exists a multiplicative identity in the ring, that is, an element e such that for all a in R,
$a*e = e*a = a$
then it is said to be a ring with unity. The number 1 is a common example of a unity.
It is simple to show that any ring in which e = 0 must have just one element; any such ring is called a zero ring.
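One way to see this: distributivity forces $a*0 = 0$ for every $a$, so a unity equal to 0 collapses the ring:

$a*0 = a*(0+0) = a*0 + a*0 \implies a*0 = 0, \qquad \text{hence if } e = 0: \quad a = a*e = a*0 = 0 \text{ for all } a \in R.$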
Rings that sit inside other rings are called subrings. Maps between rings which respect the ring operations are called ring homomorphisms. Rings, together with ring homomorphisms, form a category
(the category of rings). Closely related is the notion of ideals, certain subsets of rings which arise as kernels of homomorphisms and can serve to define factor rings. Basic facts about ideals,
homomorphisms and factor rings are recorded in the isomorphism theorems and in the Chinese remainder theorem.
A ring is called commutative if its multiplication is commutative. Commutative rings resemble familiar number systems, and various definitions for commutative rings are designed to recover properties
known from the integers. Commutative rings are also important in algebraic geometry. In commutative ring theory, numbers are often replaced by ideals, and the definition of prime ideal tries to
capture the essence of prime numbers. Integral domains, non-trivial commutative rings where no two non-zero elements multiply to give zero, generalize another property of the integers and serve as
the proper realm to study divisibility. Principal ideal domains are integral domains in which every ideal can be generated by a single element, another property shared by the integers. Euclidean
domains are integral domains in which the Euclidean algorithm can be carried out. Important examples of commutative rings can be constructed as rings of polynomials and their factor rings. Summary:
Euclidean domain => principal ideal domain => unique factorization domain => integral domain => Commutative ring.
Non-commutative rings resemble rings of matrices in many respects. Following the model of algebraic geometry, attempts have been made recently at defining non-commutative geometry based on
non-commutative rings. Non-commutative rings and associative algebras (rings that are also vector spaces) are often studied via their categories of modules. A module over a ring is an Abelian group
that the ring acts on as a ring of endomorphisms, very much akin to the way fields (integral domains in which every non-zero element is invertible) act on vector spaces. Examples of non-commutative
rings are given by rings of square matrices or more generally by rings of endomorphisms of Abelian groups or modules, and by monoid rings.
Some useful theorems
Any ring can be seen as a preadditive category with a single object. It is therefore natural to consider arbitrary preadditive categories to be generalizations of rings. And indeed, many definitions and theorems originally given for rings can be translated to this more general context. Additive functors between preadditive categories generalize the concept of ring homomorphism, and ideals in additive categories can be defined as sets of morphisms closed under addition and under composition with arbitrary morphisms.
Greatest Common Factors - Problem 7
Factoring is the process of turning a sum or difference into a product, or multiplication problem. When the terms that you're trying to add or subtract have a common factor, we can "undistribute" it from both terms. Here we look at examples where that greatest common factor is a binomial, meaning a parenthetical expression with two terms being added or subtracted inside.
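A made-up example of the pattern described (not necessarily the one worked in the video): $3x(x-5) + 7(x-5) = (x-5)(3x+7)$, where the binomial $(x-5)$ is the greatest common factor being pulled out of both terms.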
Transcript Coming Soon!
binomial GCF | {"url":"https://www.brightstorm.com/math/algebra/factoring-2/greatest-common-factors-problem-7/","timestamp":"2014-04-17T09:34:32Z","content_type":null,"content_length":"60519","record_id":"<urn:uuid:968b6573-1f85-4386-becd-7629bc53ae94>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
e^(2x) + 2e^(x) = 8 and solve for X
I put this into the Ti89 calculator and used the solve feature under f2 and I got
x = ln(2)
I then went to wolfram alpha and I got the following:
when you click on show steps you should get one of the answers as x = ln(2)
However, I tried to use the logarithmic rules to solve this. I know if I have exponential I should be able to take the natural log of it to get rid of it so I did the following:
e^(2x) + 2e^(x) = 8 so I first take the natural log of both sides:
ln(e^2x) + ln(2e^(x) = ln(8) this equals:
3x + ln(2) = ln(8) so I then substract ln(2) from both sides
3x = ln(8) - ln(2) now when we have substraction like that we can rewrite the natural log using log rules
3x = ln(8/2) which equals:
3x = ln(4) now the ln(4) also is same as ln(2)^2 which is same as 2ln(2) so we can rewrite and solve for x:
x = (2/3)(ln(2))
so I have a 2/3 out in front of mine. What rule did I break why am I getting a different answer than the calculator and mathmatica. I know the logs rules are as follows:
ln(e^x) = x {natural log and exponential cancel out}
log (uw) = logu + logw
log(u/w) = logu - logw
log(u^c) = c* logu
I don't see where I am breaking or missing a rule, yet I have a different answer. Can someone give me a detailed, step-by-step reason why I am getting a different answer? This started out because I was trying to help another student on a forum, and when I checked my answer against the calculator and Mathematica I got something different. It's driving me crazy that it did not work out neatly!
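The step that breaks down is the very first one: the natural log cannot be taken term by term, since in general ln(a + b) is not ln(a) + ln(b); the left side has to be handled as a whole. The standard route is the substitution u = e^x, which turns the equation into the quadratic u^2 + 2u - 8 = 0, giving u = 2 (u = -4 is rejected because e^x > 0) and hence x = ln(2). A quick numerical check in R:

# Substitution u = exp(x): e^(2x) + 2 e^x = 8 becomes u^2 + 2u - 8 = 0,
# i.e. (u + 4)(u - 2) = 0, so u = 2 and x = log(2).
f <- function(x) exp(2 * x) + 2 * exp(x) - 8

uniroot(f, c(0, 2))$root   # numerical root, ~0.6931
log(2)                     # 0.6931..., the exact solution
f(log(2))                  # essentially 0
f((2/3) * log(2))          # not 0, so x = (2/3)ln(2) does not satisfy the equation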
Functions for Manipulating these Objects
Consider a typical ``stack'' of control frames.
Suppose the model required that we express the idea of ``the most recent frame whose return program counter points into MAIN.''
The natural expression of this notion involves
function application -- ``fetch the return-pc of this frame''
case analysis -- ``if the pc is MAIN, then ...''
iteration or recursion -- ``pop this frame off and repeat.''
The designers of ACL2 have taken the position that a programming language is the natural language in which to define such notions, provided the language has a mathematical foundation so that models
can be analyzed and properties derived logically.
Common Lisp is the language supported by ACL2. To be precise, a small applicative subset of Common Lisp is the language supported by ACL2. | {"url":"http://www.cs.utexas.edu/users/moore/acl2/v6-0/Functions_for_Manipulating_these_Objects.html","timestamp":"2014-04-20T06:56:23Z","content_type":null,"content_length":"1670","record_id":"<urn:uuid:ba11c143-03ee-4661-a95f-4be7dfc59d13>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
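In ACL2 itself that notion would be written as a small recursive Common Lisp definition. Purely to show the shape of the recursion (apply a function to the top frame, do the case split, pop and repeat), here is the same idea sketched in R, with the frame representation invented for the example:

# Hypothetical frame representation: each frame is a list carrying a return
# program counter. return_pc() is the "function application" step, the if()
# is the case analysis, and Recall() pops a frame and repeats.
return_pc <- function(frame) frame$return_pc

most_recent_frame_into <- function(stack, routine) {
  if (length(stack) == 0) return(NULL)      # ran out of frames
  top <- stack[[1]]
  if (return_pc(top) == routine) top        # found it
  else Recall(stack[-1], routine)           # pop this frame off and repeat
}

stack <- list(list(name = "helper", return_pc = "SORT"),
              list(name = "caller", return_pc = "MAIN"),
              list(name = "outer",  return_pc = "MAIN"))
most_recent_frame_into(stack, "MAIN")$name  # "caller"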
Calculus! #28: Early Transcendentals 2.8A
Yeehaw! We’re one section away from getting into the nitty gritty. But, this last section of chapter 2 is very important. This is where you stop thinking of the derivative as a trick and start
thinking of it as a function!
To reiterate once more: A function is how you map from one set to another. You have a set called x. You do an operation called f(). That gives you a new set: f(x). You can also change the f() to map
from x to a different set. Some ways you can do this are trivial. For example, you might have a function f(x) = 1 that maps from all numbers to 1. Not very exciting. You can then change it to f(x) =
1 + x. Now you’re mapping from every real number to every real number plus 1. You could change it again to f(x) = 1 + x + $x^2$ and get further adjustments still.
The derivative is just another way to map. Here's its definition:
$f'(x) = \lim_{h \to 0} \dfrac{f(x+h) - f(x)}{h}$
This is the way you map from x to f’(x), aka the derivative of f at x.
The next part of this section is about developing your graphical intuition. This can be a little mindscrewing, but it’s not so bad once you get it. Recall that the derivative at a point is
graphically the same as a tangent at that point. So, if you start with graph A, then make a new graph (graph B) built of graph A’s tangents, you’ll find that B is the derivative of A.
Here’s a simple example: Say you have a function, f(x) = 2x. You should quickly see that as the equation of a line. Specifically it’s a line that increases 2 vertical ones whenever it increases 1
horizontal unit. So, the tangent’s pretty easy here – remember the tangent is just the slope at a point. This function has the same slope everywhere: 2. So, if we take the tangent anywhere, we find
it’s a line with a slope of 2. So, we know that the derivative of f(x) = 2x is simply 2. That is, f’(x) = 2.
f’(x) is thus very easy to plot. It’s a constant function that’s always at 2. By now, you may see the logical extension. Any linear function’s derivative is just the coefficient of the variable.
With more complicated functions, it gets… well… more complicated. We’ll save that for later chapters. For now, try drawing out a random curve on a graph and see if you can figure out the derivative.
It can often be very counterintuitive. But, if you remember rules like the one above, for lines, it can help. For instance, anywhere on your random curve that has a roughly constant slope should have
a roughly horizontal line for its derivative. If the slope is harshly up, that constant is a high value. If it’s sharply down, it’s a low value. I encourage you to play around and try to find more
such tricks.
The book also has you find some derivatives using the equation that’s higher in this blog post. To be honest, I’m not sure how much utility that is. It’s probably good in the way your dad making you
shovel snow out of the driveway is good – builds character. But, you’ll soon learn much simpler ways to deal with these problems.
Next section: Leibniz notation.
How to play Nonograms
Nonograms are also known by other names, including Paint by Numbers, Griddlers, Pic-a-Pix, Picross, PrismaPixels, Pixel Puzzles, Crucipixel, Edel, FigurePic, Grafilogika, Hanjie, Illust-Logic, and
Japanese Crosswords. If you know how to play one of these, then the rules are the same!
Your aim in these puzzles is to colour the whole grid in to black and white squares. At the top of each column, and at the side of each row, you will notice a set of one or more numbers. These
numbers tell you the runs of black squares in that row/column. So, if you see '10 1', that tells you that there will be a run of exactly 10 black squares, followed by one or more white square,
followed by a single black square. There may be more white squares before/after this sequence.
This is an example of a 15*15 Nonogram. A bigger Nonogram will usually indicate a harder challenge.
When trying to solve any size Nonogram, look first at the bigger numbers. We will choose to look at the second column, the one with '10 1' in it. There are going to be a total of 11 black squares
here, and 4 white squares. Out of a total of 15 squares, we know already that it will be mostly black squares. There aren't that many possibilites here for this combination so let's look at two of
I have chosen here the two extremes. In the first possibility, the first run of 10 black squares starts on the first square. In the second possibility, the first run of 10 black squares finishes on
the 13th square.
We don't know exactly where this run of 10 black squares starts or finishes, but we do know that if it isn't one of these two extremes, if must be somewhere in the middle. So, we can colour in black
where the two extremers overlap:
It turns out that we can also do this for some of the other columns:
We can also do the same thing for rows. But, let's look at something else first. Look at the 7th row. This has '2 6' as the clues, but we already have 3 isolated black squares in here already! If
we're going to have tow unbroken runs of black squares, two of these must be joined somehow. The only way this can happen is if the 2nd and 3rd squares are joined to make up the '6'. We don't know
exactly where the 6 starts and finishes, but we do know that they must be joined. So we can fill in black the squares in the middle.
Looking at this row again, although we don't know where the '6' starts, we do know that it can't extend more than a further two squares to the left. The '2' is constrained as well, so we can start to
fill some white squares in here:
We can use similar techniques to fill in more of the black squares. More rows have three isolated black squares, but only two clues. Further, looking at the 8th row, the clues are '5 8', the 5 is
constrained by the first black square already there, it must start in the 1st or 2nd square. This allows us to fill some more black squares in:
We continue through the grid like this, we generally fill the black squares in first, and the white squares start to come later. Here is the completed grid:
This example is actually our daily Nonogram for November 6th 2010,
want to play now? | {"url":"http://puzzlemadness.co.uk/howtoplaynonograms.php","timestamp":"2014-04-21T12:18:39Z","content_type":null,"content_length":"7687","record_id":"<urn:uuid:1dec5af0-ddc7-4471-ad49-acfb330a2563>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
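The "two extremes" overlap trick used above can also be automated. Here is a small sketch in R for a single run inside a row; handling a full clue list would also need to account for the other runs and their separating gaps:

# Cells that are forced black for a run of length `run` whose earliest
# possible starting square is `earliest` and whose latest possible ending
# square is `latest` (both determined by the row length and the other clues).
forced_cells <- function(run, earliest, latest) {
  left_end    <- earliest + run - 1   # run pushed as far left as possible
  right_start <- latest - run + 1     # run pushed as far right as possible
  if (right_start > left_end) integer(0) else right_start:left_end
}

# The '10 1' column of the example: the 10-run can start no earlier than
# square 1 and must end by square 13 (leaving a gap plus the final '1').
forced_cells(run = 10, earliest = 1, latest = 13)  # squares 4 to 10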
Functional Analysis and Semi-groups
             
Colloquium "This book is devoted to the study of semigroups (associative multiplicative systems for which no cancellation rules are postulated) and their linear representations in Banach
Publications spaces. Commutative semigroups receive generous but not exclusive attention."
1996; 808 pp; -- M. H. Stone
"This volume is a revised edition, with Phillips as co-author, of the well-known book of Hille under the same title. The revision is a thorough one, involving rearrangement, the
Volume: 31 addition of new background material, and the incorporation of many new results. Each part and each chapter commences with a summary, and each chapter closes with a short list of
general references to the bibliography presented at the end of the book. Other references are liberally supplied in the body of the text. There is an extensive index."
0-8218-1031-6 -- Mathematical Reviews
ISBN-13: Part One. Functional Analysis
• Abstract spaces
List Price: US$65 • Linear transformations
• Vector-valued functions
Member Price: • Banach algebras
US$52 • General properties
• Analysis in a Banach algebra
Order Code: COLL/ • Laplace integrals and binomial series
Part Two. Basic Properties of Semi-Groups
• Subadditive functions
• Semi-modules
• Addition theorem in a Banach algebra
• Semi-groups in the strong topology
• Generator and resolvent
• Generation of semi-groups
Part Three. Advanced Analytical Theory of Semi-Groups
• Perturbation theory
• Adjoint theory
• Operational calculus
• Spectral theory
• Holomorphic semi-groups
• Applications to ergodic theory
Part Four. Special Semi-groups and Applications
• Translations and powers
• Trigonometric semi-groups
• Semi-groups in \(L_p(-\infty ,\infty )\)
• Semi-groups in Hilbert space
• Miscellaneous applications
Part Five. Extensions of the theory
• Notes on Banach algebras
• Lie semi-groups
• Functions on vectors to vectors
• Bibliography
• Index | {"url":"http://ams.org/bookstore?fn=20&arg1=collseries&ikey=COLL-31","timestamp":"2014-04-19T07:14:44Z","content_type":null,"content_length":"15951","record_id":"<urn:uuid:07c80e71-6a86-463f-9ab8-436c195a7658>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is Chaos Theory?
Chaos is the science of surprises, of the nonlinear and the unpredictable. It teaches us to expect the unexpected. While most traditional science deals with supposedly predictable phenomena like
gravity, electricity, or chemical reactions, Chaos Theory deals with nonlinear things that are effectively impossible to predict or control, like turbulence, weather, the stock market, our brain
states, and so on. These phenomena are often described by fractal mathematics, which captures the infinite complexity of nature. Many natural objects exhibit fractal properties, including landscapes,
clouds, trees, organs, rivers etc, and many of the systems in which we live exhibit complex, chaotic behavior. Recognizing the chaotic, fractal nature of our world can give us new insight, power, and
wisdom. For example, by understanding the complex, chaotic dynamics of the atmosphere, a balloon pilot can “steer” a balloon to a desired location. By understanding that our ecosystems, our social
systems, and our economic systems are interconnected, we can hope to avoid actions which may end up being detrimental to our long-term well-being.
Principles of Chaos
• The Butterfly Effect: This effect grants the power to cause a hurricane in China to a butterfly flapping its wings in New Mexico. It may take a very long time, but the connection is real. If the
butterfly had not flapped its wings at just the right point in space/time, the hurricane would not have happened. A more rigorous way to express this is that small changes in the initial
conditions lead to drastic changes in the results. Our lives are an ongoing demonstration of this principle. Who knows what the long-term effects of teaching millions of kids about chaos and
fractals will be?
• Unpredictability: Because we can never know all the initial conditions of a complex system in sufficient (i.e. perfect) detail, we cannot hope to predict the ultimate fate of a complex system.
Even slight errors in measuring the state of a system will be amplified dramatically, rendering any prediction useless. Since it is impossible to measure the effects of all the butterflies (etc)
in the World, accurate long-range weather prediction will always remain impossible.
• Order / Disorder: Chaos is not simply disorder. Chaos explores the transitions between order and disorder, which often occur in surprising ways.
• Mixing: Turbulence ensures that two adjacent points in a complex system will eventually end up in very different positions after some time has elapsed. Examples: Two neighboring water molecules
may end up in different parts of the ocean or even in different oceans. A group of helium balloons that launch together will eventually land in drastically different places. Mixing is thorough
because turbulence occurs at all scales. It is also nonlinear: fluids cannot be unmixed.
• Feedback: Systems often become chaotic when there is feedback present. A good example is the behavior of the stock market. As the value of a stock rises or falls, people are inclined to buy or
sell that stock. This in turn further affects the price of the stock, causing it to rise or fall chaotically.
• Fractals: A fractal is a never-ending pattern. Fractals are infinitely complex patterns that are self-similar across different scales. They are created by repeating a simple process over and over
in an ongoing feedback loop. Driven by recursion, fractals are images of dynamic systems – the pictures of Chaos. Geometrically, they exist in between our familiar dimensions. Fractal patterns
are extremely familiar, since nature is full of fractals. For instance: trees, rivers, coastlines, mountains, clouds, seashells, hurricanes, etc.
“As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.”
-Albert Einstein
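The sensitive dependence on initial conditions described in the principles above can be made concrete with a few lines of code. A minimal sketch using the logistic map x → r·x·(1−x) (the parameter r = 3.9 and the starting values are arbitrary illustrative choices):

# Two trajectories of the logistic map, started one part in a billion apart.
r = 3.9                    # a parameter value in the chaotic regime
x, y = 0.4, 0.4 + 1e-9     # nearly identical initial conditions
for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 15 == 0:
        print(f"step {n:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.1e}")

After a few dozen iterations the two trajectories typically differ already in their first digit, even though the starting points agreed to nine decimal places: precise knowledge of the initial state buys only a short horizon of predictability.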
16 Responses to “What is Chaos Theory?”
1. Dear Fractal Folk
Thank you for sharing the wonderful world of fractals! I have visited your site many times since finding it about a year ago and am encouraging my kids to take an interest. Thank you. Louise,
2. Chaos is one of Physics Foibles. As Godel revealed – maybe we can’t prove everything!! long term weather prediction is trumped by Chaos Theory.
3. Thank you for your useful writing.It helped me to write my article.Somayeh,Iran,Khomeini shahr
4. I think each and everything in the universe is interrelated and interconnected.
5. A puzzle for “chaos thinking” is that the universe is NOT chaotic in the sense of actual butterfly wings having an effect even one meter away, never mind Texas. The world of reality is not an
approximation. By contrast, the world of chaos theory is full of approximations, because one CANNOT specify initial conditions to an infinite degree.
The “butterfly effect” does not apply to the macroscopic real world. The “butterfly effect” is a mathematical construct, or perhaps better, an artifact of “digitization for computation.”
Beautiful, but not real.
6. I have my objections to what meteorologicalengineer wrote. He is wrong on what mathematically “initial conditions” are and as a result he claims that chaos deals with a lot of approximations.
That is entirely untrue because chaos is remarkably precise in terms of chaotic behavior. Turbulences, triggers, fractal expansions, borders of chaos possess the quality of exactness. The key
feature of chaotic behavior is order and precision. It is also untrue that “the butterfly effect” is a construct or an artifact nonexistent in reality. I can present loads of heuristic evidence
from the financial charts like indices, stocks, bonds and currencies falsifying meteorologicalengineer's conjecture.
It also forces me to ask him, do you really think that mathematics together with its constructs and artifacts is simply manmade? The theory of chaos denies it.
7. I do agree with the engineer that the world of chaos theory is full of approximations, and with Paul about chaotic behavior. Imagine if the butterfly can affect the world in some way, how it would affect:
the direct and horrible acts like; war, terrorism, war and violent movie or war-game and more and more… which in every second has a direct influence in our world and life, as Calvino says, rains
in our mind.
8. I am by no means an expert in the field of nonlinear dynamics, but I’ve taken a few college courses on it and I’ve read a few pop-science books (Chaos by James Gleick is great, as is Does God
Play Dice? by Ian Stewart). So, I consider myself knowledgeable enough on the subject to clarify a few points. The beauty of so-called “chaos math” is that we can simulate impossibly complex
systems (i.e. everyone’s favorite, weather) with incredibly simple models. Even more importantly, the behavior of those simple systems is complex and erratic. I forget which of the two books I
mentioned before the quote is in, but one of my favorite quotes about models used in nonlinear dynamics is paraphrased as “there is no point in having a map of a city which is as large and
complex as the city itself”. What this means is that mathematicians want to model complex activity with as simple of a model as possible. This is the inherent beauty of chaos.
Now, on a more personal aside, I have a pet peeve against the phrase “butterfly effect”. It’s misleading, in my opinion. It gives the layperson a nice perspective, but the way it’s portrayed (“a
butterfly flapping its wings causes a hurricane across the globe”) implies a direct causality. The effect is not that straightforward, which I think is the point the meteorological engineer is
trying to make.
9. Thank you for this wonderful, yet simplistic explanation of chaos theory! I use this theory (and consequently this page) to inform and empower friends and strangers alike. Too many believe they
are too small to make a difference in most if not all of the aspects of modern civilization, but wisdom has shown us for thousands of years that it’s not so. Thank you simplifying such a
monumental concept and keeping the mathematics and science intact. You do the alchemists of old great justice!
10. My perspective, obviously, is a stark contrast to the calculating outlook some in math and science tend to have. I am horrible at math/physics, but I have a tremendous respect and admiration for
their tremendous power. So here’s the artist/spiritualist perspective: Chaos theory existed long before the great minds of physics and mathematics named it such. In fact, the Chinese word for
“chaos” contains the root word meaning “opportunity”. Dating some philosophies and ideas of chaos theory around the 14th century B.C. Although the connection may seem like grasping straws to
some, the study of alchemy (which evidence suggests occurred as early as ancient Sumeria) directly correlates with some of the ideas of chaos theory in very transcendent ways. Though equations may
sometimes seem to disprove or augment our core ideas of what is natural or “real”, more often than not it is information we lack that renders our understanding, which is received through
lifetimes of apt research and relentless study. Perhaps we do not understand reality to the extent in which science leads us to believe. History is full of prideful human error made by many
ingenious minds, swept away by the vastness of their own intelligence. To make my point, many scholars, philosophers, psychologists, physicists etc. have dedicated a portion, if not the entirety
of their lives piecing together the mysteries of global consciousness and interconnectedness as well as the intimate relationship between order and disorder. Carl Jung spent a great deal of time
on the subject of enlightenment by means of chaos. The butterfly effect is indeed indirect when we investigate the lineage of cause and effect, however the point is not HOW influential the
butterfly is but rather that it has any influence at all. The idea of a universal consciousness is becoming more feasible with the help of science and mathematics, that we somehow affected the
butterfly, who affected the hurricane, which affected us (to grossly oversimplify), a never ending fractal cycle of cause and effect. Lately, those seasoned in hard fact finding have had to play
a bit of catch up with metaphysics, as larger numbers of people have spiritual or unexplainable experiences in their lives that have nothing to do with religion and actually, more to do with the
ideas of chaos theory.
11. I just say yes to all because that is chaos in its self. I believe chaos is universal while being precise and linear it is non linear and unpredictable which by definition causes chaos in its
self and I call this true chaos and chaos is everywhere. This maybe a simple idea compared to all other ideas but it is how I believe chaos is.
12. I’m a high school student looking at chaos theory for a Math project, so I may not be correct about my idea of what the chaos theory is but please try to understand as I have not received any
“advanced” education as I am only a high school student. Now I have visited many websites and read many of the comments on there and here. Everyone seems to hate the Butterfly effect as they
say it is “irrational” or “misleading”. The fact that a butterfly flapping its wings in China can cause a hurricane in another far away place, I can understand, that many would find it to be
ambiguous. The butterfly effect is indeed indirect when we investigate the lineage of cause and effect, however the point is not HOW influential the butterfly is but rather that it has any
influence at all. The idea of a universal consciousness is becoming more feasible with the help of science and mathematics, that we somehow affected the butterfly, who affected the hurricane,
which affected us (to grossly oversimplify), a never ending fractal cycle of cause and effect.
13. I have coined a term for this topic…Computational Density…..it states the deeper the zoom, the greater the number of iterations needed for viewing and the greater an objects robustness. This will
lead to a blending of fractals, chaos, cellular automata, and the simulation hypothesis. I will be brief. Consider the Mandelbrot set, infinitely many copies, smaller and smaller and smaller, each
one a complete set unto itself. Computational density states that a trillion iteration M set is in fact more “powerful” than the 1st zoom level.
Now, allow me to take out my galaxy scale microscope. On this slide I have the Milky Way. Let's take a look at the universal spin fractal. 1st zoom the galaxy….1,000 zooms the solar
system…..10,000 the earth…..100,000 zooms a hurricane……1,000,000 zooms a tornado….1,000,000,000,000 your DNA. According to Computational density in a simulated universe….you are much harder to
render than the galaxy as a whole.
Now, the interesting question is what does one do with all that power. Can the one make a butterfly that spawns up the tree of scale. Can one use simple tools, taking advantage of the Joseph
effect, using complexity and iteration to participate more fully. Much love, Quartaro Industries
14. Without life, it would be possible to predict the fate of the universe at any given time, given enough initial conditions and powerful enough supercomputer
15. I’ve been a fan of Chaos Theory and fractals ever since I picked up the book Chaos (Gleick). However, I’m a bit confused (I am in no way a math whiz or scientist of any kind), doesn’t the theory
basically state that even in chaotic systems patterns are identifiable, one just has to look close enough or far enough away to identify them?
16. thanks!! it’s a wonderful site!! It helped me so much!!fantastic!! North Korea jiyoon
How to define the equivalence of Maurer-Cartan elements in an $L_{\infty}$-algebra?
First let $L^{\bullet}$ be a pro-nilpotent differential graded Lie algebra (dgla). We have the set of Maurer-Cartan elements in $L^{\bullet}$ ($MC(L^{\bullet})$) which are $\alpha \in L^1$ such that
it satisfies the Maurer-Cartan equation $$ \partial \alpha+ \frac{1}{2}[\alpha,\alpha]=0. $$ We have a definition of gauge equivalence: $\alpha_0,\alpha_1\in MC(L^{\bullet})$ are called gauge
equivalent if and only if there exists a $\xi \in L^0$ such that $$ e^{\text{ad}\xi}\circ(\partial +\text{ad}\alpha_0)\circ e^{-\text{ad}\xi}=\partial +\text{ad}\alpha_1 $$ or in other words
$$ e^{\text{ad}\xi}\alpha_0-\frac{e^{\text{ad}\xi}-1}{\text{ad}\xi}\partial\xi=\alpha_1. $$ From the first definition it is easy to see that gauge equivalence is really an equivalence relation. From the
second definition we can define a path between $\alpha_0$ and $\alpha_1$. Let $$ \alpha(t)=e^{t\text{ad}\xi}\alpha_0-\frac{e^{t\text{ad}\xi}-1}{\text{ad}\xi}\partial\xi. $$ Then $\alpha(t)$ is a
power series of $t$ in $L^{\bullet}$, $\alpha(0)=\alpha_0$, $\alpha(1)=\alpha_1$ and we can prove $\partial\alpha(t)+ \frac{1}{2}[\alpha(t),\alpha(t)]=0.$
Now we come to $L_{\infty}$ algebra $L^{\bullet}$ with higher bracket $[\cdot,\ldots,\cdot]_n$ with $n$ auguments. We still have Maurer-Cartan elements in $ L^{\bullet} $ ( $MC(L^{\bullet})$) which
are $ \alpha \in L^1$ such that it satisfies the Maurer-Cartan equation $$ \partial \alpha+ \sum \frac{1}{k!}[\alpha,\ldots,\alpha]_k=0. $$
My question is how to define equivalence of Maurer-Cartan elements in $ L^{\bullet} $?
Of course we can define $\alpha_0,\alpha_1\in MC(L^{\bullet})$ to be "equivalent" if and only if there exists a power series $\alpha(t)\in L^{\bullet}$ such that $\alpha(0)=\alpha_0$, $\alpha(1)=\alpha_1$ and $\alpha(t)$ satisfies the $L_{\infty}$ Maurer-Cartan equation. However, it is difficult to show that this is an equivalence relation: for example, how does one connect two paths?
It seems that a generalization of gauge equivalence is what we want. But $e^{\text{ad}\xi}$ is not enough since we have higher bracket.
2 Answers
To add a bit to what Damien says, addressing your question on how to generalise the gauge approach (which is equivalent to the approach outlined by Damien, as proved by several people):
You can view gauge symmetries in DGLAs via solving the differential equation $$ \frac{d\alpha}{dt}=-\partial\xi-[\alpha,\xi], $$ where $\xi$ is the given element of degree $0$. This
generalises to homotopy Lie algebras as follows: consider the differential equation $$ \frac{d\alpha}{dt}=-\partial\xi-[\alpha,\xi]-\frac12[\alpha,\alpha,\xi]-\ldots-\frac{1}{p!}[\underbrace{\alpha,\ldots,\alpha,}_{p \text{ times}}\xi]_{p+1}-\ldots, $$ where the right hand side is simply the negative of $[\xi]_1^\alpha$, the first structure map of the twisted
Lie-infinity structure $$ [x_1,\ldots,x_k]_k^\alpha:=\sum_{p\ge0}\frac{1}{p!}[\underbrace{\alpha,\ldots,\alpha,}_{p \text{ times}}x_1,\ldots,x_k]_{k+p}. $$ From that it is almost
obvious that moving along the integral curves of this equation preserves the property of being Maurer--Cartan, since the Maurer--Cartan condition for $\alpha+\beta$, where $\alpha$ is a
Maurer--Cartan element, and $\beta$ is infinitesimal becomes $$ \partial\beta+[\alpha,\beta]+\frac12[\alpha,\alpha,\beta]+\ldots+\frac{1}{p!}[\underbrace{\alpha,\ldots,\alpha,}_{p \text{ times}}\beta]_{p+1}+\ldots=0, $$ that is $[\beta]_1^\alpha=0$, and so $\beta=[\xi]_1^\alpha$ satisfies that, $[\cdot]_1^\alpha$ being a differential of the twisted structure. This circle
of ideas is explained in many places, one important reference is ``Lie theory for nilpotent $L_\infty$-algebras'' by Ezra Getzler (Ann. of Math. (2) 170 (2009), no. 1, 271--301.).
Thank you very much! I will look at paper you suggested. By the way, since the equivalent class of Maurer-Cartan elements forms an $\infty$-groupoid, does it means that the "composition"
of two equivalences is not unique? – Zhaoting Wei Aug 31 '12 at 4:24
Yes, that is a good point. In particular, if you look at the formulas above, you of course realise that unlike the case of DGLAs, $L_0$ is not a Lie subalgebra of $L$, so you cannot
expect the words "gauge equivalence" to be interpreted via honest Lie group symmetries. However, it's not too bad, since you have the desired properties hold up to homotopy. – Vladimir
Dotsenko Aug 31 '12 at 7:19
This is explained in Section 4.5.2 of "deformation quantization of poisson manifolds" by Kontsevich (http://arxiv.org/abs/q-alg/9709040).
The way you wrote the homotopy between two Maurer-Cartan elements is not enough: as is explained in the above reference, you also need a 1-parameter family of infinitesimal gauge transformations.
A quick reformulation of Kontsevich's definition is the following. An equivalence between two Maurer-Cartan elements $a$ and $b$ in $\mathfrak g$ is a Maurer-Cartan element $c$ in $DR([0,1])\otimes\mathfrak g$ such that $a=c(0)$ and $b=c(1)$.
Note that $DR(...)$ stands for the de Rham algebra of "...".
Linear function
In mathematics, the term linear function refers to two different, although related, notions:^[1]
• a polynomial function of degree one or less, including the zero polynomial (the sense used in calculus and analytic geometry);
• a linear map between two vector spaces (the sense used in linear algebra).
As a polynomial function
In calculus, analytic geometry and related areas, a linear function is a polynomial of degree one or less, including the zero polynomial (the latter not being considered to have degree zero).
When the function is of only one variable, it is of the form
$f(x) = ax + b$,
where a and b are constants, often real numbers. The graph of such a function of one variable is a nonvertical line.
For a function $f(x_1, \ldots, x_k)$ of any finite number of independent variables, the general formula is
$f(x_1, \ldots, x_k) = b + a_1 x_1 + \ldots + a_k x_k$,
and the graph is a hyperplane of dimension k.
A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial. Its graph, when there is only one independent variable, is a horizontal line.
In this context, the other meaning (a linear map) may be referred to as a homogeneous linear function or a linear form. In the context of linear algebra, this meaning (polynomial functions of degree
0 or 1) is a special kind of affine map.
As a linear map
In linear algebra, a linear function is a map f between two vector spaces that preserves vector addition and scalar multiplication:
$f(\mathbf{x} + \mathbf{y}) = f(\mathbf{x}) + f(\mathbf{y})$
$f(a\mathbf{x}) = af(\mathbf{x}).$
Here a denotes a constant belonging to some field K of scalars (for example, the real numbers) and x and y are elements of a vector space, which might be K itself.
Some authors use "linear function" only for linear maps that take values in the scalar field;^[4] these are also called linear functionals.
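The distinction between the two notions is easy to check numerically. A short sketch (the matrix, vectors and scalar below are arbitrary illustrative choices):

import numpy as np

A = np.array([[2.0, -1.0],
              [0.5,  3.0]])        # a linear map f(v) = A @ v on R^2
f = lambda v: A @ v

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
a = 4.0

print(np.allclose(f(x + y), f(x) + f(y)))   # True: vector addition is preserved
print(np.allclose(f(a * x), a * f(x)))      # True: scalar multiplication is preserved

g = lambda t: 2.0 * t + 1.0                 # degree-one polynomial, "linear" in the calculus sense
print(g(1.0 + 2.0) == g(1.0) + g(2.0))      # False: the constant term breaks additivity

The matrix map satisfies both defining properties, while the degree-one polynomial g with a nonzero constant term does not, so g is linear only in the first (polynomial) sense.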
Word problem and second derivatives
December 17th 2008, 03:40 AM
Word problem and second derivatives
Colin sets off for school, which is 800m from home. His speed is proportional to the distance he still has to go.
Let x meters be the distance that he has gone, and y meters be the distance he still has to go.
a) Sketch the graph of x against t and y against t.
I need help with writing an expression for this word problem in both formats so that I can sketch the graph. I am bad with word problems (Worried)
December 17th 2008, 04:01 AM
Colin sets off for school, which is 800m from home. His speed is proportional to the distance he still has to go.
Let x meters be the distance that he has gone, and y meters be the distance he still has to go.
a) Sketch the graph of x against t and y against t.
I need help with writing an expression for this word problem in both formats so that I can sketch the graph. I am bad with word problems (Worried)
First of all, translate all the information given in the text. Then you may have useless information, but you cannot know at first sight.
(No, the fact that the boy's name is Colin is not an important information (Rofl))
Note that y=800-x. Why ?
Let s be Colin's speed.
We know that the speed is proportional to y. This means that there exists a constant k such that $s=k*y$.
Let's deal with "x against the time"
Remember that the speed corresponds to the derivative of the distance with respect to the time.
Thus $s=\frac{dx}{dt}$
$s=k*y=k*(800-x)=k'-kx$, where k'=800k.
So you're now with the following equation :
I hope you know how to solve a differential equation (Surprised)
December 17th 2008, 04:26 AM
Note that y=800-x. Why ?
Actually, that didn't occur to me. Thanks :D
December 18th 2008, 12:54 AM
On another thought, I still don't get it (Doh) ...
Is this the derivative dx / dt?
$k'-kx=\frac{dx}{dt}$
P.s. I am not sure why the k' notation is used. Also, I need the $\frac{d^2x}{dt^2}$
December 19th 2008, 12:17 AM
December 19th 2008, 12:26 AM
k' is just a constant. I wrote it above : k'=800k
$\frac{d^2x}{dt^2}=-k \frac{dx}{dt}=-kk'+k^2 x$ (differentiate implicitly and then substitute dx/dt) | {"url":"http://mathhelpforum.com/calculus/65355-word-problem-second-derivatives-print.html","timestamp":"2014-04-19T04:39:52Z","content_type":null,"content_length":"9750","record_id":"<urn:uuid:57e2848b-cb2a-4d9a-9b49-4141820ba9f0>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00086-ip-10-147-4-33.ec2.internal.warc.gz"} |
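For completeness, a quick symbolic check of this result (assuming Colin starts at home, so x(0) = 0, and using SymPy):

import sympy as sp

t = sp.symbols('t', nonnegative=True)
k = sp.symbols('k', positive=True)
x = sp.Function('x')

# dx/dt = k*(800 - x), x(0) = 0  (speed proportional to the distance still to go)
sol = sp.dsolve(sp.Eq(x(t).diff(t), k * (800 - x(t))), x(t), ics={x(0): 0})
print(sol)                    # Eq(x(t), 800 - 800*exp(-k*t)), up to SymPy's formatting

xt = sol.rhs
print(sp.simplify(800 - xt))  # y(t) = 800*exp(-k*t), decaying from 800 towards 0
print(sp.diff(xt, t, 2))      # -800*k**2*exp(-k*t), i.e. d2x/dt2 = -k*dx/dt

So x(t) = 800*(1 - e^(-k*t)) climbs from 0 towards 800, y(t) = 800*e^(-k*t) decays from 800 towards 0, and the second derivative agrees with the expression -k*dx/dt = -k*k' + k^2*x given above.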
[Numpy-discussion] setting decimal accuracy in array operations (scikits.timeseries)
Robert Kern robert.kern@gmail....
Wed Mar 3 14:33:41 CST 2010
On Wed, Mar 3, 2010 at 14:09, Marco Tuckner
<marcotuckner@public-files.de> wrote:
> Hello,
> I am using scikits.timeseries to convert an hourly timeseries to a lower
> frequency using the appropriate function [1].
> When I compare the result to the values calculated with a Pivot table in
> Excel there is a difference in the values which reaches quite high
> values in the total sum of all monthly values.
> I found out that the difference arises from different decimal settings:
> In Python the numbers show:
> 12.88888888
> whereas in Excel I see:
> 12.8888888888888
> The difference due to the different decimals is small for single values
> and accumulates to a 2-digit number for the total of all values.
> * Why do these differences arise?
> * What can I do to achieve comparable values?
We default to printing only eight decimal digits for floating point
values for convenience. There are more under the covers. Use
numpy.set_printoptions(precision=16) to see all of them.
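An illustrative snippet of the same point (the exact digits shown may differ slightly between NumPy versions):

import numpy as np

a = np.array([1.0]) / 3.0
print(a)                  # [0.33333333]          -- eight digits shown by default
np.set_printoptions(precision=16)
print(a)                  # [0.3333333333333333]  -- more digits, same stored value
print("%.20f" % a[0])     # 0.33333333333333331483 -- the underlying binary double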
If you are still seeing actual calculation differences, we will need
to see a complete, self-contained example that demonstrates the problem.
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-March/049132.html","timestamp":"2014-04-16T16:54:09Z","content_type":null,"content_length":"4584","record_id":"<urn:uuid:4d4d92e8-9bed-4281-8648-42afb57efd00>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
The finite-section approximation for ill-posed integral equations on the half-line
• Integral equations on the half-line are commonly approximated by the finite-section approximation, in which the infinite upper limit is replaced by a positive number called the finite-section
parameter. In this paper we consider the finite-section approximation for first-kind integral equations, which are typically ill-posed and call for regularization. For some classes of such
equations corresponding to inverse problems from optics and astronomy we indicate the finite-section parameters that allow standard regularization techniques to be applied. Two discretization schemes
for the finite-section equations are also proposed and their efficiency is studied.
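A rough numerical sketch of the setting (every concrete choice below, the Laplace-transform kernel, the truncation point T, the grid and the regularization parameter, is an illustrative assumption and not taken from the paper): truncate the half-line equation to a finite section [0, T], discretize it with a quadrature rule, and apply Tikhonov regularization to the resulting ill-conditioned first-kind system.

import numpy as np

# Model first-kind equation on the half-line: integral_0^inf e^{-s t} x(t) dt = y(s),
# with x(t) = e^{-t}, so that y(s) = 1/(s + 1) exactly.
def solve_finite_section(T=10.0, n=200, lam=1e-6):
    t = np.linspace(0.0, T, n)                 # nodes on the finite section [0, T]
    w = np.full(n, T / (n - 1))                # trapezoid quadrature weights
    w[0] = w[-1] = T / (2 * (n - 1))
    s = np.linspace(0.1, 5.0, n)               # collocation points in s
    A = np.exp(-np.outer(s, t)) * w            # discretized, truncated operator
    y = 1.0 / (s + 1.0)                        # exact data
    # Tikhonov regularization: minimize ||A x - y||^2 + lam ||x||^2
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    return t, x

t, x = solve_finite_section()
print(np.max(np.abs(x - np.exp(-t))))          # error; depends strongly on T, lam and noise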
Analytical expression for formation energy
Hey folks,
I'm using Molecular Dynamics to calculate the formation energy of an interstitial in FCC metal.
Basically what I did is to minimize the perfect structure, and then find the minimized energy of the lattice with a [100] dumbbell interstitial, to get the difference in energies.
I want to compare the result to an analytical solution by calculating the two energies with local-density-dependent interaction energies, but I haven't found these expressions in Ashcroft or Kittel.
Since I'm not a materials physicist/chemist, I'm quite stuck... Does any of you know where I can find the appropriate analytical expressions?
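For reference, the bookkeeping usually applied to such MD energies (one common convention for a monoatomic cell; every number below is made up purely for illustration):

# Formation energy of a self-interstitial from two relaxed total energies.
N = 500                      # atoms in the perfect supercell
E_perfect = -1780.0          # minimized energy of the perfect N-atom cell (eV)
E_defect = -1779.5           # minimized energy of the (N+1)-atom cell with the dumbbell (eV)

E_formation = E_defect - (N + 1) / N * E_perfect
print(E_formation)           # about 4.06 eV with these made-up numbers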
Summary: A DIAGRAMMATIC DESCRIPTION OF THE MULTIVARIATE
1. Introduction
From their introduction in Jones's Planar Algebras I, planar algebras have been
linked to knot invariants [12]. Here we present a planar algebra which can be used
to describe the multivariate Alexander polynomial used by Bigelow for the single
variable Alexander polynomial. Alexander discovered what would become known as
the Alexander polynomial of a knot and published it in his 1928 paper Topological
Invariants of Knots and Links [2]. In 1969, Conway discovered the skein relation
∇(L_+) − ∇(L_−) = (q − q^{-1}) ∇(L_0)
which could be used for both identifying when a knot invariant was equivalent to
Alexander's and for calculating the invariant, by making local changes to crossings.
A year later, Conway discovered the Conway potential function, or the multivariate
Alexander polynomial [7], which is more effective at distinguishing links than the
single variable invariant. In 1993, Murakami published a list of axioms for the
multivariate Alexander polynomial in his paper A State Model for the Multi-variable
Alexander Polynomial [14]. This contribution was analogous to the discovery of
the skein relation for the single variable version. One could then determine if a | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/514/5395085.html","timestamp":"2014-04-21T15:06:55Z","content_type":null,"content_length":"8345","record_id":"<urn:uuid:58534250-bc8d-44d9-90be-d9d6033e3557>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
By Kyle Gann
Math phobes can get lost this week. God, I love numbers. My high school math teachers thought I should go into math. Come to think of it, so did my music teachers. And when La Monte Young sets up one
of his vibrating sinetone sculptures such as the one that's running Thursdays and Saturdays from two to 12 at the Mela Foundation, 275 Church Street, I get to use music as an excuse to bathe in the
algebra I left behind. Let others get their ears massaged by the pulsating drones. I like to gaze at the tuning diagrams and let my mind slither naked through the mysterious clusters of luscious
And what integers there are: large prime numbers, octaves of primes, whole classes of primes newly categorized for musical purposes. Having captured another octave of the Overtone series, Young has
strung his aural hammock between the 1792nd and 2304th overtones, where he's basking peacefully. The installation, whose 107-word title begins The Base 9:7:4 Symmetry in Prime Time ... (I save more
space by not completing it than I waste with this parenthesis), consists of 35 sine tones stretched across 10 octaves, 20 of them squeezed into a small band in the seventh octave, some separated by
only 1/14th of a half step.
Young likes the effect of large prime-numbered ratios, including Mersenne Primes (primes that conform to the formula 2^p − 1, such as 31) and what he calls twin primes (primes separated by only 2,
such as 59 and 61). He's even invented a new type: Young's Primes, expressible by the formula p × m^n − 1, where p is a prime, m is a positive integer that isn't a power of 2, and n is an integer
greater than 1. Example: 71.
"This is over my head,". you're saying, but listen. The point of all those "minus ones" is that Young uses tones that approximate the most consonant overtones, but are far more complex in their
resulting combined wave forms. His math gives him a variety of sizes of seventh and ninth intervals, all closing in on the octaves over a fundamental B (actually a quarter-tone flat)., In each
octave, all the pitches are within the major third between A and C sharp. Imagine a ladder of 1 0 octaves of the same pitch. Now imagine the rungs bent and diffracted into lots of different tones,
the lower rungs slightly lowered, the upper rungs raised. And because even these exotic overtones of a single low pitch are theoretically more harmonious than the scientifically irrational tuning of
a modern piano, you're hearing a wild frontier of tonality that has never been explored, the outer edge of consonance.
Walk into The Base 9:7:4 Symmetry and you'll hear a whirlwind of pitches swirl around you. Stand still, and the tones suddenly freeze in place. Within the room, every pitch finds its own little niche
where it resonates, and with all those close-but-no-cigar intervals competing in one space (not to mention their elegantly calculated sum- and difference-tones), you can alter the harmony you
perceive simply by pulling on your earlobe. If you visited Young's installation The Romantic Symmetry (over a 60 cycle base) at Dia Art Foundation back 'in '89, you remember the effect. But while
Romantic Symmetry was more "melodic" in a sense, since its overtones were more evenly spread through the range, The Base 9:7:4 Symmetry is more textural. Moving your head makes those tones leap from
high to low and back, while that cluster in the seventh octave, with its wild prime ratios like 269:271, fizzes in and out. Marian Zazeela's light sculptures in the same space are the perfect visual
analogue. Her Ruine Window 1992 for example, is a simple geometric construction of white boards illuminated with magenta light from one side, blue from the other.
Since she's working with colored shadows instead of colored surfaces, and light behaves differently from pigment, the colors combine opposite to the way we expect. (You only learn light-color theory
in art school, Zazeela says, if you go into television.) Stand in front of Ruine Window 1992 for a while, and let your eyes move up and down the verticals: not only will the colors take on a deep
intensity, creating an illusion of two-dimensionality, but the edges will flicker in your peripheral vision.
As the shimmering of Young's overtones resists being recorded, Zazeela's shadows fall flat when photographed one reason she's never been sufficiently celebrated in the art world for her originality
of her minimalist constructions. Both the sound and light sculptures are static entities that move wildly within your eyes and ears, proving with pure wave forms how subjective perception is. Since
we're more sophisticated visually than aurally, I figured out an exercise that, if you can hum, will help you hear more precisely what Young's sculpture is about., If you can isolate one of the lower
drones (not easy), slowly hum a major scale up from that pitch. (The beginning of "Row, Row, Row Your Boat" will do.) By the time you reach the third, fourth, and fifth steps, you'll be humming
pitches that find no resonance among the other drones-you'll be in the empty spaces. Hearing a gap within an articulated pitch space, as some European works of the '50s and '60s like Xenakis's
Pithoprakta asked us to do, is usually a task beyond mortal ears. But here, in these sustained sine waves, even earthlings can make out the negative musical spaces between the rungs of Young's
overtone ladder.
Why would you want to do that? Because it's there. Because music isn't always just background, or something familiar. Because you've never heard so complex a chord so pure. Because music that refuses
to change subverts capitalism. Because you'll never get any closer to the music of the spheres this side of enlightenment. And because there are more numbers in the musical universe than I IV V I.
This article originally appeared in the Village Voice. It appears here with the permission of the author. | {"url":"http://melafoundation.org/gann.htm","timestamp":"2014-04-20T10:46:32Z","content_type":null,"content_length":"10572","record_id":"<urn:uuid:be3ce351-584f-484a-b634-c9f4b41ae158>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US7334133 - Method for making a computer system implementing a cryptographic algorithm secure using Boolean operations and arithmetic operations and a corresponding embedded system
The present invention relates to security for computer systems and more particularly to a method for securing and protecting computer systems, particularly those, such as smart cards, employing
encryption algorithms for the protection of sensitive information.
Paul Kocher et al. introduced in 1998 [5] and published in 1999 [6] the concept of “Differential Power Analysis,” also known as DPA, as a method of attacking implementations of cryptographic algorithms through analysis of their power consumption. The initial targets were
symmetric cryptosystems such as the Data Encryption Standard (DES) or Advanced Encryption Standard (AES) candidates, but public-key cryptosystems have since proven equally vulnerable to DPA attacks.
In 1999, Chari et al. [2] suggested a generic countermeasure that consisted of separating all the intermediate variables. A similar “duplication” method was proposed by Goubin et al. [4], in a
particular case. These general methods generally sharply increase the amount of memory or the computation time required, as noted by Chari et al. Furthermore, it has been demonstrated that even the
intermediate steps can be attacked by DPA, so the separation of the variables must be performed in every step of the algorithm. This makes the question of additional memory and computation time even
more crucial, particularly for embedded systems such as smart cards.
In 2000, Thomas Messerges [8] studied DPA attacks applied to the AES candidates. He developed a general countermeasure that consisted of masking all the inputs and outputs of each elementary
operation executed by the microprocessor. This generic technique allowed him to assess the impact of these countermeasures on the five AES candidates.
However, for algorithms that combine Boolean functions and arithmetic functions, it is necessary to use two types of masks. One therefore needs a method for converting between the Boolean masking and
the arithmetic masking. This is typically the case for IDEA [7] and for three of the AES candidates: MARS [1], RC6 [9] and Twofish [10].
T. Messerges [8] has proposed an algorithm for performing this conversion. Unfortunately, Coron and Goubin [3] have described a specific attack showing that the “BooleanToArithmetic” algorithm
proposed by T. Messerges is insufficient for protecting oneself against DPA. Likewise, his “ArithmeticToBoolean” algorithm isn't foolproof either.
The object of the present invention is to propose two novel “BooleanToArithmetic” and “ArithmeticToBoolean” algorithms, which have proven to be foolproof against DPA attacks. Each of these algorithms
uses only operations that are very simple: XOR (exclusive OR), AND, subtraction, and the “shift left” of a register. The “BooleanToArithmetic” algorithm uses a constant number (equal to 7) of such
elementary operations, while the number of elementary operations involved in the “ArithmeticToBoolean” algorithm is proportional (it equals 5K+5) to the size (i.e., the number of bits K) of the
registers of the processor.
FIG. 1 is a block diagram of a smart card employing conversion means in the information storage elements for protection against DPA attacks.
FIG. 2 is a graphical representation of the Boolean to Arithmetic algorithm
FIG. 3 is a graphical representation of the Arithmetic to Boolean algorithm.
“Differential Power Analysis” (DPA) is an attack that makes it possible to obtain information on the secret key (contained in a smart card or cryptographic token, for example), by exploring
characteristic behaviors of transistor logic gates and software running in smart cards and other cryptographic devices and performing a statistical analysis of recordings of electric power
consumption measured over a large number of calculations with the same key.
This attack does not require any knowledge of the individual power consumption of each instruction, or of the position of each of these instructions in time. It is applied in exactly the same way as
soon as the attacker knows the outputs of the algorithm and the corresponding consumption curves. It is based solely on the following fundamental hypothesis:
Fundamental hypothesis: There is an intermediate variable, appearing during the calculation of the algorithm, such that the knowledge of a few bits of the key, (in practice less than 32 bits) makes
it possible to decide whether or not two inputs, (or respectively two outputs), give the same value for this variable.
The Masking Method
The present invention concerns the “masking” method suggested by Chari et al. [2]. “Towards Sound Approaches to Counteract Power Analysis Attacks, Proceedings of Advanced Cryptology, CRYPTO '99,
Springer-Vertag, pp. 398-412.
The basic principle consists of programming the algorithm so that the above fundamental hypothesis on which DPA is based is no longer verified (i.e., no intermediate variable ever depends on the
knowledge of an easily accessible subset of the secret key). More precisely, using a key sharing schema, each of the intermediate variables appearing in the cryptographic algorithm is separated into
several parts. This way, an attacker is obligated to analyze distributions from several points, which increases his task exponentially in terms of the number of elements of the separation.
The Conversion Problem
For algorithms that combine Boolean functions and arithmetic functions, two types of masking must be used:
□ A Boolean masking: x′=x⊕r.
□ An arithmetic masking: A=x−r modulo 2^K.
In this case, the variable x is masked by the random value r, which gives the masked value x′ (or A). The objective is to find an effective algorithm for switching from the Boolean masking to the
arithmetic masking and vice versa, while making sure that the intermediate variables are de-correlated from the data to be masked, which ensures DPA resistance.
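Concretely (a small toy illustration with 8-bit values, not taken from the patent text), the two maskings of the same sensitive value look like this:

import secrets

K = 8
MASK = (1 << K) - 1

x = 0x5A                      # the sensitive value (illustrative)
r = secrets.randbits(K)       # the random mask

x_boolean = x ^ r             # Boolean masking:    x = x_boolean XOR r
A_arith = (x - r) & MASK      # arithmetic masking: x = (A_arith + r) mod 2**K

assert x_boolean ^ r == x
assert (A_arith + r) & MASK == x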
Throughout the present document, the processor is assumed to be using K-bit registers (in practice, most of the time K is equal to 8, 16, 32 or 64). All of the arithmetic operations (such as addition
“+,” subtraction “−,” or doubling “z→2.z”) are considered to be modulo 2^K. For purposes of simplicity, the “modulo 2^K” will often be omitted herein.
To this end, the invention concerns a method and apparatus for securing and protecting sensitive information within a computer system comprising a processor and a memory, and a cryptographic
algorithm stored in the further memory. The cryptographic algorithm is implemented to protect sensitive information handled by the computer. Boolean operations and arithmetic operations, are utilized
to protect the sensitive information. At least one variable is separated into several parts, in a Boolean separation using the Boolean operation, and in the arithmetic separation using an arithmetic
operation. In order to switch from either of these operations to the other, a predetermined number of Boolean and arithmetic operations is performed on said parts and at least one random number by
means of the processor, so that for each of the values appearing during the operation, there is no correlation with said variable, the operation producing a result stored in the memory.
Advantageously, in order to switch from the Boolean separation to the arithmetic separation, the method includes the following steps:
□ separating all but one of the parts into at least two elements;
□ calculating at least two partial results that never depend on all the elements of a part;
□ in order to obtain all but one part of the arithmetic separation, gathering at least two of said partial results.
Advantageously, the separation of said parts into at least two elements uses a Boolean operation.
Advantageously, said gathering of two of said partial results is done by means of a Boolean operation.
Advantageously, the Boolean operation used for the separation of said parts into at least two elements is the “exclusive OR” operation.
Advantageously, the Boolean operation used for the gathering of said partial results is executed by means of the “exclusive OR” operation.
Advantageously, in order to switch from the Boolean separation to the arithmetic separation, only the “exclusive OR” and “subtraction” operations are used.
Advantageously, the Boolean separation into two parts using the “exclusive OR” operation, and the arithmetic separation into two parts using the “addition” operation, the method is characterized in
that, in order to switch from the Boolean separation to the arithmetic operation, five “exclusive OR” operations and two “subtraction” operations are used.
Advantageously, in order to switch from the arithmetic separation to the Boolean separation, one defines at least one variable obtained by means of a predetermined number of successive iterations
from an initial value that is a function of at least one random number, through successive applications of a transformation based on Boolean and arithmetic operations that is applied to said parts of
the arithmetic separation and to said at least one random number.
Advantageously, said transformation is based on the “exclusive OR,” “logical AND” and “logical shift left by 1 bit” operations.
Advantageously, all but one part of the Boolean separation is obtained by applying Boolean operations to said variable or variables obtained through successive iterations, to said parts of the
arithmetic separation, and to said random number or numbers.
Advantageously, the Boolean operations applied in order to obtain all but one of the parts of the Boolean separation are the “exclusive OR” and “logical shift left by 1 bit” operations.
Advantageously, the method for securing a computer system using K-bit registers, the arithmetic separation into two parts using the “addition” operation and the Boolean separation into two parts
using the “exclusive OR” operation, (2K+4) “exclusive OR” operations, (2K+1) “logical AND” operations, and K “logical shift left by 1 bit” operations are used in order to switch from the Boolean
separation to the arithmetic operation.
The invention also concerns an embedded system comprising a processor and a memory and a cryptographic algorithm adapted to be implemented and stored in the memory Boolean operations and arithmetic
operations are utilized, wherein at least one variable of the algorithm is separated into several parts, in a Boolean separation using a Boolean operation, and in an arithmetic separation using an
arithmetic operation. In-order to switch from either of these separations to the other, conversion means are provided for performing a predetermined number of Boolean and arithmetic operations on
said parts and at least one random number by means of the processor, so that for each of the values appearing during the operation, there is no correlation with said variable, the operation producing
a result stored in the memory.
The description that follows should be considered in conjunction with FIG. 1 which represents the configuration of a smart card capable of executing the inventive method.
From the Boolean Masking to the Arithmetic Masking
To calculate A=(x⊕r)−r, the following algorithm is used:
“Boolean to Arithmetic” Algorithm
□ Input: (x′, r) such that x=x′⊕r.
□ Output: (A, r) such that x=A+r.
Initialize Γ at a random value γ
□ T←x′⊕Γ
□ T←T−Γ
□ T←T⊕x′
□ Γ←Γ⊕r
□ A←x′⊕Γ
□ A←A−Γ
□ A←A⊕T
The “BooleanToArithmetic” algorithm uses 2 auxiliary variables (T and Γ), 1 call to the random generator, and 7 elementary operations (more precisely: 5 “XORs” and 2 subtractions).
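A direct transcription of this algorithm into executable form (a Python sketch assuming K = 32-bit registers; the secrets module stands in for the random generator, and subtraction is reduced modulo 2^K):

import secrets

K = 32
MASK = (1 << K) - 1

def boolean_to_arithmetic(x_prime, r):
    """Given x = x_prime XOR r, return A such that x = (A + r) mod 2**K."""
    gamma = secrets.randbits(K)     # the random value gamma
    T = x_prime ^ gamma
    T = (T - gamma) & MASK
    T = T ^ x_prime
    gamma = gamma ^ r
    A = x_prime ^ gamma
    A = (A - gamma) & MASK
    A = A ^ T
    return A

x, r = secrets.randbits(K), secrets.randbits(K)
assert (boolean_to_arithmetic(x ^ r, r) + r) & MASK == x   # x is recovered as A + r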
From the Arithmetic Masking to the Boolean Masking
To calculate x′=(A+r)⊕r, the following algorithm is used:
“ArithmeticToBoolean” Algorithm
□ Input: (A, r) such that x=A+r.
□ Output: (x′, r) such that x=x′⊕r.
Initialize Γ at a random value γ
□ T←2.Γ
□ x′←Γ⊕r
□ Ω←Γ∧x′
□ x′←T⊕A
□ Γ←Γ⊕x′
□ Γ←Γ∧r
□ Ω←Ω⊕Γ
□ Γ←T∧A
□ Ω←Ω⊕Γ
□ FOR k=1 to K−1
☆ Γ←T∧r
☆ Γ←Γ⊕Ω
☆ T←T∧A
☆ Γ←Γ⊕T
☆ T←2.Γ
□ ENDFOR
□ x′←x′⊕T
The “ArithmeticToBoolean” algorithm uses 3 auxiliary variables (T, Ω and Γ), 1 call to the random generator, and (5K+5) elementary operations (more precisely (2K+4) “XORs,” (2K+1) “ANDs” and K “shift lefts”).
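The same kind of transcription for this second conversion (again a Python sketch with K = 32; each loop iteration performs exactly the five operations counted above):

import secrets

K = 32
MASK = (1 << K) - 1

def arithmetic_to_boolean(A, r):
    """Given x = (A + r) mod 2**K, return x_prime such that x = x_prime XOR r."""
    gamma = secrets.randbits(K)
    T = (2 * gamma) & MASK
    x_prime = gamma ^ r
    omega = gamma & x_prime
    x_prime = T ^ A
    gamma = gamma ^ x_prime
    gamma = gamma & r
    omega = omega ^ gamma
    gamma = T & A
    omega = omega ^ gamma
    for _ in range(K - 1):
        gamma = T & r
        gamma = gamma ^ omega
        T = T & A
        gamma = gamma ^ T
        T = (2 * gamma) & MASK
    return x_prime ^ T

x, r = secrets.randbits(K), secrets.randbits(K)
assert arithmetic_to_boolean((x - r) & MASK, r) == x ^ r   # x' XOR r recovers x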
As for the number of random numbers involved in the method according to the invention, it is noted that there may be one or several of them per variable, and in the case of several variables, there
will generally be several random numbers, respectively associated with said variables.
FIG. 1 illustrates the general configuration of a smart card 1. It includes an information processing means or CPU 2, information storage means 3, 4, 5 of various types (RAM, EEPROM, ROM), input/
output means 6 that allow the card to cooperate with a card reading terminal, and a bus 7 that allows these various elements to dialog with one another. The aforementioned conversion means capable of
performing the Boolean and arithmetic operations specifically include at least one program as described herein and stored in the information storage means 3, 4, 5.
FIG. 2 is a graphical representation of the Boolean to Arithmetic algorithm, which includes the exclusive OR 202 and subtraction 204 operators.
FIG. 3 is a graphical representation of the Arithmetic to Boolean algorithm, which includes the shift left 302 (multiply by 2), AND 304, and the exclusive OR 202 operators.
Reference to the following publications will provide a more thorough understanding of the prior art.
[1] Carolynn Burwick, Don Coppersmith, Edward D'Avignon, Rosario Gennaro, Shai Halevi, Charanjit Jutla, Stephen M. Matyas, Luke O'Connor, Mohammad Peyravian, David Safford and Nevenko Zunic, “MARS—A
Candidate Cipher for AES,” Proposal for the AES, June 1998. Available at http://www.research.ibm.comlsecurity/mars.pdf
[2] Suresh Chari, Charantjit S. Jutla, Josyula R. Rao and Pankaj Rohatgi, “Towards Sound Approaches to Counteract Power-Analysis Attacks,” in Proceedings of Advances in Cryptology—CRYPTO '99,
Springer-Verlag, 1999, pp. 398-412.
[3] Jean-Sébastien Coron and Louis Goubin, “On Boolean and Arithmetic Masking against Differential Power Analysis,” in Proceedings of Workshop on Cryptographic Hardware and Embedded Systems,
Springer-Verlag, August 2000.
[4] Louis Goubin and Jacques Patarin, “DES and Differential Power Analysis—The Duplication Method,” in Proceedings of Workshop on Cryptographic Hardware and Embedded Systems, Springer-Verlag, August
1999, pp. 158-172.
[5] Paul Kocher, Joshua Jaffe and Benjamin Jun, “Introduction to Differential Power Analysis and Related Attacks,” http://www.cryptography.com/dpa/technical, 1998.
[6] Paul Kocher, Joshua Jaffe and Benjamin Jun, “Differential Power Analysis,” in Proceedings of Advances in Cryptology—CRYPTO '99, Springer-Verlag, 1999, pp. 388-397.
[7] Xuejia Lai and James Massey, “A Proposal for a New Block Encryption Standard,” in Advances in Cryptology—EUROCRYPT '90 Proceedings, Springer-Verlag, 1991, pp. 389-404.
[8] Thomas S. Messerges, “Securing the AES Finalists Against Power Analysis Attacks,” in Proceedings of Fast Software Encryption Workshop 2000, Springer-Verlag, April 2000.
[9] Ronald L. Rivest, Matthew J. B. Robshaw, Ray Sidney and Yiqun L. Yin, “The RC6 Block Cipher,” v.1.1, Aug. 20, 1998. Available at ftp://ftp.rsasecurity.con/pub/rsalabs/aes/rc6v11.pdf
[10] Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall and Niels Ferguson, “Twofish: A 128-Bit Block Cipher,” Jun. 15, 1998, AES submission available at http://www.counterpane.com/ | {"url":"http://www.google.com.au/patents/US7334133?ie=ISO-8859-1","timestamp":"2014-04-18T20:48:29Z","content_type":null,"content_length":"85966","record_id":"<urn:uuid:7f05da2e-e065-4439-a756-027edf9bba4d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Stata HTML syntax highlighter in R
August 12, 2013
By Francis Smart
So I have been having difficulty getting my Stata code to look the way I want it to look when I post it to my blog. To alleviate this condition I have written an HTML encoder in R. I don't know much
about html so it is likely to be a little clunkier in terms of tags than it need be. It still needs some work but I thought to post what I have so far in case others would like to use the code to
format their Stata code or modify it to format any language of their choosing.
I would like to build a Shiny app in which all the user need do is paste the code and submit it. But that will be for a future post. Here is the code and the example using my Stata post from July
30th. You can also find the code on github. Please feel free to submit possible solutions to the two technical hurdles I discuss in my code (the inability to find and format numbers and the
difficulty of finding and formatting punctuation).
# A Stata HTML formatter in R
# Load up your Stata do file.
txt <- readLines(
# First substitute out all of the < and > which can be misinterpreted as
# tags in HTML.
txt <- gsub("<","<",txt)
txt <- gsub("<","&lt;",txt)
txt <- gsub(">","&gt;",txt)
comment.start <- '<span style="color: #669933">'
comment.end <- '</span>'
# I would like to auto-format all numbers but I have not yet been able to figure
# out how to do this.
num.start <- '<span style="color: #990000"><b>'
num.end <- '</b></span>'
punc.start <- '<span style="color: #0000FF">'
punc.end <- '</span>'
command1.start <- '<span style="color: #0000CC"><b>'
command1.end <- '</b></span>'
command2.start <- '<span style="color: #9900FF">'
command2.end <- '</span>'
command3.start <- '<span style="color: #990033">'
command3.end <- '</span>'
# I am not sure where exactly I got this
stata.commands1 <- unlist(strsplit(readLines(
, split=" "))
stata.commands2 <- unlist(strsplit(readLines(
, split=" "))
stata.commands3 <- unlist(strsplit(readLines(
, split=" "))
punc <- unlist(strsplit(readLines(
"https://raw.github.com/EconometricsBySimulation/RFormatter/master/Stata/Punc.txt") , split=" "))
# I want to figure out how to highlight the punctuation as well but I am having trouble
# with that.
# for (v in punc) txt<- gsub(v,
# paste0(punc.start,v,punc.end), txt)
# Create a vector to tell R to ignore entire lines.
comment <- (1:length(txt))*0
# '*' Star comment recognizer
for (i in grep("[:*:]", txt)) {
# Break each line to discover is the first symbol which is not a space is a *
txt2 <- strsplit(txt[i], split=" ")[[1]]
if (txt2[txt2!=""][1]=="*") {
txt.rep <- paste(c(comment.start,txt[[i]],comment.end), collapse="")
txt[[i]] <- txt.rep
comment[i] <- 1
# '//' Comment recognizer
for (i in (grep("//", txt))) if (comment[i]==0) {
txt2 <- strsplit(txt[i], split=" ")[[1]]
comment.place <- grep("//", txt2)[1]-1
txt.rep <- paste(c(txt2[1:comment.place], comment.start,
txt2[-(1:comment.place)],comment.end), collapse=" ")
txt[[i]] <- txt.rep
# Format stata commands that match each list
# "\\<",v,"\\>" ensures only entire word matches
# are used.
for (v in stata.commands1) txt[comment==0]<-
  gsub(paste0("\\<",v,"\\>"), paste0(command1.start,v,command1.end), txt[comment==0])
for (v in stata.commands2) txt[comment==0]<-
  gsub(paste0("\\<",v,"\\>"), paste0(command2.start,v,command2.end), txt[comment==0])
for (v in stata.commands3) txt[comment==0]<-
  gsub(paste0("\\<",v,"\\>"), paste0(command3.start,v,command3.end), txt[comment==0])
# This is my attempt at highlighting all numbers that are not words.
# It did not work.
# stackoverflow topic: http://stackoverflow.com/questions/18160131/replacing-numbers-r-regular-expression
# txt <- gsub(".*([[:digit:]]+).*", paste0(num.start,"\\1",num.end), txt)
# Add tags to the end and beginning to help control the general format.
txt <- c('<pre><span style="font-family: monospace',txt,
'\nFormatted By <a href="http://www.econometricsbysimulation.com">EconometricsbySimulation.com</a>',
# Copy formatted HTML to the clipboard.
writeClipboard(paste(txt, collapse="\n"))
Formatted by Pretty R at inside-R.org
Stata code formatting example:
set obs 4000
gen id = _n
gen eta1 = rnormal()
gen eta2 = rnormal()
* Generate 5 irrelevant factors that might affect each of the
* different responses on the pretest
gen f1 = rnormal()
gen f2 = rnormal()
gen f3 = rnormal()
gen f4 = rnormal()
gen f5 = rnormal()
* Now let's apply the treatment
expand 2, gen(t) // double our data
gen treat=0
replace treat=1 if ((id<=_N/4)&(t==1))
* Now let's generate our changes in etas
replace eta1 = eta1 + treat*1 + t*.5
replace eta2 = eta2 + treat*.5 + t*1
* Finally we generate out pre and post test responses
gen v1 = f1*.8 + eta1*1 + eta2*.4 // eta1 has more loading on
gen v2 = f2*1.5 + eta1*1 + eta2*.3 // the first few questions
gen v3 = f3*2 + eta1*1 + eta2*1
gen v4 = f4*1 + eta1*.2 + eta2*1 // eta2 has more loading on
gen v5 = f5*1 + eta2*1 // the last few questions
* END Simulation
* Begin Estimation
sem (L1 -> v1 v2 v3 v4 v5) (L2 -> v1 v2 v3 v4 v5) if t==0
predict L1 L2, latent
sem (L1 -> v1 v2 v3 v4 v5) (L2 -> v1 v2 v3 v4 v5) if t==1
predict L12 L22, latent
replace L1 = L12 if t==1
replace L2 = L22 if t==1
* Now let's see if our latent predicted factors are correlated with our true factors.
corr eta1 eta2 L1 L2
* We can see already that we are having problems.
* I am no expert on SEM so I don't really know what is going wrong except
* that eta1 is reasonably highly correlated with L1 and L2 and
* eta2 is less highly correlated with L1 and L2 equally each
* individually, which is not what we want.
* Well too late to stop now. Let's do our diff in diff estimation.
* In this case we can easily accomplish it by generating one more variable.
* Let's do a seemingly unrelated regression form to make a single joint estimator.
sureg (L1 t id treat) (L2 t id treat)
* Now we have estimated the effect of the treatment given a control for the
* time effect and individual differences. Can we be sure of our results?
* Not quite. We are treating L1 and L2 like observed variables rather than
* random variables we estimated. We need to adjust out standard errors to
* take this into account. The easiest way though computationally intensive is
* to use a bootstrap routine.
* This is how it is done. Same as above but we will use temporary variables.
cap program drop SEMdnd
program define SEMdnd
tempvar L1 L2 L12 L22
sem (L1 -> v1 v2 v3 v4 v5) (L2 -> v1 v2 v3 v4 v5) if t==0
predict `L1' `L2', latent
sem (L1 -> v1 v2 v3 v4 v5) (L2 -> v1 v2 v3 v4 v5) if t==1
predict `L12' `L22', latent
replace `L1' = `L12' if t==1
replace `L2' = `L22' if t==1
sureg (`L1' t id treat) (`L2' t id treat)
drop `L1' `L2' `L12' `L22'
SEMdnd // Looking good
* This should do it though I don't have the machine time available to wait
* for it to finish.
bs , rep(200) cluster(id): SEMdnd
Formatted By EconometricsbySimulation.com
[Numpy-discussion] New python module to simulate arbitrary fixed and infinite precision binary floating point
Rob Clewley rob.clewley@gmail....
Sun Aug 10 15:34:34 CDT 2008
Dear Pythonistas,
How many times have we seen posts recently along the lines of "why is
it that 0.1 appears as 0.10000000000000001 in python?" that lead to
posters being sent to the definition of the IEEE 754 standard and the
decimal.py module? I am teaching an introductory numerical analysis
class this fall, and I realized that the best way to teach this stuff
is to be able to play with the representations directly, in particular
to be able to see it in action on a simpler system than full 64-bit
precision, especially when str(f) or repr(f) won't show *all* of the
significant digits stored in a float. The decimal class deliberately
avoids binary representation issues, and I can't find what I want elsewhere.
Consequently, I have written a module to simulate the machine
representation of binary floating point numbers and their arithmetic.
Values can be of arbitrary fixed precision or infinite precision,
along the same lines as python's in-built decimal class. The code is
here: http://www2.gsu.edu/~matrhc/binary.html
The design is loosely based on that of the decimal module, although it
doesn't get into threads, for instance. You can play with different
IEEE 754 representations with different precisions and rounding modes,
and compare with infinite precision Binary numbers. For instance, it
is easy to learn about machine epsilon, representation/rounding error
using a much simpler format such as a 4-bit exponent and 6-bit
mantissa. Such a format is easily defined in the new module and can be
manipulated easily:
>>> context = define_context(4, 6, ROUND_DOWN)
>>> zero = context(0)
>>> zero
Binary("0", (4, 6, ROUND_DOWN))
>>> print zero # sign, characteristic, significand bits
>>> zero.next()
Binary("0.001E-9", (4, 6, ROUND_DOWN))
>>> print zero.next()
>>> largest_denormalized = context('0 0000 111111') # direct spec of the sign, characteristic, and significand bits
>>> largest_denormalized
Binary("0.111111E-6", (4, 6, ROUND_DOWN))
>>> largest_denormalized.as_decimal()
>>> n01 = context(0.1) # nearest representable is actually stored
>>> print n01, " rounded to ", n01.as_decimal()
0 0011 100110 rounded to 0.099609375
>>> Binary('-10111.0000001').as_decimal()
>>> Binary('-10111.0000001', context).as_decimal() # not enough precision in this context
>>> diff = abs(Binary('-10111.0000001') - Binary('-10111.0000001', context))
>>> diff
Binary("0.1E-6", (4, 6, ROUND_DOWN))
The usual arithmetic operations are permitted on these objects, as
well as representations of their values in decimal or binary form.
Default contexts for half, single, double, and quadruple IEEE 754
precision floats are provided. Binary integer classes are also
provided, and some other utility functions for converting between
decimal and binary string representations. The module is compatible
with the numpy float classes and requires numpy to be installed.
The source code is released under the BSD license, but I am amenable
to other licensing ideas if there is interest in adapting the code for
some other purpose. Full details of the functionality and known issues
are in the module's docstring, and many examples of usage are in the
accompanying file binary_tests.py (which also acts to validate the
common representations against the built-in floating point types). I
look forward to hearing feedback, especially in case of bugs or
suggestions for improvements.
Robert H. Clewley, Ph. D.
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA
tel: 404-413-6420 fax: 404-413-6403
Percent Signal Change FAQ
Frequently Asked Questions - Percent Signal Change
Check out RoisFaq for more info about region-of-interest analysis in general...
1. What’s the point of looking at percent signal change? When is it helpful to do that?
The original statistical analyses of functional MRI data, going way back to '93 or so, were based exclusively on intensity changes. It was clear from the beginning of fMRI studies that raw intensity
numbers wouldn't be directly comparable across scanners or subjects or even sessions - average means of each of those things varies widely and arbitrarily. But simply looking at how much the
intensity in a given voxel or region jumped in one condition relative to some baseline seemed like a good way to look at how big the effect of the condition was. So early block experiments relied on
averaging intensity values for a given voxel in the experimental blocks, doing the same for the baseline block, and comparing the two of 'em. Relatively quickly, fancier forms of analysis became
available, and it seemed obvious that correcting that effect size by its variance was a more sensitive analysis than looking at it raw - and so t-statistics came into use, and the general linear
model, and so forth.
So why go back to percent signal change? For block experiments, there are a couple reasons, but basically percent signal change serves the same function as beta weights might (see RoisFaq for more on
them): a numerical measure of the effect size. Percent signal change is a lot more intuitive a concept than parameter weights are, which is nice, and many people feel that looking at a raw percent
signal change can get you closer to the data than looking at some statistical measure filtered through many layers of temporal preprocessing and statistical evaluation.
For event-related experiments, though, there's a more obvious advantage: time-locked averaging. Analyzing data in terms of single events allows you to create the timecourse of the average response to
a single event in a given voxel over the whole experiment - and timecourses can potentially tell you something completely different than beta weights or contrasts can. The standard general linear
model approach to activation assumes a shape for the hemodynamic response, and tests to see how well the data fit that model, but using percent signal change as a measure lets you actually go and see
the shape of the HRF for given conditions. This can potentially give you all kinds of new information. Two voxels might both be identified as "active" by the GLM analysis, but one might have an onset
two seconds before the next. Or one might have a tall, skinny HRF and one might have a short but wide HRF. That sort of information may shed new light on what sort of processing different areas are
engaging in. Percent signal change timecourses in general also allow you to validate your assumptions about the HRF, correlate timecourses from one region with those from another, etc. And, of
course, the same argument about percent signal change being somehow "closer" to the data still applies.
Timecourses are rarely calculated for block-related experiments, as it's not always clear what you'd expect to see, but for event-related experiments, they're fast becoming an essential element of a standard analysis.
2. How do I find it?
Good question, and very platform dependent. In AFNI and BrainVoyager, whole-experiment timecourses are easily found by clicking around, and in the Gablab the same is available for SPM with the
Timeseries Explorer. Peristimulus timecourses, though, usually require some calculation. In SPM, you can get fitted responses through the usual results panel, using the plot command, but those are
in arbitrary units and often heavily smoothed relative to the real data. The simplest way these days for SPM99 is to use the Gablab Toolbox's roi_percent code. Check out RoiPercent for info about
that function. That creates timecourses averaged over an ROI for every condition in your experiment, with a variety of temporal preprocessing and baseline options. In SPM2, the new Gablab
roi_deconvolve is sort of working, although it's going to be heavily updated in coming months. It's based off AFNI's 3dDeconvolve function, which is the newest way to get peristimulus timecourses in
AFNI. That's based on a finite impulse response (FIR) model (more on those below). BrainVoyager's ROI calculations will also automatically run an FIR model across the ROI for you.
3. How do those timecourse programs work?
The simplest way to find percent signal change is perfectly good for some types of experiments. The basic steps are as follows:
• Extract a timecourse for the whole experiment for your given voxel (or extract the average timecourse for a region).
• Choose a baseline (more on that below) that you'll be measuring percent signal change from. Popular choices are "the mean of the whole timecourse" or "the mean of the baseline condition."
• Divide every timepoint's intensity value by the baseline, multiply by 100, and subtract 100, to give you a whole-experiment timecourse in percent signal change.
• For each condition C, start at the onset of each C trial. Average the percent signal change values for all the onsets of C trials together.
• Do the same thing for the timepoint after the onset of each C trial, e.g., average together the onset + 1 timepoint for all C trials.
• Repeat for each timepoint out from the onset of the trial, out to around 30 seconds or however long an HRF you want to look at.
You'll end up with an average peristimulus timecourse for each condition, and even a timecourse of standard deviations/confidence intervals if you like - enough to put confidence bars on your average
timecourse estimate. This is the basic method, and it's perfect for long event-related experiments - where the inter-trial interval is at least as long as the HRF you want to estimate, so every
experimental timepoint is included in one and only one average timecourse.
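To make the recipe concrete, here is a minimal NumPy sketch of that simple time-locked averaging. Every name in it (peristimulus_average, onsets, tr, window_s) is invented for illustration, it assumes a single session with onset times given in seconds, and it is not the code any particular package actually uses.

import numpy as np

def peristimulus_average(timecourse, onsets, tr=2.0, window_s=30.0, baseline=None):
    """Average a percent-signal-change timecourse locked to trial onsets."""
    timecourse = np.asarray(timecourse, dtype=float)
    if baseline is None:
        baseline = timecourse.mean()              # default baseline: the session mean
    # Convert the whole run to percent signal change relative to the baseline.
    psc = 100.0 * (timecourse / baseline - 1.0)
    n_points = int(round(window_s / tr))          # length of the peristimulus window
    onset_scans = np.round(np.asarray(onsets) / tr).astype(int)
    # Keep only trials whose full window fits inside the run.
    onset_scans = onset_scans[onset_scans + n_points <= len(psc)]
    trials = np.stack([psc[s:s + n_points] for s in onset_scans])
    # Mean and standard deviation at each peristimulus timepoint.
    return trials.mean(axis=0), trials.std(axis=0, ddof=1)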
This method breaks down, though, with short ISIs - and those are most experiments these days, since rapid event-related designs are hugely more efficient than long event-related designs. If one trial
onsets before the response of the last one has faded away, then how do you know how much of the timepoint's intensity is due to the previous trial and how much due to the current trial? The simple
method will result in timecourses that have the contributions of several trials (probably of different trial types) averaged in, and that's not what you want. Ideally, you'd like to be able to run
trials with very short ISIs, but come up with peristimulus timecourses showing what a particular trial's response would have been had it happened in isolation. You need to be able to deconvolve the
various contributions of the different trial types and separate them into their component pieces.
Fortunately, that's just what AFNI's 3dDeconvolve, BrainVoyager QX, and the Gablab's roi_deconvolve all do. SPM2 also allows it directly in model estimation, and Russ Poldrack's toolbox allows it to
some degree, I believe. They all use basically the same tool - the finite impulse response model.
4. What's a finite impulse response model?
Funny you should ask. The FIR model is a modification of the standard GLM which is designed precisely to deconvolve different conditions' peristimulus timecourses from each other. The main
modification from the standard GLM is that instead of having one column for each effect, you have as many columns as you want timepoints in your peristimulus timecourse. If you want a 30-second
timecourse and have a 3-second TR, you'd have 10 columns for each condition. Instead of having a single model of activity over time in one column, such as a boxcar convolved with a canonical HRF, or
a canonical HRF by itself, each column represents one timepoint in the peristimulus timecourse. So the first column for each condition codes for the onset of each trial; it has a single 1 at each TR
that condition has a trial onset, and zeros elsewhere. The second column for each condition codes for the onset + 1 point for each trial; it has a single 1 at each TR that's right after a trial
onset, and zeros elsewhere. The third column codes in the same way for the onset + 2 timepoint for each trial; it has a single 1 at each TR that's two after a trial onset, and zeros elsewhere. Each
column is filled out appropriately in the same fashion.
With this very wide design matrix, one then runs a standard GLM in the multiple regression style. Given enough timepoints and a properly randomized design, the design matrix then assigns beta weights
to each column in the standard way - but these beta weights each represent activity at a certain temporal point following a trial onset. So for each condition, the first column tells you the effect
size at the onset of a trial, the second column tells you the effect size one TR after the onset, the third column tells you the effect size two TRs after the onset, and so on. This clearly
translates directly into a peristimulus timecourse - simply plot each column's beta weight against time for a given condition, and voila! A nice-looking timecourse.
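Here is a bare-bones NumPy illustration of the FIR idea: build one indicator column per peristimulus timepoint per condition, append a minimal baseline model (a constant plus a linear trend), and read the peristimulus timecourses straight off the least-squares betas. This is a toy sketch with invented names - no temporal filtering, no autocorrelation correction - and it is not how 3dDeconvolve or SPM implement it internally.

import numpy as np

def fir_design(onsets_by_condition, n_scans, tr=2.0, window_s=30.0):
    """One indicator column per lag per condition, plus constant and linear drift."""
    n_lags = int(round(window_s / tr))
    cols = []
    for onsets in onsets_by_condition:            # one list of onset times per condition
        onset_scans = np.round(np.asarray(onsets) / tr).astype(int)
        for lag in range(n_lags):                 # column for onset, onset + 1 TR, ...
            col = np.zeros(n_scans)
            idx = onset_scans + lag
            col[idx[idx < n_scans]] = 1.0
            cols.append(col)
    cols.append(np.ones(n_scans))                 # session constant
    cols.append(np.linspace(-1, 1, n_scans))      # linear drift
    return np.column_stack(cols), n_lags

def fir_timecourses(y, X, n_lags, n_conditions):
    """Each condition's betas, plotted against lag, are its estimated timecourse."""
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    return betas[: n_lags * n_conditions].reshape(n_conditions, n_lags)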
FIR models rely crucially on the assumption that overlapping HRFs add up in linear fashion, an assumption which seems valid for most tested areas and for most inter-trial intervals down to about 1
sec or so. These timecourses can have arbitrary units if they're used to regress on regular intensity data, but if you convert your voxel timecourses into percent signal change before they're input
to the FIR model, then the peristimulus timecourses you get out will be in percent signal change units. That's the tack taken by the Gablab new roi_percent. Some researchers have chosen to ignore the
issue and simply report the arbitrary intensity units for their timecourses.
By default, FIR models include some kind of baseline model - usually just a constant for a given session and a linear trend. That corresponds to choosing a baseline for the percent signal change of
simply the session mean (and removing any linear trend). Most deconvolution programs include the option, though, to add other columns to the baseline model, so you could choose the mean of a given
condition as your baseline.
There are a lot of other issues in FIR model creation - check out the AFNI 3dDeconvolve model for the basics and more.
5. What are temporal basis function models? How do they fit in?
Basis function models are a sort of transition step, representing the continuum between the standard, canonical-HRF, GLM analysis, and the unconstrained FIR model analysis. The standard analysis
assumes an exact form for the HRF you're looking for; the FIR places no constraints at all on the HRF you get. But sometimes it's nice to have some kinds of constraints, because it's possible (and
often happens) that the unconstrained FIR will converge on a solution that doesn't "look" anything like an HRF. So maybe you'd like to introduce certain constraints on the type of HRFs you'll accept.
You can do that by collapsing the design matrix from the FIR a little bit, so each column models a certain constrained fragment of the HRF you'd like to look for - say, a particular upslope, or a
particular frequency signature. Then the beta weight from the basis function model represents the effect size of that part of the HRF, and you can multiply the fragment by the beta weight and sum all
the fragments from one condition to make a nice smooth-looking (hopefully) HRF.
Basis function models are pretty endlessly complicated, and the interested reader is referred to the papers by Friston, Poline, etc. on the topic - see the Friston et al. paper "Event-related fMRI,"
here: ContrastsPapers.
6. How do you select a baseline for your timecourse? What are pros and cons of possible options? Do some choices make particular comparisons easier or harder?
Good question. Choosing a particular baseline places a variety of constraints on the shape of possible HRFs you'll see. The most popular option is usually to simply take the mean intensity of the
whole timecourse - the session mean. The problem with that as a baseline is that you're necessitating that there'll be as much percent signal change under the baseline as over it. If activity is at
its lowest point during the inter-trial interval or just before trial onset, then, that may lead to some funny effects, like the onset of a trial starting below baseline, and dramatic undershoots. As
well, if you've insufficiently accounted for drifts or slow noise across your timecourse, you may overweight some parts of the session at the expense of others, depending on what shape the drift has.
Alternatively, you could choose to have the mean intensity during a certain condition be the baseline. This is great if you're quite confident there's not much response happening during that
condition, but if you're not, be careful. Choosing another condition as the baseline essentially calculates what the peristimulus timecourse of change is between the two conditions, and if there's
more response at some voxels than you thought in the baseline condition, you may seriously underestimate real activations. Even if you pick up a real difference between them, the difference may not
look anything like an HRF - it may be constant, or gradually increase over the whole 30 seconds of timecourse. If you're interested in a particular difference between two conditions, this is a great
option; if you're interested in seeing the shape of one condition's HRF in isolation, it's iffier.
With long event-related experiments, one natural choice is the mean intensity in the few seconds before a trial onset - to evaluate each trial against its own local baseline. With short ISIs, though,
the response from the previous trial may not have decayed enough to show a good clean HRF.
7. What kind of filtering should I do on my timecourses?
Generally, percent signal analysis is subject to the same constraints in fMRI noise as the standard GLM, and so it makes sense to apply much of the same temporal filtering to percent signal analysis.
At the very least, for multi-session experiments, scaling each session to the same mean is a must, to allow different sessions to be averaged together. Linear detrending (or the inclusion of a
first-order polynomial in the baseline model, for the AFNI users) is also uncontroversial and highly recommended. Above that, high-pass filtering can help remove the low-frequency noise endemic to
fMRI and is highly-recommended - this would correspond to higher-order polynomials in the baseline model for AFNI, although studies have shown anything above a quadratic isn't super useful
(Skudlarski et. al, TemporalFilteringPapers). Low-pass filtering can smooth out your peristimulus timecourses, but can also severely flatten out their peaks, and has fallen out of favor in standard
GLM modeling; it's not recommended. Depending on your timecourse, outlier removal may make sense - trimming the extreme outliers in your timecourse that might be due to movement artifacts.
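As a rough illustration of the uncontroversial parts of that pipeline, here is a small NumPy sketch that scales one session to percent signal change around its mean and regresses out low-order polynomial drifts (order 1 is a linear detrend; order 2 adds a quadratic, which acts as a gentle high-pass filter). The function name and the default order are just assumptions for the example.

import numpy as np

def preprocess_timecourse(tc, order=2):
    """Percent signal change around the session mean, with polynomial drifts removed."""
    tc = np.asarray(tc, dtype=float)
    psc = 100.0 * (tc / tc.mean() - 1.0)
    t = np.linspace(-1, 1, len(psc))
    X = np.column_stack([t ** k for k in range(order + 1)])  # constant, linear, quadratic, ...
    beta, *_ = np.linalg.lstsq(X, psc, rcond=None)
    return psc - X @ beta        # residuals: drift-corrected percent signal change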
8. How can you compare time courses across ROIs? Across conditions? Across subjects? (peak amplitude? time to peak? time to baseline? area under curve?) How do I tell whether two timecourses are
significantly different? How can you combine several subjects’ ROI timecourses into an average? What’s the best way?
All of these are great questions, and unfortunately, they're generally open in the literature. FIR models generally allow contrasts to be built just as in standard GLM analysis, so you can easily do
t- or F-tests between particular aspects of an HRF or combinations thereof. But what aspects make sense to test? The peak value? The width? The area under the curve? Most of these questions aren't
super clear, although Miezin et. al (PercentSignalChangePapers) and others have offered interesting commentary on which parameters might be the most appropriate to test. Peak amplitude is the de
facto standard, but faced with questions like whether the tall/skinny HRF is "more" active than the short/fat HRF, we'll need a more sophisticated understanding to make sense of the tests.
As for group analysis of timecourses, that's another area where the literature hasn't pushed very far. A simple average of all subjects' condition A, for example, vs. all subjects' condition B may
well miss a subject-by-subject effect because of differing peaks and shapes of HRFs. That simple average is certainly the most widely used method, however, and so fancier methods may need some
justification. One fairly uncontroversial method might be simply analogous to the standard group analysis for regular design matrices - simply testing the distribution across subjects of the beta
weight of a given peristimulus timepoint, for example, or testing a given contrast of beta weights across subjects. | {"url":"http://mindhive.mit.edu/node/86","timestamp":"2014-04-18T18:16:25Z","content_type":null,"content_length":"26196","record_id":"<urn:uuid:7395e62f-f8f6-4047-af62-45cba6985d67>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
2009 UMD Programming Contest - Judges' Hints and Notes
Problem A - Power Tower
• Idea - raise the number 2 to any reasonable power.
• It was my high school pet project - Dennis. One idea was to create an array or vector of digits to represent the number. First, store the number 1 in the array cell with index 0. Then do n iterations, where on each iteration you multiply the number by two. While multiplying the number by two, keep track of any carry digits, as individual array cells overflow their maximum capacity (i.e., 10). A short sketch of this digit-array approach appears below.
• I imagined this to be an easy problem, actually the easiest. But I was biased. It is tricky, so indeed it turned out to be one of the slightly harder problems.
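• A minimal Python sketch of the digit-array idea (names and details are illustrative, not the official judges' solution):

def power_of_two_digits(n):
    # Decimal digits of 2**n, kept least-significant digit first, doubling n times.
    digits = [1]                      # start with the number 1 in cell 0
    for _ in range(n):
        carry = 0
        for i in range(len(digits)):
            d = digits[i] * 2 + carry
            digits[i] = d % 10        # keep one decimal digit per cell
            carry = d // 10           # propagate any overflow to the next cell
        while carry:
            digits.append(carry % 10)
            carry //= 10
    return "".join(str(d) for d in reversed(digits))

print(power_of_two_digits(10))        # -> 1024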
Problem B - DJ Zhzyatslya
• Idea - sum up two fractions and print them out.
• DJ Zhzyatslya is a real DJ, of Russian-Ukrainian origin. His name is hard to pronounce. Can you do it ?
• Note -- the input had spaces in it, so that it was easy to read in individual numbers without parsing the entire expression. A minimal sketch follows.
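• A minimal Python sketch using the standard fractions module; the values a, b, c, d stand in for whatever the actual input parsing produces:

from fractions import Fraction

a, b, c, d = 1, 2, 1, 3                                   # stand-ins for the parsed input
total = Fraction(a, b) + Fraction(c, d)                   # Fraction reduces to lowest terms
print("%d/%d" % (total.numerator, total.denominator))     # -> 5/6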
Problem C - Not a Composite Grid
• Idea - create a grid of primes satisfying a prime adjacency rule.
• This problem was thought up by dimkadimon at TopCoder. There are a few similar problems I know that do not involve a grid, but do involve prime numbers. This is a really cool adaptation of this type of problem. I like it!
• The algorithm for this type of problem is called depth-first search, or backtracking; a generic skeleton is sketched below.
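• A generic Python skeleton of that backtracking search; compatible(a, b) is a placeholder for whatever the prime adjacency rule actually is, and a "used" set would be needed if grid entries must be distinct:

def fill_grid(rows, cols, candidates, compatible):
    # Depth-first search / backtracking fill of a rows x cols grid.
    grid = [[None] * cols for _ in range(rows)]

    def ok(r, c, value):
        # Cells are filled in row-major order, so only the cells above and to the left are set.
        if r > 0 and not compatible(grid[r - 1][c], value):
            return False
        if c > 0 and not compatible(grid[r][c - 1], value):
            return False
        return True

    def solve(pos):
        if pos == rows * cols:
            return True
        r, c = divmod(pos, cols)
        for value in candidates:
            if ok(r, c, value):
                grid[r][c] = value
                if solve(pos + 1):
                    return True
                grid[r][c] = None      # undo and try the next candidate
        return False

    return grid if solve(0) else None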
Problem D - Dogs with Large Eyes
• Idea - find the numbers that repeat in the array, and print them out (see the sketch below).
• Hans Christian Andersen is a writer. He has a story called Tinder Box. It will answer the question about whether I was sane when writing up this problem :)
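• A tiny Python sketch with collections.Counter (the example array is made up):

from collections import Counter

values = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
repeats = sorted(v for v, n in Counter(values).items() if n > 1)
print(*repeats)                        # -> 1 3 5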
Problem E - Reversible Primes
• Idea - find reversible primes in a given interval.
• Most of the difficulties people have had were with reversing a number, and with coding a fast prime checker.
• Trick - those who pre-computed primes only up to 400,000 got a problem after reversing this prime --> 100,009. One possible approach is sketched below.
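• One possible Python approach - trial division instead of a fixed-size sieve, so reversed values that fall outside the input interval are still handled:

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def reversible_primes(lo, hi):
    out = []
    for n in range(lo, hi + 1):
        r = int(str(n)[::-1])          # reverse the decimal digits
        if is_prime(n) and is_prime(r):
            out.append(n)
    return out

print(reversible_primes(10, 40))       # -> [11, 13, 17, 31, 37]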
An O(m log n)-time algorithm for the maximal planar subgraph problem
Results 1 - 10 of 18
, 1999
Cited by 33 (0 self)
Given a finite, undirected, simple graph G, we are concerned with operations on G that transform it into a planar graph. We give a survey of results about such operations and related graph
parameters. While there are many algorithmic results about planarization through edge deletion, the results about vertex splitting, thickness, and crossing number are mostly of a structural nature.
We also include a brief section on vertex deletion. We do not consider parallel algorithms, nor do we deal with on-line algorithms.
- ALGORITHMICA , 1996
- Graphs Combin , 1998
Cited by 18 (0 self)
We give a state-of-the-art survey of the thickness of a graph from both a theoretical and a practical point of view. After summarizing the relevant results concerning this topological invariant of a
graph, we deal with practical computation of the thickness. We present some modifications of a basic heuristic and investigate their usefulness for evaluating the thickness and determining a
decomposition of a graph in planar subgraphs. Key words: Thickness, maximum planar subgraph, branch and cut 1 Introduction In VLSI circuit design, a chip is represented as a hypergraph consisting of
nodes corresponding to macrocells and of hyperedges corresponding to the nets connecting the cells. A chip-designer has to place the macrocells on a printed circuit board (which usually consists of
superimposed layers), according to several designing rules. One of these requirements is to avoid crossings, since crossings lead to undesirable signals. It is therefore desirable to find ways to
handle wi...
Cited by 8 (1 self)
In this paper we investigate the problem of identifying a planar subgraph of maximum weight of a given edge weighted graph. In the theoretical part of the paper, the polytope of all planar subgraphs
of a graph G is defined and studied. All subgraphs of a graph G, which are subdivisions of K 5 or K 3;3 , turn out to define facets of this polytope. We also present computational experience with a
branch and cut algorithm for the above problem. Our approach is based on an algorithm which searches for forbidden substructures in a graph that contains a subdivision of K 5 or K 3;3 . These
structures give us inequalities which are used as cutting planes.
- in WG , 1995
Cited by 7 (1 self)
A planarizing set of a graph is a set of edges or vertices whose removal leaves a planar graph. It is shown that, if G is an n-vertex graph of maximum degree d and orientable genus g, then there
exists a planarizing set of O( p dgn) edges. This result is tight within a constant factor. Similar results are obtained for planarizing vertex sets and for graphs embedded on nonorientable surfaces.
Planarizing edge and vertex sets can be found in O(n + g) time, if an embedding of G on a surface of genus g is given. We also construct an approximation algorithm that finds an O( p gn log g)
planarizing vertex set of G in O(n log g) time if no genus-g embedding is given as an input. 1 Introduction A graph G is planar if G can be drawn in the plane so that no two edges intersect. Planar
graphs arise naturally in many applications of graph theory, e.g. in VLSI and circuit design, in network design and analysis, in computer graphics, and is one of the most intensively studied class of
graphs [2...
- ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE , 1999
- Proc. 6th Annual ACM-SIAM Symp. on Discrete Algorithms , 1995
Cited by 4 (0 self)
Introduction The problem of extracting a maximum planar subgraph from a nonplanar graph, referred to as graph planarization, has important applications in circuit layout, facility layout, and
automated graphical display systems [F, TDB]. The problem is NP-hard [LG]; hence, research has focused on heuristics. There are several algorithms for finding maximal planar subgraphs [CHT, CNS, GT,
JTS, JM, K, OT]. However, there are graphs (see [CC]) for which the size ratio between two maximal planar subgraphs can be as small as 1=3. Hence, unless some precautions are taken to avoid the
extraction of small subgraphs, these heuristics have the potential for poor behavior. In this paper, we analyze the worst-case performance of some heuristics and show that there are graphs which can
cause each of them to achieve the 1=3 bound. However, a theoretical analysis of an algorithm's performance is often too pessimistic and somew
, 1996
Cited by 4 (3 self)
The problem of computing a maximal planar subgraph of a non-planar graph has been deeply investigated over the last 20 years. Several attempts have been tried to solve the problem with the help of
PQ-trees. The latest attempt has been reported by Jayakumar et al. (1989). In this paper we show that the algorithm presented by Jayakumar et al. is not correct. We show that it does not necessarily
compute a maximal planar subgraph and that the same holds for a modified version of the algorithm presented by Kant (1992). Our conclusions most likely suggest not to use PQ-trees at all for this
specific problem.
, 1998
Cited by 4 (3 self)
The problem of computing a maximal planar subgraph of a non planar graph has been deeply investigated over the last 20 years. Several attempts have been tried to solve the problem with the help of
PQ-trees. The latest attempt has been reported by Jayakumar et al. [10]. In this paper we show that the algorithm presented by Jayakumar et al. is not correct. We show that it does not necessarily
compute a maximal planar subgraph and we note that the same holds for a modified version of the algorithm presented by Kant [12]. Our conclusions most likely suggest not to use PQ-trees at all for
this specific problem. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=532212","timestamp":"2014-04-16T17:33:20Z","content_type":null,"content_length":"34986","record_id":"<urn:uuid:ab414cf7-8e65-4c5f-9278-156ef1368618>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00478-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reductive groups over non-archimedean local fields.
I want to know if connected reductive groups over non-archimedean local fields have a dense countable subset. I was thinking that this should be true because if $G(\mathbb{F})$ is such a group, where $\mathbb{F}$ is a non-archimedean local field, then there exists an embedding of $G(\mathbb{F})$ into some matrix group $\mathrm{GL}_n(\bar{\mathbb{F}})$. I then suppose that this map is continuous and open, right? And since $\bar{\mathbb{F}}$ has a dense countable set (or not?), $G(\mathbb{F})$ has a dense countable set. Can someone give me a reference for this result if it is true?
Thank you
nt.number-theory algebraic-groups
2 Answers
I think this is true for any affine variety $X$ over $F$: by the Noether normalization lemma it can be represented as a finite cover of an affine space, for which the statement is clearly true (then take the pre-image in $X$).
It can't be true for any affine X since X may not even have any F-points. – Peter McNamara Feb 1 '12 at 3:33
Well, but in that case it still has a dense countable subset by definition (I thought countable means actually no more than countable) – Alexander Braverman Feb 1 '12 at 4:04
This is true because $G$ is unirational; see, e.g., Springer, Linear Algebraic Groups, Corollary 13.3.9(ii).