How To Convert Atomic Percent To Weight Percent And Vice Versa
Wed, Aug 12, 2009
In the course of reviewing information on metals and minerals, I often come across chemical composition information that is written in terms of atomic percent, when I am actually more interested in
the weight percent values of the elements involved. A little less frequently I want to do things the other way around, and do a conversion from weight percent to atomic percent.
After searching online, I’ve noticed that what little conversion information is out there is unnecessarily complicated. So, I thought I’d share the simple but trusty formulae that I have had pinned on one wall or another for the past couple of decades…
A) Converting from atomic percent to weight percent:
1. For each element listed in the compound, multiply the atomic percent of the element by its atomic weight [the larger of the two principal numbers listed for each element in the standard periodic
table]. For each element, let’s call this value p.
2. Add all the values of p together, and let’s call this value p(Total).
3. Now, for each value of p, divide it by p(Total), to obtain w.
4. Multiplying the resulting values of w by 100 gives us the weight percent values, for each respective element in the starting compound.
Example: we encounter a neodymium-based permanent magnet material whose composition is listed, in atomic percent terms, as being 15% Nd, 77% Fe and 8% B.
• Following Step 1 above, we first obtain the atomic weights for each element. To two decimal places, these are: Nd – 144.24, Fe – 55.85 and B – 10.81.
• Completing Step 1 results in values of p(Nd) = 2163.60, p(Fe) = 4300.45 and p(B) = 86.48.
• Following Step 2, p(Total) has a value of 6550.53.
• Following Step 3, this results in values of w(Nd) = 0.33, w(Fe) = 0.66 and w(B) = 0.01.
• Following Step 4 results in the final values, in weight percent terms, of 33% Nd, 66% Fe and 1% B.
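The four steps above translate directly into a few lines of Python. This is my own generic sketch (the function name and the dict-based interface are my choices, not from the article):

```python
def atomic_to_weight_percent(atomic_pct, atomic_weights):
    """Convert atomic percent to weight percent (dicts keyed by element)."""
    # Step 1: p = atomic percent x atomic weight
    p = {el: atomic_pct[el] * atomic_weights[el] for el in atomic_pct}
    # Step 2: sum all the p values
    p_total = sum(p.values())
    # Steps 3-4: normalize and scale to percent
    return {el: 100 * v / p_total for el, v in p.items()}

# The Nd-Fe-B example from the article:
wt = atomic_to_weight_percent(
    {"Nd": 15, "Fe": 77, "B": 8},
    {"Nd": 144.24, "Fe": 55.85, "B": 10.81},
)
# Rounds to 33% Nd, 66% Fe and 1% B, matching the example.
```

Any set of elements works the same way; only the two input dicts change.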
B) Converting from weight percent to atomic percent:
1. For each element listed in the compound, divide the weight percent of the element by its atomic weight. For each element, let’s call this value m.
2. Add all the values of m together, and let’s call this value m(Total).
3. Now, for each value of m, divide it by m(Total), to obtain a.
4. Multiplying the resulting values of a by 100 gives us the atomic percent values, for each respective element in the starting compound.
Example: we encounter a samarium-based permanent magnet material whose composition is listed, in weight percent terms, as being 34% Sm and 66% Co.
• Following Step 1 above, we first obtain the atomic weights for each element. To two decimal places, these are: Sm – 150.35 and Co – 58.93.
• Completing Step 1 results in values of m(Sm) = 0.23 and m(Co) = 1.12.
• Following Step 2, m(Total) has a value of 1.35.
• Following Step 3, this results in values of a(Sm) = 0.17 and a(Co) = 0.83.
• Following Step 4 results in the final values, in atomic percent terms, of 17% Sm and 83% Co.
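Procedure B can be sketched the same way in Python (my own illustrative helper, not from the article):

```python
def weight_to_atomic_percent(weight_pct, atomic_weights):
    """Convert weight percent to atomic percent (dicts keyed by element)."""
    # Step 1: m = weight percent / atomic weight
    m = {el: weight_pct[el] / atomic_weights[el] for el in weight_pct}
    # Step 2: sum all the m values
    m_total = sum(m.values())
    # Steps 3-4: normalize and scale to percent
    return {el: 100 * v / m_total for el, v in m.items()}

# The Sm-Co example (atomic weights: Sm 150.35, Co 58.93):
at_pct = weight_to_atomic_percent(
    {"Sm": 34, "Co": 66},
    {"Sm": 150.35, "Co": 58.93},
)
# Rounds to 17% Sm and 83% Co, matching the example.
```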
There is one other scenario that we sometimes encounter, related to A) above, but which involves the chemical formula for a particular metallurgical phase:
C) Converting from chemical formula to weight percent:
1. For each element listed in the compound, multiply the number of atoms of the element by its atomic weight. For each element, let’s call this value r.
2. Add all the values of r together, and let’s call this value r(Total).
3. Now, for each value of r, divide it by r(Total), to obtain w.
4. Multiplying the resulting values of w by 100 gives us the weight percent values, for each respective element in the starting compound.
Example: we look to evaluate the main hard magnetic phase in neodymium-based permanent magnet material, whose chemical formula consists of 2 atoms of Nd, 14 atoms of Fe and 1 atom of B [i.e. the
so-called 2-14-1 stoichiometric composition].
• Following Step 1 above, we first obtain the atomic weights for each element. To two decimal places, these are: Nd – 144.24, Fe – 55.85 and B – 10.81.
• Completing Step 1 results in values of r(Nd) = 288.48, r(Fe) = 781.90 and r(B) = 10.81.
• Following Step 2, r(Total) has a value of 1081.19.
• Following Step 3, this results in values of w(Nd) = 0.27, w(Fe) = 0.72 and w(B) = 0.01.
• Following Step 4 results in the final values, in weight percent terms, of 27% Nd, 72% Fe and 1% B.
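Procedure C differs from A only in that Step 1 multiplies by the atom count rather than the atomic percent; a sketch (again my own hypothetical helper):

```python
def formula_to_weight_percent(atom_counts, atomic_weights):
    """Convert a chemical formula (atom counts) to weight percent."""
    # Step 1: r = number of atoms x atomic weight
    r = {el: atom_counts[el] * atomic_weights[el] for el in atom_counts}
    # Step 2: r(Total) is the formula weight
    r_total = sum(r.values())
    # Steps 3-4: normalize and scale to percent
    return {el: 100 * v / r_total for el, v in r.items()}

# The Nd2Fe14B (2-14-1) phase from the example:
wt = formula_to_weight_percent(
    {"Nd": 2, "Fe": 14, "B": 1},
    {"Nd": 144.24, "Fe": 55.85, "B": 10.81},
)
# Rounds to 27% Nd, 72% Fe and 1% B, matching the example.
```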
Increasing the number of significant figures in the various values will increase the accuracy of the calculations, but you’ll probably find that you don’t need to get much more detailed than I did in the examples above.
I hope that this is of some use to you; feel free to comment or suggest other topics for discussion or review.
22 Responses to “How To Convert Atomic Percent To Weight Percent And Vice Versa”
1. MB Says:
This was the best answer to my problem.
2. JKR Says:
It is superb…
3. Aww Says:
This was very useful! Thank You very much!! :)
4. maria Says:
This is the best solution to my homework problems that I am facing in my civil and materials engineering class. I have been looking online and through chemistry books for the last three hours. Thanks so much.
5. alex Says:
This is the best and easiest solution EVER presented to these problems.
6. Pierre Says:
Excellent work, Thank you
7. sen Says:
It’s very useful, thank you.
8. hector Says:
Thank you very much; this is something that will help me even more in my work, because I have been a metal sorter for more than 25 years. I know all types of metals, but I specialize in refractory metals. THANKS AGAIN. I was looking for this information for many years, so thank you very much. Sincerely
9. alex Says:
What if I have an alloy composite? Say I have an alloy made up of 1.5wt% Al2O3 and 98.5 wt% ZnO. How would I calculate the atomic percent of each element in this composite?
10. he he he Says:
simple way. chemist!!!!
any question about chemistry of NdFeB or SmCo ask uff or HEHEHE, true specialist
and very knowledgeable about these materials.
Nd: 2 × 144.24 = 288.48;  288.48 / 1081.121 = 0.266834147 → 26.68%
Fe: 14 × 55.85 = 781.83;  781.83 / 1081.121 = 0.723166047 → 72.32%
B:  1 × 10.811 = 10.811;  10.811 / 1081.121 = 0.009999806 → 1.00%
11. Murli Gopal Krishnamoorty Says:
It was possible to calculate a whole range of alloy compositions in a spreadsheet and plot ternary systems both in at.% and wt.%. Earlier I used to take the ratio of the number of atoms using Avogadro’s number and take the percentage, to be more systematic.
I cross checked with
Another useful link for plotting a ternary system is
Thanks and regards
12. Kashmira Tank Says:
I would like to ask one question: how can we convert at% into wt% for a chemical formula such as Ca10(PO4)6(OH)2? The example is given for Nd2Fe14B, but what about hydrogen, which cannot be detected in EDAX?
13. ?????? Says:
I am very grateful!
14. meena Says:
I would like to ask one question,how to calculate atomic ratio for metal
15. shanthini Says:
Very Nice!!!!
16. Sang Says:
I have a query. I have a protein based substance which has some amount of calcium carbonate in it. I know the weight percent (3.5 %) and atomic percent (4.4 %) of calcium in it. How will I
determine the total calcium carbonate present in my substance?
17. hana Says:
What if I wanted to prepare some nanomaterial that should consist of 99 wt.% ZnO and 1% Ag? How should I calculate how much starting material I need? Before this I used to determine the percentage by calculating the number of moles, but I’m not sure if it’s the right way.
18. Daisy Says:
Thanks!! It’s very useful information to have on hand.
19. Lim SL Says:
Thank you, Gareth! A very well written article, and very useful!!
Lim SL
20. k.s Says:
This is the best answer to my problem. Thank you so much.
21. wow news Says:
this is a very useful website!
22. 2014 Ford C Max Says:
If you desire to increase your familiarity only keep visiting this site and be updated with the most recent news update posted here.
Double Stub Matching Using Smith Charts
I'm going to class, and I'll ask my teacher. There is no sense in your spending any more time on this problem, so don't! I'll post the answer to my question after I find it, in case someone ever stumbles on this thread with the same question I had. OK, HERE IS THE ANSWER: My teacher accidentally gave an unsolvable question :P.
1. The problem statement, all variables and given/known data
A 50 ohm line is terminated in a 40 ohm load.
Two shorted stubs are to be used in parallel with the line to match the load and line. One stub should be located in parallel with the load, and the other should be located 1/4 wavelength from the load.
Find the input impedance of each stub.
3. The attempt at a solution
I know the circle I drew is rough-looking, but it gets the point across. I plotted the normalized impedance (0.8) and converted it to admittance. Next, I drew a unit circle shifted toward the load by .25 wavelengths. After that, I see no way I can add an imaginary admittance to my real 1.25 so that it will be on the shifted unit circle (in other words, so that when that total admittance is shifted .25 wavelengths toward the generator, it will be on the unit circle with 1 real, xL reactive).
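For anyone who lands on this thread later: the teacher's "unsolvable" verdict can be checked with a few lines of arithmetic. This sketch is my own (not from the thread) and assumes ideal lossless lines and the usual quarter-wave admittance inversion:

```python
# Normalized load admittance for a 40-ohm load on a 50-ohm line:
g_load = 1 / (40 / 50)          # = 1.25 (purely real)

# The first shorted stub at the load adds susceptance b:  y1 = g_load + 1j*b.
# A quarter-wave section inverts the normalized admittance:  y2 = 1 / y1.
# The second stub can only cancel the imaginary part, so matching requires
# Re(y2) = g_load / (g_load**2 + b**2) = 1, i.e. b**2 = g_load - g_load**2.
b_squared = g_load - g_load**2
print(b_squared)  # negative, so no real stub susceptance works: no match exists
```

With g_load > 1 the best achievable real part after the quarter-wave section is 1/g_load = 0.8, which never reaches 1; the load sits in the forbidden region for this stub spacing.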
Summary: Notations used by TPS and ETPS
[∼A] means "A is not true".
[A ∧ B] means "A and B".
[A ∨ B] means "A or B".
[A ⊃ B] means "A implies B".
[A ≡ B] means "A if and only if B".
[∀x A] means "For every x, A is true".
[∃x A] means "There exists an x such that A is true".
Bracket and Parenthesis Conventions
Outermost brackets and parentheses may be omitted.
Use the convention of association to the left. Thus,
A ∨ B ∨ C stands for [[A ∨ B] ∨ C].
A ∨ B ∨ C ∨ D stands for [[[A ∨ B] ∨ C] ∨ D].
A dot stands for a left bracket, whose mate is as far to the
right as is possible without altering the pairing of left and right
brackets already present.
A ∨ .B ∨ C stands for [A ∨ [B ∨ C]].
[A ∨ .B ∨ C] ∨ D stands for [[A ∨ [B ∨ C]] ∨ D].
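The association-to-left convention can be mechanized in a few lines. This toy Python sketch is my own illustration, not part of TPS or ETPS (the connective string is arbitrary):

```python
from functools import reduce

def left_assoc(terms, op):
    """Bracket a chain of terms according to association to the left."""
    return reduce(lambda acc, t: f"[{acc} {op} {t}]", terms)

print(left_assoc(["A", "B", "C"], "or"))       # [[A or B] or C]
print(left_assoc(["A", "B", "C", "D"], "or"))  # [[[A or B] or C] or D]
```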
When the relative scopes of several connectives of different kinds
Sparse Stochastic Finite-State Controllers for POMDPs
Eric A. Hansen
Bounded policy iteration is an approach to solving infinite-horizon POMDPs that represents policies as stochastic finite-state controllers and iteratively improves a controller by adjusting the
parameters of each node using linear programming. In the original algorithm, the size of the linear programs, and thus the complexity of policy improvement, depends on the number of parameters of
each node, which grows with the size of the controller. But in practice, the number of parameters of a node with non-zero values is often very small, and it does not grow with the size of the
controller. To exploit this, we develop a version of bounded policy iteration that manipulates a sparse representation of a stochastic finite-state controller. It improves a policy in the same way,
and by the same amount, as the original algorithm, but with much better scalability.
Subjects: 1.11 Planning; 15.5 Decision Theory
Submitted: May 5, 2008
On the use and abuse of Bayesian modeling
Review: Jones, M., and Love, B. (2011, in press). “Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition.” Behavioral
and Brain Sciences.
When cognitive scientists build a formal theory, they must often decide whether to express it as, for instance, a “connectionist” or a “Bayesian” model. To an outsider to the field, it might seem these choices are inconsequential: if a theory is ultimately about the nature of human thought, what difference does the mathematical “language” it is expressed with make? Isn’t the more important question to ask if a theory tells us something useful about the mind?
However, as it turns out, the choice of mathematical formalism often means quite a lot, since it can greatly change what one learns from the model or what the model means.
Over the last couple years, there has been a large movement in the cognitive science community towards developing Bayesian models of cognition. To understand Bayesian probabilistic models and the
controversies surrounding their use, it will help to understand a little more about what they are.
Bayesian probabilistic models allow scientists and statisticians to develop models of the “rational” inferences learners should make based on a set of observations, given a mathematically precise
description of how the possible states of the world relate to those observations and what the learner’s prior beliefs are about those possible states. These are useful for two reasons:
1. They can be used to solve ordinary statistical problems (e.g., does smoking cause cancer?)
2. They can be used as ideal observer models, answering the question of how humans and animals “should” behave when asked to solve a particular cognitive problem.
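To make the formalism concrete, here is a minimal Python sketch of Bayes' rule applied to a toy version of question (1); every number below is invented purely for illustration:

```python
# Bayes' rule: P(h | data) is proportional to P(data | h) * P(h)
priors = {"smoker": 0.3, "non_smoker": 0.7}          # hypothetical prior beliefs
likelihoods = {"smoker": 0.8, "non_smoker": 0.2}     # made-up P(observation | h)

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
evidence = sum(unnormalized.values())                # P(observation)
posterior = {h: v / evidence for h, v in unnormalized.items()}

print(posterior["smoker"])  # 0.24 / 0.38, roughly 0.63
```

The rule itself is one line; as discussed below, the real modeling content lies in choosing the hypothesis space, the priors, and the likelihoods.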
Recently, a number of papers have proposed Bayesian models of various aspects of cognition, and given close fits to human data, argued that human behavior is therefore “rational”. This approach has
generated outcry from some who feel it encourages a preoccupation with rationality and mathematical formalisms, diverting attention away from the interesting psychological questions of how these
problems are solved by human and animal brains.
A recent paper in Behavioral and Brain Sciences by Matt Jones and Brad Love can be considered a comprehensive manifesto for this viewpoint. Jones and Love critique a perspective which they call “Bayesian Fundamentalism.” The central tenet of Bayesian Fundamentalism is the belief that human behavior can be explained entirely through rational analysis, given a correct probabilistic interpretation of the task environment. Under this view, there is no need to make reference to mechanistic explanations to explain behavior: since humans act rationally, a rational model will fully describe their behavior.
Jones and Love’s primary objections to this paradigm can be summarized as follows:
• Without a careful study of the environment and cognitive challenges that put our ancestors under evolutionary pressure, it is impossible to accurately specify the assumptions that should be built
into a model of a cognitive task. Therefore, the predictions of the models are highly unconstrained, and similarity to human behavior cannot be taken as evidence that humans behave rationally.
• Theories of cognition which have no predictions on an algorithmic or implementational level are fundamentally unsatisfying, and that many of the contributions of cognitive modeling to other
fields has been in the form of mechanistic predictions.
The authors call for a turn toward “Bayesian Enlightenment,” in which the algorithmic and implementational aspects of probabilistic models are taken seriously as having Psychological content.
We read a version of this paper in our recent lab meeting and our reactions were resolutely mixed. Some felt that the article did an excellent job pointing out theoretical excesses in the field,
while others felt that it was overly dismissive of the usefulness of showing how a problem could be incorporated into a Bayesian framework.
One source of frustration that Jones and Love were able to address effectively is the common conclusion among modelers that a good fit to human behavior by a Bayesian probabilistic model indicated
that human behavior is in some sense “rational.” As the authors make clear, a model cannot be considered a “rational” account of a cognitive process without a thorough analysis of the natural
environment and the cognitive challenges that our brains were evolved to solve (a level of analysis completely missing from recent Bayesian analyses of cognition).
Another important issue they address was the apparent lack of clarity concerning the psychological content of Bayesian models. Since Bayes’ rule itself is trivial, the content of a Bayesian model
rests almost entirely in the setting up of the hypothesis space and (often) the choice of an approximation algorithm. Bayesian theorists often attack process-level approaches (such as connectionist
models or other process-level accounts) for making a large number of “arbitrary” assumptions. However to the degree that assumptions about priors and the hypothesis space in a Bayesian model are also
arbitrary (i.e. not set based on an analysis of the evolutionary environment), then there is no real advantage to either approach. One way to ease this tension is to say that the key psychological
contribution of Bayesian probabilistic models is their specification of the hypothesis space, prior, and approximation/optimization algorithm (Jones and Love advocate this approach as “Enlightened Bayes”).
On the other hand, there was a real sense that the best Bayesian modelers, who in fact have greatly contributed to the wide-spread interest in these types of models, are interested in process-level
models (e.g., Vul, Daw, Steyvers, Griffiths, Goodman, etc…). What Bayesian wouldn’t welcome neuroscientific data that supported or could bear directly on their model? One real risk from the article
is confusing people about what current Bayesian models are actually about, by aligning them with this non-existent bogeyman. Oddly, everyone we could think of who might be a “Bayesian Fundamentalist” had also written compelling papers that Jones and Love would call “Enlightened Bayes”. Is this a paper stirring controversy with no real target?
Ultimately though, a great paper for debate, and will hopefully encourage everyone who works with cognitive models of every kind to think a little bit harder about what their models really mean. It’s
also pretty clearly written and might be a great place to start if you are interested in learning more about the value of cognitive models.
Melody Dye also has a fun post up about this article with a lot of colorful quotations.
Jones, M., and Love, B. (2011, in press). “Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition.” Behavioral and Brain
Cartoon by Daniele Quercia.
(This article was written with input and ideas from our lab)
i hate bayesian fundamentalists too. who are they exactly?
June 19th, 2011 at 12:56 am
I am a Bayesian statistician who proposes normative Bayesian models for inference and prediction. I hate to step into a crossfire between Bayesian Fundamentalism and Enlightenment in psychology, but two points came to mind when reading your excellent discussion.
First, Bayes theorem is “trivial” only because Thomas Bayes and Richard Price worked it out for us in the 18th century, and many great minds have elaborated it since then. I doubt that if we were
alive at the time, any of us would have beaten Price and Bayes to publication.
Second, Bayesian statisticians use the term “rational” in a very limited sense to describe a person’s preferences for a set of actions. If these preferences follow the von Neumann-Morgenstern-Savage
axioms, then a mathematical psychologist (do they still exist?) could assign numbers to a utility function on the space of consequences and to a probability function of the space of events such that
the subject’s preference orderings of actions corresponds to the mathematical psychologist’s ordering based on (numerical) expected utility for those actions.
“Irrational” preferences, such as intransitive orderings, violate these axioms. Consequently, there does not exist utility functions and probability measures that can reliably and consistently
measure this subject’s preferences. The mathematical psychologist will not be able to predict confidently this subject’s behavior in any choice scenario. Normative Bayesians are not making a value
judgment about this subject’s cognitive capacity; we merely note that this person falls outside our expertise. The good news for marketing managers and financial advisors is that this person makes an
excellent target for “extracting” consumer surplus.
A more relevant example for academic psychologists is that classical hypothesis testing violates the likelihood principle. Consequently, researchers who use p-values in decision making are
irrational, and journals that publish their results are also irrational.
Other uses of “rational” or “irrational” seem to be post hoc attempts to either defend or attack a person’s choice with value judgments that originate outside of the von Neumann-Morgenstern-Savage
axioms. This use of “rational” is good fun for kibitzers, especially those with grant money and undergraduate subject pools.
The downside is that normative Bayesians have been stigmatized for being small minded or mechanistic for positions about “rationality” that they have never taken. It is hard to certify a person as
rational without an extensive study of his or her preferences. Even then, one runs into major measurement error and model specification problems that can cause the rational/irrational result to break either way. Even the most careful study makes critical assumptions, such as time homogeneity of preferences or method invariance. Relaxing these assumptions can turn the irrational into the rational.
So what? Well, if I were King of Psychology (if I had one wish, it would not be that), I would decree that “rational” only refers to the axioms. Other value judgments should be called what they are.
Oh, I would also reject all articles that rely on p-values.
Peter Lenk
August 26th, 2013 at 11:47 am
Go Figure!
We end January with my last guest blogger,
Renee Goularte. (Notice that art is part of her last name.) She is a retired elementary school teacher/art educator, a working artist, and a writer. Her professional teaching career began in San
Jose, CA in 1988 in a third grade classroom. She soon moved to a 2/3 combination class and then spent four years teaching in a Multi-age 1-2-3 program, which she started with a colleague. Renee has
also taught a 4/5 combination class in addition to Kindergarten, and has worked specifically with GATE students, ELL students, and at-risk students. Before she retired in 2010, she taught art to
Kindergarten, and first and second graders. For her, teaching art has come full circle. In addition to her work in the classroom, Renee has written articles on teaching, written lessons
professionally, maintained a number of websites and blogs, and facilitated math, language, and art workshops for teachers and parents. She also has a Teachers Pay Teachers store where she offers many
creative products. Click under her picture to go there. Like myself, Renee believes in hands-on, active learning that touches all learning styles. From a personal experience, she knows visual
literacy is an important component of the curriculum, and that creativity can be fostered and nurtured.
In many of the art lessons I teach, I use math, especially math vocabulary. Sometimes this is incorporated when having students look at famous art works, especially (and most obviously) some abstract
art, but also (and less obviously) art that is more representational. When I ask students to tell me what they notice in works by Wassily Kandinsky or Pablo Picasso, many of the responses include basic
math vocabulary (square, triangle, line), but those responses might also include vocabulary that is more specific, words like acute, obtuse, parallelogram or isosceles.
Many of the lessons I teach to young children (I work mostly with primary-aged children) include the creation, manipulation, and use of different kinds of shapes and lines, whether they are drawing,
painting, or making collages. When appropriate, I like to remind the students that they are “doing math” as well as “making art”.
Here are a few of my favorite art-making activities with math connections.
Geometric Shape Collage
This is an activity that I have done with elementary students of all ages. For younger students, the directions are more general and fewer shapes are used. For older students, the directions and
requirements are more specific and complex. For example, younger students are asked to use one circle, two lines, three triangles, and four colors (for a little problem solving challenge), while
older students are asked to use one circle, two lines, three non-congruent triangles, four different quadrilaterals, and five colors. Directions can be altered to include geometry vocabulary as
desired, and there are many ways to extend the art lesson to incorporate more mathematics. For example, one can ask younger students to do a rubbing of their design and label all the shapes, or have
the older students calculate combined areas of the quadrilaterals and find the ratio of that area to the total area.
Geometric People
One of my standard lessons with second grade students, these figures are created entirely from geometric shapes (mostly rectangles). The lesson includes a discussion about our joints, some attention
to human body proportion, length and width, and the depiction of movement. When I have time, I begin with an introduction to the art of Keith Haring and a movement activity that requires students to
arrange their bodies in different poses.
Kandinsky-Inspired Abstract Design
This is one of my favorite “no prep needed” art activities for incorporating math vocabulary and most of the elements of design. All you need is white paper, assorted colored markers, and crayons. It
is adapted from an activity in the book “Drawing With Children” by Mona Brookes. I usually have students look at a Kandinsky print and tell me what they notice. Invariably, math vocabulary bubbles up:
acute angles, triangles, parallel lines, etc. Then I lead them through the drawing by having them draw one thing at a time: three dots anywhere, one line that goes off the edge of the paper, another
line parallel to the first, a third line that intersects the first two, etc. I have some standard directions for this activity that get varied now and then, according to what students are coming up
with. No matter what, the directions use lots of math vocabulary. They are asked to color it however they choose, using only one color per closed shape and leaving part of the composition white.
Later, they write about their art work, comparing it to Kandinsky’s work. These are always successful, colorful, and interesting!
Symmetrical Cityscapes
This is an easy, fun art activity that connects to mathematics with its focus on symmetry, proportion, and a little work with geometric shapes. It gives students the opportunity to be creative while
applying their knowledge of bilateral symmetry. I like to use construction paper crayons on black paper, but I’ve also used regular crayons on white paper. An even more artistic version of this
lesson has students do a watercolor wash for a sky, onto which the cut out skyline is glued. Students love this art lesson, and it is inherently successful; even if mistakes are made in the symmetry,
the end results are always beautiful.
There is no doubt artists use math all the time when creating art. Sometimes that math is obvious in the subject matter, and sometimes it is more subtle in the composition, but it’s nearly always
present. To teachers who insist that there is no time for art, I say “think math” and you can kill two birds with one stone!
All the art lessons described here are available in my Teachers Pay Teachers store where you will find lessons and lesson bundles such as Playing With Shapes, Art With Symmetry, Art With Patterns,
Exploring Lines and Shapes, GeomARTry, and much more! I also share many art ideas on my blog entitled: Creating Art With Kids. I hope you will check it out.
My guest blogger today is Barbara. Even though she is not a seller on Teachers Pay Teachers, we have been friends since 6th grade. She originally lived in England, and I was her pen pal, but then she
moved to my home town in Ohio where we became best friends. She teaches at the University of North Carolina. The college has a drop in "math lab," which is a service to students who would like help
with their math assignments. Barbara is one of five instructors who takes turns supervising the math lab and giving help. She finds it to be a very rewarding job. The courses she mainly helps with
are Statistics, Reality Math, Math for Teachers, Pre-Calculus through Calculus, a little Differential Equations, and occasionally some Physics. I think you will find Barbara's article on the magical
nines quite interesting.
As the story goes, one day Jack and Jill were walking up the hill to fetch a pail of water. Jack said to Jill, “Did you bring your calculator?” Jill acknowledged that she never left home without it.
Jack said, “Okay, Jill, now think of a fairly large whole number and reverse the digits. Then subtract the smaller number from the larger.” Using her calculator, Jill came up with an answer to which
Jack replied, “Now choose one of the digits in the answer and remember it. Then tell me the other digits, in any order, and I will tell you the remaining digit.”
“Okeydokey,” said Jill, “the other digits are 7, 2, 5, and 1.” Jack revealed that the remaining digit was three. “That’s right!” shouted Jill. In fact, she was so surprised that she fell down and
broke her crown.
?? How did Jack know Jill's number ??
The secret is in the properties of the number 9. Did you know if you multiply ANY whole number by 9, the sum of the digits of the answer will also be a multiple of 9?
*A number is a multiple of 9 if its digits add up to 9 or a multiple of 9. If the answer is not a one digit number, continue to add the digits to eventually arrive at just a one digit answer of nine.
(This is sometimes referred to as finding the digital root.)
Example: 175,439,826 is divisible by 9 because when you add all the digits, you get 45. (1 + 7 + 5 + 4 + 3 + 9 + 8 + 2 + 6 = 45) Adding the two digits of the answer, you get 9. (4 + 5 = 9) Nine is
the digital root of 175,439,826 so this number is divisible by 9 as well as three since three is a factor of 9.
Example: 53,872,091 is NOT divisible by 9. Adding the eight digits you get the sum of 35. Adding the digits of the answer, 3 and 5, you get 8. This tells you that if you divide the original number by
9, you will get an answer with a remainder of 8.
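The divisibility test described above is easy to verify in a few lines of code; the function name here is just for illustration.

```python
def digital_root(n: int) -> int:
    """Repeatedly sum the digits of n until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# 175,439,826 reduces to 9 (divisible by 9);
# 53,872,091 reduces to 8 (remainder 8 when divided by 9).
print(digital_root(175_439_826))  # 9
print(digital_root(53_872_091))   # 8
```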
Another interesting property of 9 is that if you choose any whole number, reverse its digits, and subtract the smaller number from the larger, the answer is always a multiple of 9.
Example: 5,132 – 2,315 = 2,817, which is divisible by 9. (2 + 8 + 1 + 7 = 18; 1 + 8 = 9)
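This reverse-and-subtract property is simple to check by brute force:

```python
# Check the claim for every whole number below 100,000: reversing the
# digits and subtracting the smaller from the larger gives a multiple of 9.
for n in range(100_000):
    diff = abs(n - int(str(n)[::-1]))
    assert diff % 9 == 0
print(5132 - 2315)  # 2817, whose digits sum to 18 -> 9
```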
So Jack knew that when Jill subtracted her numbers, the answer was a multiple of 9. And when she told him the digits 7, 2, 5, and 1, he knew that the remaining digit would have to be 3, so that all
the digits would add up to a multiple of 9 (in this case, 18).
Beware, for there is one condition when this trick might not go as smoothly. If the digits already add up to a multiple of 9, then the missing digit could be either a 0 or a 9. You could then claim that
your mental image is coming in fuzzy, and you can’t quite tell if it is a 0 or a 9.
Here is another math trick. Ask someone to think of any three-digit number (with no repeating digits), reverse the digits, and subtract the smaller number from the larger. When that person has the
answer, ask him/her to reverse the digits of the answer, and then add the answer and the reversed answer together. You can then tell him/her that the final result is 1089. How do you know that? I
will leave that for you to figure out! (That's problem solving at its best!)
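Rather than give the reasoning away, here is a brute-force confirmation that the described procedure really does always land on 1089. One subtlety: the difference must be treated as a three-digit number, so a difference of 99 is written 099 before reversing.

```python
# Brute-force check: every three-digit number with all distinct digits
# ends up at 1089 under the reverse-subtract-reverse-add procedure.
results = set()
for n in range(100, 1000):
    digits = str(n)
    if len(set(digits)) < 3:              # skip repeating digits, as instructed
        continue
    diff = abs(n - int(digits[::-1]))
    total = diff + int(f"{diff:03d}"[::-1])   # keep a leading zero on 99 -> 099
    results.add(total)
print(results)  # {1089}
```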
P.S. If you need help with the solution, check out the page above entitled Answers to Problems.
My guest blogger for this week is Cynthia. She, too, is a seller on Teachers Pay Teachers. She has been an educator for 24 years - 16 years teaching kindergarten, one year looping with her students
to first grade and seven years in 2nd where she currently teaches. I hope you enjoy her timely post about limiting paper copies.
2nd Grade Pad
Hello Everyone!
I am Cynthia from 2nd Grade Pad, the name of my TPT Store. I am so excited to be doing a guest blog for Vicky!
Has your school limited copies that you can make for your classroom?
By using three objects everyone has in their classroom, along with training your students in the process, you will find that planning is a snap and MUCH easier than making lots of copies each week.
No matter what your grade-level, implementing is a cinch!
Dry erase boards, dry erase markers, and an eraser are probably the three most important items in my classroom. I literally could not teach without them.
(All of the technology out there, and these are my MUST HAVES??? YES!!)
Each day in math, I work with a small group at my needs-based table. Each student has these three items. I work with them doing a spiral review that includes measurement,
adding, subtracting, money, problem of the day, time, and place value.
Above is an example from November. We go over one problem at a time.
The students write down their answer. When they are finished, they put the lid on the marker and place it on the table. This helps me to know when they are finished.
Here are some of the items we use.
If you aren't familiar with the Judy clock, (on the right) when you move the minute hand, it is attached to gears so that it automatically moves
the hour hand.
Next, all of the students show their answers, and we discuss them.
On the right is a sample of how they show the big number of the day.
In years past, I have shown this on my big screen. The students would each write their answers on the spiral math board, which I would laminate (included in the monthly units). The students would
keep these in their desk to use daily.
This option is also a paperless method.
If you like what you see, you can purchase the Daily Math Reviews by the month, or you can purchase the entire year at a discounted price here. My unit is for 2nd grade, but it can easily be used for
first or third. However, the spiral concept and work problems can certainly be used in any grade. I hope these ideas save lots and lots of paper copies!
I Spy Something Fishy
Cynthia's Blog
On your right is a free resource you might enjoy. My store also contains many other freebies if you would like to take a look.
Because I had surgery on my right hand, some of my fellow bloggers graciously agreed to be guest bloggers this month. I hope you enjoy reading math articles from other teachers who just might give
you a different perspective on how to teach math.
Brian's Blog
Hi! I am Brian from Hopkins' Hoppin' Happenings. I taught Kindergarten for three years, 2nd grade for five years, and did a short term in 5th grade. I am currently a substitute teacher. Today, I
would like to talk about how to make learning math fun in Kindergarten.
When I taught Kindergarten, I requested that each parent or guardian bring in food that we could use for math. Examples included Cheez-Its, Cheerios, Trix, Skittles, colored Goldfish, etc. Every
Friday, I taught or reviewed a math skill using the food! Two of many of the math skills covered were sorting and graphing. Let's say I was using colored Goldfish for the math lesson. I generally
placed the same number in each student baggie with the same colors of each so everyone would get the same answer. This also made it easy for me to check for accuracy. First, the students were asked
to sort the Goldfish according to color. Then they would line the different colors up on a graphing grid I had made. The students then removed one Goldfish at a time and colored in the space where it
had been. Next, I asked the whole class questions such as, “Which color of Goldfish are there the most of? Least of? How many green Goldfish are there? How many green and yellow goldfish are there in
all?” Afterwards the children were allowed to eat their goldfish! I have also had students graph and sort different kinds of cereal, Skittles, etc.
Another important math skill to use with food is patterning. Give students different kinds of food of different colors and have them create patterns with it. I give them a piece of paper on which
they glue down a couple of their patterns and then they are permitted to eat the rest. The patterns can be simple or difficult depending on the age of the children.
Food also works well for identifying numbers. Show the students a flash card with a numeral written on it. The students must count out that many skittles, Trix, etc.
Food also works well to teach addition and subtraction facts. For example: 3 + 1. The student counts out three Cheez-Its (or whatever food you are using) and adds one more. Have the student solve the
problem and write down how many there are all together. Here is another example, this time subtraction: 4 – 2. The student counts out four Cheez-Its but then eats two of them. This time the student
answers the question, “How many are left?”
You can also use food for math in 1st or 2nd grade and do the same graphing activity, but then ask higher level questions such as: “How many more orange Life Savers are there than green?” You can also
use stick pretzels and Cheerios to teach place value and two-digit addition or subtraction. Let’s assume the Cheerios are the ones and the pretzels represent the tens. When adding 18 + 13, the
students would trade 10 Cheerios for a pretzel or, if subtracting, trade a pretzel for 10 Cheerios.
For older students, food can be used to make groups or arrays for multiplication or to figure out how many are in each group for division. You can also give the students two colors of some food to
practice fractions.
Try using food in the classroom. Your students are sure to love learning and quickly grasp the math concepts because they are having fun!
Free Graphing Activity
Brian is offering an exclusive freebie for all of you who read my blog. It is entitled Goldfish Cracker Graphing Activity. Just click under the goldfish to download it.
Also, be sure to check out Brian's Store on Teachers Pay Teachers for more primary resources. While you are there, check out his other 37 freebies. | {"url":"http://gofigurewithscipi.blogspot.com/2013_01_01_archive.html","timestamp":"2014-04-20T15:57:34Z","content_type":null,"content_length":"209062","record_id":"<urn:uuid:b308b8d2-4b99-4df1-9ebe-634921a2c250>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
The integration of e^(x^2)
How about integration using summation of (x^n)/n! ?
That is definitely an option; however, we do not regard an infinite power series as a nice anti-derivative, nor as a finite combination of elementary functions.
We get then, as an anti-derivative, the term-by-term integral of the series: $F(x) = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{n!\,(2n+1)} + C$
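For what it's worth, the series antiderivative $F(x) = \sum_{n \ge 0} x^{2n+1}/(n!\,(2n+1))$ can be checked against a direct numerical integration of $e^{t^2}$ (plain Python, midpoint rule):

```python
import math

def F(x, terms=30):
    """Partial sum of the term-by-term antiderivative of e^(t^2)."""
    return sum(x**(2*n + 1) / (math.factorial(n) * (2*n + 1))
               for n in range(terms))

def numeric_integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

val_series = F(1.0)
val_quad = numeric_integral(lambda t: math.exp(t * t), 0.0, 1.0)
print(val_series, val_quad)  # both ≈ 1.46265...
```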
This is most definitely not a nice function! | {"url":"http://www.physicsforums.com/showthread.php?p=2667827","timestamp":"2014-04-19T07:34:31Z","content_type":null,"content_length":"65306","record_id":"<urn:uuid:eaa90f26-7737-4f96-9023-944e0729530d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Discretize input at specified interval
The Quantizer block passes its input signal through a stair-step function so that many neighboring points on the input axis are mapped to one point on the output axis. The effect is to quantize a
smooth signal into a stair-step output. The output is computed using the round-to-nearest method, which produces an output that is symmetric about zero.
y = q * round(u/q)
where y is the output, u the input, and q the Quantization interval parameter.
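The formula above is a one-liner in most languages. This Python sketch mirrors it, rounding halves away from zero so that, as the description says, the output is symmetric about zero (the function name is illustrative):

```python
import math

def quantize(u: float, q: float = 0.5) -> float:
    """y = q * round(u/q), with halves rounded away from zero so the
    stair-step output is symmetric about zero."""
    return math.copysign(q * math.floor(abs(u) / q + 0.5), u)

print(quantize(0.3))   # 0.5
print(quantize(-0.3))  # -0.5
print(quantize(1.1))   # 1.0
```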
Data Type Support
The Quantizer block accepts and outputs real or complex signals of type single or double. For more information, see Data Types Supported by Simulink in the Simulink^® documentation.
Parameters and Dialog Box
The interval around which the output is quantized. Permissible output values for the Quantizer block are n*q, where n is an integer and q the Quantization interval. The default is 0.5.
Simulink software by default treats the Quantizer block as unity gain when linearizing. This setting corresponds to the large-signal linearization case. If you clear this check box, the
linearization routines assume the small-signal case and set the gain to zero.
Specify the sample time of this Outport block. See Specify Sample Time in the online documentation for information on specifying sample times. The output of this block changes at the specified
rate to reflect the value of its input.
The sldemo_boiler model shows how you can use the Quantizer block.
The Quantizer block appears in the Boiler Plant model/digital thermometer/ADC subsystem.
The ADC subsystem digitizes the input analog voltage by:
● Multiplying the analog voltage by 256/5 with the Gain block
● Rounding the value to integer floor with the Quantizer block
● Limiting the output to a maximum of 255 (the largest unsigned 8-bit integer value) with the Saturation block
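The three ADC steps listed above compose into a short function. The 256/5 gain and the 255 upper limit come straight from the description; the lower clamp at 0 is an added assumption for out-of-range inputs.

```python
def adc(voltage: float) -> int:
    """Digitize a 0-5 V analog input to an unsigned 8-bit code:
    gain of 256/5, round toward floor, then saturate at 255."""
    code = int(voltage * 256 / 5)   # gain + floor quantization
    return max(0, min(code, 255))   # saturation to the 8-bit range

print(adc(0.0))   # 0
print(adc(2.5))   # 128
print(adc(5.0))   # 255 (saturated from 256)
```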
For more information, see Explore the Fixed-Point "Bang-Bang Control" Model in the Stateflow^® documentation. | {"url":"http://www.mathworks.se/help/simulink/slref/quantizer.html?s_tid=gn_loc_drop&nocookie=true","timestamp":"2014-04-23T12:19:42Z","content_type":null,"content_length":"41752","record_id":"<urn:uuid:cfb66d74-1bdb-4994-87ef-1f07508f3216>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00420-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polar Area using a Double Integral
May 25th 2010, 08:42 PM #1
Polar Area using a Double Integral
Q: Obtain the area inside $r=1+cos\theta$ and outside $r=3cos\theta$
My first attempt was $A=2\int_{\frac{\pi}{3}}^{\pi}\int_{3\cos\theta}^{1+\cos\theta}r\,dr\,d\theta$, which I quickly realized has incorrect angles, since $r=3\cos\theta$ loops around twice as fast as
the other graph. Thank you!
EDIT - I have just figured it out on my own! Sorry about that
Last edited by Em Yeu Anh; May 25th 2010 at 08:59 PM.
You need two integrals.
The circle $r=3\cos\theta$ doesn't reach the second quadrant.
So you integrate from $\pi/3$ to $\pi/2$ with the bounds you used
BUT in the second integral you integrate from $\pi/2$ to $\pi$
where the lower bound is zero not $3\cos\theta$
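The corrected two-integral setup can be sanity-checked numerically with a plain midpoint rule (no libraries). The result comes out to about 0.7854, which suggests the exact answer is $\pi/4$ — my own evaluation, not stated in the thread:

```python
import math

def midpoint(f, a, b, n=20_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The inner integral of r dr from r_in to r_out is (r_out^2 - r_in^2)/2.
piece1 = midpoint(lambda t: ((1 + math.cos(t))**2 - (3 * math.cos(t))**2) / 2,
                  math.pi / 3, math.pi / 2)
piece2 = midpoint(lambda t: (1 + math.cos(t))**2 / 2, math.pi / 2, math.pi)
area = 2 * (piece1 + piece2)   # factor of 2 for the symmetric lower half
print(area)  # ≈ 0.7853981..., i.e. pi/4
```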
May 25th 2010, 11:26 PM #2 | {"url":"http://mathhelpforum.com/calculus/146446-polar-area-using-double-integral.html","timestamp":"2014-04-19T11:16:34Z","content_type":null,"content_length":"35354","record_id":"<urn:uuid:a0ce4886-5501-4bb3-8e6b-42e98d6cd657>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Imperial Beach Algebra 2 Tutor
Find a Imperial Beach Algebra 2 Tutor
...I hope to be able to help someone in any course that they are struggling in soon. In obtaining both my physics and engineering degrees, calculus has been a necessary part of my everyday life. I
can give examples of why the subject is useful as well as explain the best way to apply the different co...
19 Subjects: including algebra 2, chemistry, calculus, writing
I am a San Diego native that chose to stay in this beautiful city and go to University of California, San Diego (UCSD). I graduated from high school as a lifetime member of the California Scholars
Federation, as an AP Scholar with Distinction, as a National AP Scholar, and as an IB Diploma recipient....
42 Subjects: including algebra 2, reading, English, Spanish
...Additionally, I lived in France for five years, which helped develop very strong and natural French language skills. I also spent one year tutoring in Seattle, WA, working with special needs
students pursuing their GEDs. I am an effective tutor because of my skill in assessing my student's needs, but also because of my ability to empathize with young learners.
14 Subjects: including algebra 2, French, geometry, ESL/ESOL
...I love the Chinese literature. In addition, I have a BA degree in Chinese Studies from University of California San Diego. My pronunciation and tones are perfect.
28 Subjects: including algebra 2, reading, writing, Chinese
...I am a Big Sister for the Big Brother, Big Sister Organization and I tutor my Little Sister whenever asked. I also private tutor college students in pre-algebra. I started tutoring in 2013 when
my Calculus teacher highly recommended me to a family friend.
6 Subjects: including algebra 2, geometry, prealgebra, linear algebra | {"url":"http://www.purplemath.com/imperial_beach_ca_algebra_2_tutors.php","timestamp":"2014-04-20T11:18:47Z","content_type":null,"content_length":"24148","record_id":"<urn:uuid:0b4c76e2-42cd-49a9-bfde-bc3a475325cd>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
simplex algorithm:can leaving variable enter the basis again?
August 11th 2012, 08:14 AM #1
simplex algorithm:can leaving variable enter the basis again?
Is it possible that variable Xi leaves the basis and enter the basis again in the next step? what about other steps (not next to current step)?
Last edited by ehsazh; August 11th 2012 at 08:20 AM.
Re: simplex algorithm:can leaving variable enter the basis again?
In each step of the simplex algorithm, exactly one variable leaves the basis and exactly one other variable enters the basis. That said, if the same variable leaves and re-enters the basis in subsequent steps,
then the basis doesn't change, and the algorithm enters an infinite loop. I don't remember exactly how simplex works, but supposing that simplex is correct, this case cannot happen. On the other
hand, it is completely normal for the same variable to leave the basis and then, in a future step, enter the basis again. Remember, simplex is an exponential time algorithm, and its worst running
time is determined by the number of different bases it has. Therefore, if a variable that left the basis never came back, then simplex would quickly run out of variables to enter the basis, and
it would be a polynomial time algorithm.
BUT!! I remember that there were some kind of "degenerate cases" that could bring problems to simplex's functioning, but right now I don't remember exactly what these "degenerate cases" are.
August 14th 2012, 12:38 PM #2
Curitiba - Brazil | {"url":"http://mathhelpforum.com/advanced-algebra/202033-simplex-algorithm-can-leaving-variable-enter-basis-again.html","timestamp":"2014-04-25T00:49:24Z","content_type":null,"content_length":"32858","record_id":"<urn:uuid:bbf05beb-7d3a-4551-aed9-b53ae4a942ad>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00215-ip-10-147-4-33.ec2.internal.warc.gz"} |
2169 K  
51 pp.
View  
Title: The Theory of Quantized Fields. II
Author(s): Schwinger, J.
Publication Date: 1951
Report Number: NP-4494
Unique Identifier: ACC0110
Other Numbers: OSTI ID: 4369628
Research Organization: Harvard University, Cambridge, Mass.
Contract Number: None
Sponsoring Organization: US Atomic Energy Commission (AEC)
Subject: Field Theory
Keywords: Physics; Action Principle; Bosons; Commutation Relations; Electromagnetic Fields; Fermions; Field Theory; Gauge Invariance; Matrices; Mechanics; Motion; Quantum Mechanics; Reflection;
Related Web Pages: Julian Schwinger and the Source Theory
Abstract: The arguments leading to the formulation of the Action Principle for a general field are presented. In association with the complete reduction of all numerical matrices into symmetrical
and anti-symmetrical parts, the general field is decomposed into two sets, which are identified with Bose-Einstein and Fermi-Dirac fields. The spin restriction on the two kinds of fields
is inferred from the time reflection invariance requirement. The consistency of the theory is verified in terms of a criterion involving the various generators of infinitesimal
transformations. Following a discussion of charged fields, the electromagnetic field is introduced to satisfy the postulate of general gauge invariance. As an aspect of the latter, it is
recognized that the electromagnetic field and charged fields are not kinematically independent. After a discussion of the field-strength commutation relations, the independent dynamical
variable of the electromagnetic field are exhibited in terms of a special gauge.
Some links on this page may take you to non-federal websites. Their policies may differ from this site. | {"url":"http://www.osti.gov/cgi-bin/rd_accomplishments/display_biblio.cgi/left?id=ACC0110&numPages=51&fp=","timestamp":"2014-04-16T11:32:40Z","content_type":null,"content_length":"7367","record_id":"<urn:uuid:7cc81606-8666-436d-9019-2a8cd1b5c6ca>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
What constitutes a rational expression? How would you explain this concept to someone unfamiliar with it? - WyzAnt Answers
How would you define what a rational exponent is? How about rational expression?
Provide two rational expressions with different denominators for your classmate
to add or subtract.
1 Answer
Hi Harry,
When we use the word "rational" in algebra we are referring to quantities that look like fractions (1/2, 1/x+4) or that can be made to look like fractions ( 2 2/3 = 8/3, 5 = 5/1). Therefore, a
rational exponent is a fractional exponent and a rational expression is an expression that contains a fraction. When you add or subtract rational expressions with unlike denominators they could be
numerical (1/2 + 2/3) or algebraic
(1/ x - 3/y).
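For the numerical pair, Python's built-in `fractions` module makes the common-denominator arithmetic concrete:

```python
from fractions import Fraction

# 1/2 + 2/3: the common denominator 6 gives 3/6 + 4/6 = 7/6
total = Fraction(1, 2) + Fraction(2, 3)
print(total)  # 7/6
```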
I hope that this helps you. | {"url":"http://www.wyzant.com/resources/answers/13281/what_constitutes_a_rational_expression_how_would_you_explain_this_concept_to_someone_unfamiliar_with_it","timestamp":"2014-04-18T03:04:43Z","content_type":null,"content_length":"44472","record_id":"<urn:uuid:c99e1456-b90c-4fcf-8202-212bbe86010f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
help with a couple conics issues please!
May 3rd 2009, 02:42 PM #1
May 2009
help with a couple conics issues please!
1. Find the focus and directrix of the equation. (y+3)^2 =8(x-2)
I got the focus is (4,-3) and the directrix is x=0
is this right?
2. Find the vertices and foci of this ellipse. x^2/49 +y2/9=1
For vertices I got: (7,0) and (-7,0) but it said I was wrong. And for foci I got (2 root 10, 0) and (-2 root 10, 0). I don't know if the foci are right, but I don't know how to put in square roots on
3. A hall 100 feet in length is to be designed as a whispering gallery. If the foci are located 22 feet from the center, how high will the ceiling be at the center?
No idea.
4. Find an equation for the hyperbola described. Foci at (-5,0) and (5,0); vertex at (3,0)
I got (x^2)/4 - (y^2)/3 =1 but it said I was wrong I don't understand.
5. Identify the conic x^2 +2xy+3y^2-2x+4y+10 = 0
I said none of these, but I'm wrong.
6. r = 8/ 4-2sintheta
i said directrix is 1/2 units below the pole and it said wrong.
Also, how do you know if the directrix is parallel or perpendicular?
7. Lastly
r= 4/ 2-3sintheta i said directrix is 1.5 units below the pole it said wrong
** NOTE: some of these problems may be webassign formatting but maybe not!
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/pre-calculus/87207-help-couple-conics-issues-please.html","timestamp":"2014-04-17T02:05:29Z","content_type":null,"content_length":"30122","record_id":"<urn:uuid:d1a3e35c-b706-4c42-9b27-e7f185329d8a>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
Their solution identifies a pattern, then uses the standard sum for n^2 to get to that formula. I tried a GF approach and it is too messy; I don't think it is a good approach for this part of the
Re: Linear Interpolation FP1 Formula
I can generate a gf but I can not get the zeroth coefficient from it.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I have the same problem too, it might be quicker to use the 'normal' method (the problem is aimed at students who have never encountered GFs).
Re: Linear Interpolation FP1 Formula
Okay, I will have a look at that paper. Thanks for the link.
Re: Linear Interpolation FP1 Formula
I found out that she leaves only a couple of minutes before I do. She has early lessons on Tuesday and Wednesday, so I can meet her on the way to school on those days.
She doesn't seem to be that interested in maths though... I just need to show her something nice, but I don't know what she is interested in. She just said she likes 'pure maths', but she only says
that because she has never really done any applied maths. To be truthful, she doesn't even tend to do well in maths... but I'll see if I can find out why she likes it.
Re: Linear Interpolation FP1 Formula
Probably some reason that will be difficult to understand.
Re: Linear Interpolation FP1 Formula
Her Dad has a PhD in physics, maybe he was pushy...
Re: Linear Interpolation FP1 Formula
Yeccchh! Ever heard of the teakettle principle?
Re: Linear Interpolation FP1 Formula
Isn't that when you simplify a problem to one you how to solve? I remember reading something about a joke, don't know what it is though...
Re: Linear Interpolation FP1 Formula
A physicist and a mathematician are each given an empty teakettle, a fire, and a water supply and are asked to boil water. They both fill the teakettle, place it on the fire, and boil the water. Now they
are both given a teakettle already filled with water. The physicist, after much thought, shouts "Eureka!" and places the teakettle on top of the fire and boils the water. The mathematician immediately
empties his teakettle. The physicist asks, "Why did you empty it?" The mathematician replies, "Because I already know how to solve the empty-teakettle problem."
Re: Linear Interpolation FP1 Formula
Haha, that is good. I'll remember that one.
Re: Linear Interpolation FP1 Formula
Vilenkin, the Russian combinatorialist, is the source.
Re: Linear Interpolation FP1 Formula
Never heard of him...
Re: Linear Interpolation FP1 Formula
Neither did I until I came across his book. The Soviet authors did not get a lot of exposure over here.
Re: Linear Interpolation FP1 Formula
It's the same here. We are never taught the history of any maths, or where things came from.
Re: Linear Interpolation FP1 Formula
That is what I have found. It is a shame, the history is fascinating.
Re: Linear Interpolation FP1 Formula
It is a shame indeed. I was asking a maths teacher about Leibniz and do you know what he said?
"Oh, Leibniz! Those cost about £1.99 just across the street."
Turns out he wasn't joking, he thought Leibniz was a chocolate biscuit, not a man.
Re: Linear Interpolation FP1 Formula
I had a similar experience. I tried to compliment a supposedly pretty good math type with a phrase from Newton and one of the Bernoullis. He had never heard of it.
Remember we were talking about cf's and Pell equations? There is an example: Pell had nothing to do with that equation. It was Fermat's! A historical error names them Pell equations.
Re: Linear Interpolation FP1 Formula
Oh I see -- I've seen them called 'Pell-Fermat equations'... I do not really know what else Pell is famous for doing though. Pell equations are all I can associate with him.
Argand diagrams are a historical error too, aren't they? I don't think someone called Argand discovered them.
Re: Linear Interpolation FP1 Formula
I do not know about that one, I will have to look it up.
The Binet formula for the Fibonacci numbers, that was discovered by De Moivre, not Binet.
Re: Linear Interpolation FP1 Formula
I did not know that, and we never do question these things when we learn about them. I just accepted it...
Venn diagrams were actually first used by Euler I think...
Re: Linear Interpolation FP1 Formula
I heard that. It is strange that in some cases they did not even get the name of the discoverer right. Makes you wonder what other mistakes there are.
Re: Linear Interpolation FP1 Formula
Hopefully the errors are only historical ones...
Re: Linear Interpolation FP1 Formula
That is what we hope. I do not agree with cutting it out of the educational process. They should teach a little bit of the background of these men.
Re: Linear Interpolation FP1 Formula
Even fellow mathematicians care little for history or even for the beauty of maths itself. Not that those are necessary pre-requisites to study it, though. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=256948","timestamp":"2014-04-20T16:36:26Z","content_type":null,"content_length":"35889","record_id":"<urn:uuid:33e5e7c6-411d-41ec-b387-ca2371a4521c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sliding Mode Control for Mass Moment Aerospace Vehicles Using Dynamic Inversion Approach
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 284869, 11 pages
Research Article
Sliding Mode Control for Mass Moment Aerospace Vehicles Using Dynamic Inversion Approach
College of Automation, Harbin Engineering University, Harbin 150001, China
Received 26 July 2013; Accepted 28 August 2013
Academic Editor: Rongni Yang
Copyright © 2013 Xiao-Yu Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The moving mass actuation technique offers significant advantages over conventional aerodynamic control surfaces and reaction control systems, because the actuators are contained entirely within the
airframe geometrical envelope. Modeling, control, and simulation of Mass Moment Aerospace Vehicles (MMAV) utilizing moving mass actuators are discussed. Dynamics of the MMAV are separated into two
parts on the basis of two-time-scale separation theory: the dynamics of the fast states and the dynamics of the slow states. Then, in order to restrain chattering and maintain the tracking
performance of the system under aerodynamic parameter perturbations, a flight control system is designed for each of the two subsystems utilizing a fuzzy sliding mode control approach.
The simulation results demonstrate the effectiveness of the proposed autopilot design approach. Meanwhile, the chattering phenomenon that frequently appears in conventional variable structure
systems is also eliminated without deteriorating the system robustness.
1. Introduction
Some of the earliest flight vehicles were controlled by moving the body of the pilot to shift the center of mass (c.m.) of the vehicle. The change in vehicle center of mass alters the relative
location of the center of mass with respect to the external forces, thereby effecting a change in the vehicle's motion. Recently, mass movement has been proposed as a control methodology for flight
vehicles in atmospheric and exoatmospheric engagements. Advances in aerodynamics subsequently made the moving-mass approach to flight control obsolete in all but a few specialized applications. In
recent years, techniques of controlling the flight of missiles have gravitated to systems that deliver relatively large amounts of control authority. Several studies have suggested that a Mass Moment
Control System (MMCS) appears to offer significant design and cost advantages [1–6], such as controlling missiles at extreme Mach numbers and meeting the need for maneuverability and agility. The
MMCS changes the vehicle center of mass relative to the external forces to generate the desired control moments. For instance, if the thrust is aligned with the vehicle longitudinal body axis
containing the nominal center of mass, moving the center of mass off the body centerline will result in thrust moments about the pitch-yaw axes. Additionally, roll moments will be generated if the
thrust or drag has an angular misalignment with respect to the longitudinal axis, or if the vehicle is subject to an aerodynamic lift force. Mass moment control offers several advantages [7, 8]: (1) all of the MMCS mechanism is housed inside the vehicle, so it does not alter the aerodynamic configuration, which benefits terminal-attack accuracy; (2) the actuators of the MMCS are internal moving masses, which reduce the thermal load on the airframe and avoid both gaps in the vehicle's surface and ablation of control surfaces; (3) by exploiting the aerodynamic forces generated in high-speed flight, energy consumption can be reduced and effective control obtained, avoiding the conflict between fuel consumption and the control moment that arises with lateral jets.
Sliding mode control is a robust control technique which has many attractive features such as robustness to parameter variations and insensitivity to disturbances [9–12]. The sliding mode controller
is composed of an equivalent control part, which describes the behavior of the system when the trajectories stay on the sliding manifold, and a variable structure control part, which forces the trajectories to reach the sliding manifold and prevents them from leaving it. Sliding mode control is one of the best choices for controlling perturbed systems with time-delay [13–15].
The price for achieving this robustness/insensitivity to disturbances is control chattering. The traditional ways of reducing chattering are as follows: (a) replacing the discontinuous control function with a "saturation" or "sigmoid" function [16, 17]. This approach yields continuous control and eliminates chattering; however, it constrains the sliding system's trajectories not to the sliding surface but to its vicinity, losing robustness to the disturbances. (b) Using higher-order sliding mode control techniques [18–24]. This approach drives the sliding variable and its consecutive derivatives to zero in the presence of disturbances/uncertainties, increasing the accuracy of sliding variable stabilization, and has been successfully applied to the control of electropneumatic actuators [25, 26]. Nonetheless, the main challenge of high-order sliding mode controllers is their use of high-order time derivatives of the sliding variable. It is worth
noting that some second-order sliding mode controllers, such as the popular super-twisting algorithm [27] and the gain-commuted controller [28], require only measurement of the sliding variable, whereas other second-order sliding mode controllers also need the time derivative of the sliding variable. (c) Using controllers with dynamical gains. Recently, adaptive sliding mode controllers have been proposed, the interest being the adaptation of the gain magnitude with respect to uncertainty/perturbation effects; a reduced gain then induces lower chattering. In [29], an adaptive (first-order) sliding mode controller has been proposed and evaluated for the control of an electropneumatic actuator. (d) Another technique is the use of fuzzy sliding mode control [30, 31]. The main advantage of this method is that the performance of the system is improved in the sense of removing the chattering, in comparison with the same SMC technique without a fuzzy logic algorithm, and the robust behavior of the system is not deteriorated.
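As a rough illustration of option (a) above — using a toy first-order plant, not the MMAV model — the following sketch compares the discontinuous sign reaching law with a boundary-layer (saturation) version, counting control-sign reversals as a crude measure of chattering:

```python
import math

def simulate(reach, dt=0.001, steps=4000):
    """Integrate xdot = u + d under the SMC law u = -k*reach(s), with s = x."""
    x, k = 1.0, 2.0
    switches, prev_sign = 0, 0
    for i in range(steps):
        d = 0.5 * math.sin(10 * i * dt)  # bounded disturbance, |d| <= 0.5 < k
        s = x                            # sliding variable: drive x to zero
        u = -k * reach(s)
        sgn = (u > 0) - (u < 0)
        if prev_sign != 0 and sgn != 0 and sgn != prev_sign:
            switches += 1                # a control sign reversal = chattering
        if sgn != 0:
            prev_sign = sgn
        x += (u + d) * dt
    return abs(x), switches

sign = lambda s: float((s > 0) - (s < 0))           # discontinuous law
sat = lambda s, w=0.05: max(-1.0, min(1.0, s / w))  # saturated (boundary layer)

err_sign, sw_sign = simulate(sign)
err_sat, sw_sat = simulate(sat)
print(sw_sign, sw_sat)  # the saturated law switches far less often
```

Both laws keep the state near zero, but the saturated law trades exact invariance of the sliding surface for continuous control, which is exactly the robustness-for-smoothness trade-off described above.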
Considering the above-mentioned issues, in this paper, we investigate the control of MMAV using FSMC based on dynamic inversion approach. This paper is organized as follows. In Section 2, the
mathematical model of MMAV is presented. Based on dynamic inversion, Section 3 gives the main methodological results concerning the fuzzy sliding mode control algorithm. In Section 4, simulation
demonstrates the ability of the controller to effectively control the MMAV's motion. Conclusions are given in Section 5.
2. Mass Moment Aerospace Vehicles Model
2.1. The General Dynamical Model of the MMAV
The basic principle by which an MMCS is able to control the vehicle’s motion is to produce the control torque by using the aerodynamic forces and moving the masses within the MMAV to offset the c. m.
of system.
Suppose that the MMAV includes $n$ moving masses and that the mass of the MMAV's shell is $m_B$. The mass of the $i$th moving mass is $m_i$, so the total mass of the MMAV is $m = m_B + \sum_{i=1}^{n} m_i$, and the mass ratio of the $i$th moving mass is $\mu_i = m_i / m$. In the ground frame, the velocity of the center of the MMAV is $\mathbf{v}$ and the acceleration is $\mathbf{a}$. Let the position of the $i$th moving mass be $\mathbf{r}_i$ in the body fixed frame and $\mathbf{R}_i$ in the ground frame; the two are related through the body-to-ground frame transformation.
The coordinates of MMAV’s c. m. in the ground frame are given by
After derivation, the translational equation of the MMAV in the ground frame can be presented as follows:
Then, the translational equation of the MMAV in the body fixed frame is as follows: where is the transformation matrix from the body fixed frame to the ground frame, is the antisymmetry matrix of the
angular velocity of the MMAV in the ground frame. is the antisymmetry matrix of the angular velocity of the MMAV in the body fixed frame.
Correspondingly, the force equation of the th moving mass in the body fixed frame is given by
According to D’Alembert principle, the rotational equation in body coordinates is obtained as follows: where, , is the antisymmetric matrix of the th moving mass in the body fixed frame
representing the position coordinates .
2.2. The Dynamical Model of the MMAV with Three Moving Masses
The structure diagram of the MMAV with three moving masses is shown in Figure 1. To adjust the flying attitude quickly and decrease coupling, one mass is fixed along the longitudinal axis of the body fixed frame, and the other two masses are fixed along the two radial directions through the MMAV's axis.
The mass of the MMAV’s shell is . The mass of axial moving mass is , and the coordinate in the body fixed frame is . The mass of radial moving mass is , and the coordinate in the body fixed frame is
. The mass of radial moving mass is and the coordinate in the body fixed frame is . So, the total mass is , and the mass ratios are , , and , respectively.
This section derives the equations of motion fully accounting for the dynamic coupling between the four bodies. The moving masses are allowed to translate with respect to the MMAV’s shell but are not
allowed to rotate with respect to the MMAV’s shell. Both the MMAV and the moving masses are assumed to be rigid bodies.
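The geometric effect can be sketched numerically. All values below are illustrative placeholders (the paper's masses and strokes were lost from the text); the point is that the assembly c.m. shifts by the mass-weighted average of the moving-mass positions, giving the thrust and aerodynamic forces a moment arm:

```python
# illustrative placeholder values, not the paper's: shell c.m. at the origin
m_shell = 100.0                  # kg, hypothetical shell mass
masses = [8.0, 4.0, 4.0]         # axial mass and the two radial masses

def cm_offset(positions):
    """c.m. of the shell (at the origin) plus point masses at body-frame positions."""
    total = m_shell + sum(masses)
    return tuple(sum(m * p[i] for m, p in zip(masses, positions)) / total
                 for i in range(3))

# axial mass moved 0.1 m along x, one radial mass moved 0.05 m along y
offset = cm_offset([(0.1, 0.0, 0.0), (0.0, 0.05, 0.0), (0.0, 0.0, 0.0)])
print(offset)   # a few millimetres of c.m. shift = the control moment arm
```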
In the body fixed frame, the interaction between axial moving mass and MMAV is , the interaction between radial moving mass and MMAV is , and the interaction between radial moving mass and MMAV is .
Equation (4) can be presented below
The vector translational dynamics of MMAV can be obtained by (3) as follows
The rotational dynamics of MMAV obtained by (5) are given by (8): where is the inertia tensor of MMAV about its center of mass; the antisymmetric matrix of position coordinates of the three moving
masses in the body fixed frame are , , and , respectively.
Furthermore, the equations of motion of the MMAV system also include relative movement functions and nonlinear aerodynamic functions. The equations of motion clearly indicate that the MMCS is a complex nonlinear system with variable coefficients and large disturbances caused by the accelerations and velocities of the moving masses.
3. Sliding Mode Control System Design
3.1. Hierarchy-Structured Dynamic Inversion of MMAV
Although the design of an MMCS using moving-mass actuation appears to be conceptually straightforward, difficulties arise as a result of the highly coupled and nonlinear nature of the system dynamics. A major concern is how to coordinate the moving mass positions to produce the required moment.
Dynamic inversion is one of the nonlinear flight control techniques based on feedback linearization. Consider the following MMAV nonlinear system [32]: where is the state variable, is the control
input, and is the output to be controlled by the control input . Differentiating in (10), in general, one obtains
It is assumed here that is invertible with respect to . Then, consider the following nonlinear feedback control using the inverse dynamics of (11): where is the auxiliary input. Substituting (12) into (11), one obtains
Equation (13) shows that the relation between the output and the auxiliary input is now feedback linearized. is often given as shown in (14), so that (13) will form a first-order servo system, where
is a feedback gain matrix and is a commanded value for . Feedback linearization term in cancels out the inherent stability of the dynamical system, and then the outer tracking loop given in (14)
rearranges the corresponding nonlinear modes to realize the desired output dynamics .
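A minimal numeric sketch of the inversion in (11)–(14), using a made-up scalar plant rather than the MMAV aerodynamics: with ẏ = f(y) + g(y)u and g never zero, the control u = g⁻¹(v − f) with v = K(y_c − y) reduces the closed loop to the first-order servo ẏ = K(y_c − y):

```python
import math

# hypothetical scalar plant ydot = f(y) + g(y)*u, with g(y) in [1, 3] (invertible)
f = lambda y: -math.sin(y) + 0.3 * y   # assumed nonlinear drift
g = lambda y: 2.0 + math.cos(y)        # assumed control effectiveness, never zero

def inversion_control(y, y_cmd, K=5.0):
    """u = g(y)^-1 * (v - f(y)) with v = K*(y_cmd - y), so ydot = K*(y_cmd - y)."""
    v = K * (y_cmd - y)
    return (v - f(y)) / g(y)

y, y_cmd, dt = 0.0, 1.0, 0.001
for _ in range(3000):                  # 3 s, much longer than the 1/K = 0.2 s lag
    u = inversion_control(y, y_cmd)
    y += (f(y) + g(y) * u) * dt        # integrate the true nonlinear plant
print(y)                               # converges to the commanded value
```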
But it is known that the number of the modes that dynamic inversion can rearrange is equal to the sum of the relative degrees of the input-to-output relations and that the remaining modes are
unobservable from . These unobservable modes are called zero dynamics. If some of the input-to-output relations are nonminimum phase, the desired output dynamics cannot be realized for the reason
that the corresponding zeros are located in the right half plane and, therefore, result in unstable zero dynamics. Unfortunately, some input-to-output relations in the aircraft dynamics are
nonminimum phase due to the derivatives of aerodynamic forces with respect to the control surface deflections. This fact prevents direct application of dynamic inversion to MMCS.
The problem can be avoided by two time-scale separation, where the dynamics are separated into the fast one and the slow one according to the time-scales of the variables. The fast variables are used
to control the slow state variables, and the fast variables are controlled by the control input. The MMAV nonlinear system is rewritten as follows: where is the fast state, is the slow state, is the
control input, and is the output to be controlled. For simplicity, the output to be controlled is assumed to be . Then, the input-to-output relations in slow time-scale can be derived as follows:
One obtains from (18), which is the commanded values for , using dynamic inversion as follows: where is the auxiliary input for slow time-scale controller and is the feedback gain matrix. If , and
the following relation holds
Finally, is derived in fast time-scale as follows using dynamic inversion again so that will follow its command . Consider where is the auxiliary input for fast time-scale controller and is the
feedback gain matrix.
Note that if then the asymptotic stability of in (20) is not necessarily guaranteed but depends on the fast dynamics. From the viewpoint of the singular perturbation theory, the asymptotic stability
is guaranteed, when the two dynamics are well separated; that is, the time-scales are not very close to each other. Although no theoretical background of the asymptotic stability of the slow variable
is given in this paper, it will be evaluated by linearization and 6DOF nonlinear simulation. Two time-scale separation is easily expanded into multi-time-scale separation. Dynamic inversion using
multi-time-scale separation and multiloop closure method is called hierarchy-structured dynamic inversion [33]. The following section details chattering free fuzzy sliding mode control using dynamic
inversion to MMAV systems.
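The two-loop structure can be sketched with a toy cascade (not the MMAV equations): the slow loop inverts θ̇ = q to produce the command for the fast state q, and the fast loop inverts the assumed fast dynamics to produce the actual input, with the fast bandwidth set well above the slow one so the time scales stay separated:

```python
# toy cascade: slow state theta with thetadot = q; fast state q with
# qdot = -0.5*q + 2.0*u (assumed dynamics); q is commanded by the slow loop
K_slow, K_fast = 2.0, 20.0      # fast bandwidth 10x the slow one
theta, q, theta_cmd, dt = 0.0, 0.0, 1.0, 0.0005

for _ in range(10000):          # 5 s of simulated time
    # slow-time-scale inversion: thetadot = q, so q_cmd = K_slow*(theta_cmd - theta)
    q_cmd = K_slow * (theta_cmd - theta)
    # fast-time-scale inversion: qdot = -0.5*q + 2*u  =>  u = (v + 0.5*q) / 2
    v = K_fast * (q_cmd - q)
    u = (v + 0.5 * q) / 2.0
    q += (-0.5 * q + 2.0 * u) * dt
    theta += q * dt

print(theta)                    # the slow state settles on its command
```

Shrinking the ratio K_fast/K_slow toward 1 degrades the separation assumption, which is exactly the singular-perturbation caveat noted above.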
3.2. Chattering Free Fuzzy Sliding Mode Control of MMAV
Without loss of generality, parameter uncertainty and external disturbance are taken into account in the MMAV control system. Then, the dynamics equations (15) and (16) can be rewritten in abbreviated form as: where and denote uncertain terms representing the unmodeled dynamics or structural variation of the MMAV system, owing to the time variations of the atmospheric coefficients, and and denote the disturbances of the system.
In the practical MMAV system, the uncertain term and the disturbance term are bounded, that is, and , where are four positive and known constants.
The control problem of a practical system is to get the system to track an -dimensional desired vector , , which belongs to a class of continuous functions on . Let the tracking error be
The control goal is that, for any given target , a sliding mode control (SMC) is designed, such that the resulting state response of the tracking error vector satisfies where denotes the Euclidean
norm of a vector.
SMC is an efficient tool to control complex high-order dynamic plants operating under uncertainty conditions due to its order reduction property and low sensitivity to disturbances and plant
parameter variations. In SMC, the states of the controlled system are first guided to reside on a designed surface (i.e., the sliding surface) in state space and then confined there with a shifting
law (based on the system states). A time varying surface is defined in the state space by equating the variable , defined below, to zero,
Here, is a strict positive constant, taken to be the bandwidth of the system [17]. As our problem formulation is a first-order differential equation, then , and the relation (26) can be rewritten as
When the closed loop system is in the sliding mode, it satisfies , and then the equivalent control law of the fast dynamics of the MMAV system is obtained by
In practical systems, the system uncertainty and external disturbance are unknown, and the implemented equivalent control input is modified as
According to the Lyapunov stability theory [17], a Lyapunov function is defined as
Then, the derivative of becomes
In the above equation, if is negative for all , then the so-called reaching condition [17] is satisfied. That is, the control is designed to guarantee that the states are hitting on the sliding
surface .
The reaching control law is selected as , and the overall control is determined by where is the switching gain.
Based on the Lyapunov theory, the fast dynamics states approach the hyperplane, if . The error vector asymptotically reduces to zero once the system states are on .
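The reaching argument can be checked numerically on an assumed first-order error dynamics (not the MMAV model): with ṡ = u + d, |d| ≤ D, the law u = −k·sign(s) with k = D + η makes V̇ = s·ṡ ≤ −η|s| for every s ≠ 0 and worst-case disturbance:

```python
import math

D, eta = 0.5, 0.1       # disturbance bound and reaching margin (assumed values)
k = D + eta             # switching gain must dominate the disturbance bound

def sdot(s, d):
    u = -k * math.copysign(1.0, s)  # reaching law u = -k*sign(s)
    return u + d

# verify Vdot = s*sdot <= -eta*|s| on a grid of states and worst-case disturbances
ok = all(s * sdot(s, d) <= -eta * abs(s) + 1e-12
         for s in (-1.0, -0.3, 0.2, 0.8)
         for d in (-D, 0.0, D))
print(ok)
```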
The finite time delays and limitations of practical control systems render the implementation of such control signals problematic in real-world systems. In other words, the sign function in overall
control will cause the control input to produce the chattering phenomenon. In the current study, this problem is resolved through the application of a fuzzy logic control (FLC) scheme to determine an
appropriate reaching law. Furthermore, if system uncertainties are large, the sliding mode controller would require a high switching gain with a thicker boundary layer to eliminate the resulting
higher chattering effect. However, if we continuously increase the boundary layer thickness, we are actually reducing the feedback system to a system without a sliding mode. To tackle these
difficulties, recently, FSMC has also been used for this purpose, which is shown to be quite effective [34, 35].
In this paper, in order to eliminate the chattering problem, a fuzzy inference engine is used for reaching phase, and fuzzy sliding mode control methodology is proposed. The main advantage of this
method is that the robust behavior of the system is guaranteed. The second advantage of the proposed scheme is that the performance of the system in the sense of removing chattering is improved in
comparison with the same SMC technique without using FLC.
The equivalent control part is the same as that in (29), and the reaching law is selected as where is the normalization factor of the output variable, and is the output of the FSMC, which is
determined by the normalized and .
The fuzzy control rules can be represented as the mapping of the input linguistic variables and to the output linguistic variable as follows:
The membership function of input linguistic for each set of variables and and the membership functions of the output linguistic variable , , are shown in Figure 2, respectively. Here, is denoted as
Our proposed FLC has two inputs and one output: , , and the control signal, respectively. The linguistic variables for the inputs and output are classified as NB, NM, NS, ZE, PS, PM, and PB. Inputs and outputs are all normalized in the interval with equal span, as shown in Figure 2. The linguistic labels used to describe the fuzzy sets are "Negative Big" (NB), "Negative Medium" (NM), "Negative Small" (NS), "Zero" (ZE), "Positive Small" (PS), "Positive Medium" (PM), and "Positive Big" (PB). It is possible to assign a set of decision rules as shown in Table 1. The fuzzy rules are extracted in such a way that the stability of the system is satisfied, as explained in more detail above. These rules contain the input/output relationships that define the control strategy. Each control input has seven fuzzy sets, so that there are at most 49 fuzzy rules. Consider
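A reduced sketch of the fuzzy reaching-law computation: triangular memberships, a hand-written rule table, and weighted-average defuzzification. Three labels (N, ZE, P) stand in for the paper's seven, and the rule table is illustrative, not the one in Table 1:

```python
def tri(x, a, b, c):
    """Triangular membership on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# three-label version of the seven-label scheme, on the normalized interval [-1, 1]
labels = {"N": (-2.0, -1.0, 0.0), "ZE": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
centers = {"N": -1.0, "ZE": 0.0, "P": 1.0}   # output singleton centers

# illustrative rule table on (s, sdot): far from the surface -> strong correction
rules = {("N", "N"): "P", ("N", "ZE"): "P", ("N", "P"): "ZE",
         ("ZE", "N"): "P", ("ZE", "ZE"): "ZE", ("ZE", "P"): "N",
         ("P", "N"): "ZE", ("P", "ZE"): "N", ("P", "P"): "N"}

def fuzzy_reaching(s, sdot):
    """Min for rule firing strength, weighted average for defuzzification."""
    num = den = 0.0
    for (ls, ld), lout in rules.items():
        w = min(tri(s, *labels[ls]), tri(sdot, *labels[ld]))
        num += w * centers[lout]
        den += w
    return num / den if den else 0.0

print(fuzzy_reaching(0.0, 0.0))    # on the surface: no correction
print(fuzzy_reaching(-0.8, -0.8))  # far below the surface: strong positive output
```

Because the membership functions and rule table are symmetric, the output is odd in its arguments, which is what keeps the correction smooth (and hence chattering-free) as the state crosses the sliding surface.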
In the following theorem, the proposed scheme (36) is shown to drive the nonlinear system (22) onto the sliding surface . That is, the reaching condition is guaranteed.
Theorem 1. Consider the uncertain nonlinear system (22) controlled by in (36), where is in (29), is in (34) and . Then, the error state trajectory converges to the sliding surface .
Proof. Let , then
So, if we select , one can conclude that the reaching condition is always satisfied. This completes the proof.
Theorem 2. One obtains from (23), which is the commanded values for , using dynamic inversion as follows:
If is satisfied, then the error state trajectory converges to the sliding surface .
Proof. According to the Lyapunov stability theory [17], a Lyapunov function of the slow dynamics of the MMAV system is defined as
Then, the derivative of becomes
So, if we select , one can conclude that the reaching condition is always satisfied. This completes the proof.
4. Simulation Results and Discussions
In order to demonstrate the performance of the proposed flight control system, simulations are presented in this section. The initial conditions of the engagement are given in [25]. The transfer function of the MMAV's actuator masses is described as follows: where is the equivalent time constant and is the damping ratio of the three moving masses. The time constant and damping ratio are tuned precisely in order to obtain the best possible performance in response to a command signal. The time constant and damping ratio of the axial moving mass are chosen as and , and the time constant and damping ratio of the radial moving masses are chosen as and , respectively.
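The actuator model above is the standard second-order lag ω_n²/(s² + 2ζω_n s + ω_n²). The paper's tuned constants did not survive extraction, so the sketch below simulates a unit step response with illustrative values ω_n = 30 rad/s and ζ = 0.7:

```python
def step_response(wn, zeta, t_end=2.0, dt=0.001):
    """Semi-implicit Euler simulation of xddot = wn^2*(u - x) - 2*zeta*wn*xdot."""
    x = xd = 0.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        xdd = wn * wn * (1.0 - x) - 2.0 * zeta * wn * xd   # unit step input u = 1
        xd += xdd * dt
        x += xd * dt
        peak = max(peak, x)
    return x, peak

final, peak = step_response(wn=30.0, zeta=0.7)   # illustrative, not the paper's values
overshoot = peak - 1.0                           # about 4.6% is expected for zeta = 0.7
print(final, overshoot)
```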
The needed moment coefficients and aerodynamic coefficients are determined through a look-up table using the current flight status (i.e., Mach number, altitude). Moreover, without loss of generality, in all the simulations the uncertainty/disturbance terms in the dynamics equations are randomly selected within 15% of their nominal values.
Furthermore, in order to show the full potential of the proposed control system, the controller parameters (i.e., ) are optimally chosen using a genetic algorithm (GA) [36]. In this case, the cost function to be minimized is where , , and are the weighting matrices.
Genetic algorithms (GA) search the solution space of a function through the use of simulated evolution, that is, a survival-of-the-fittest strategy. In general, the fittest individuals of any population tend to reproduce and survive to the next generation, thus improving successive generations. However, inferior individuals can, by chance, survive and also reproduce. GAs have been shown to solve linear and nonlinear problems by exploring all regions of the state space and exponentially exploiting promising areas through mutation, crossover, and selection operations applied to individuals in the population. Thus, the main advantage of using GAs is that they do not get trapped in local minima, and they can use any cost function that can be computed in a reasonable amount of time.
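As a toy stand-in for the GAOT toolbox of [36], the following minimal GA tunes a single feedback gain k to trade tracking error against control effort; the cost, plant, and GA operators are illustrative, not the paper's:

```python
import random

random.seed(0)  # deterministic run for reproducibility

def cost(k):
    """Tracking error of xdot = -k*x from x0 = 1, plus a control-effort penalty."""
    x, J, dt = 1.0, 0.0, 0.01
    for _ in range(200):
        u = -k * x
        x += u * dt
        J += (x * x + 0.01 * u * u) * dt
    return J

def ga_minimize(f, lo=0.0, hi=20.0, pop_size=20, gens=30):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        survivors = pop[: pop_size // 2]        # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = 0.5 * (a + b)               # arithmetic crossover
            child += random.gauss(0.0, 0.5)     # mutation
            children.append(min(hi, max(lo, child)))
        pop = survivors + children
    return min(pop, key=f)

k_best = ga_minimize(cost)
print(k_best, cost(k_best))
```

Keeping the fittest half each generation is a simple elitism scheme: the best gain found so far is never lost, which is what lets the search settle near the optimum despite the random mutations.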
In practical applications, the acceleration commands () are determined by a guidance loop. According to the magnitude and frequency of spiral maneuvering targets [37], the commands and are given by sinusoidal signals with amplitude 10 and frequency 3 rad/s.
The mathematical simulation of MMAV is performed using sliding mode control and fuzzy sliding mode control based on dynamic inversion designed in this paper. The simulation results are shown in
Figures 3 and 4, respectively.
In Figures 3(d), 3(e), 4(d), and 4(e), the dotted lines are the given acceleration commands and the solid lines are the actual accelerations. The results show that the longitudinal displacement of the moving mass is constrained within ±0.16 m, and the radial displacements of the two moving masses are constrained within ±0.08 m (the maximum displacement is ±0.05 m); thus, with only a minute displacement of the moving masses, the MMAV can be controlled, and the control system has favorable dynamic performance that meets the design requirements. The simulation of SMC without the fuzzy logic function was also carried out; the displacements of the three moving masses in the MMAV are shown in Figure 3. It can be seen that high-frequency chattering appears in the control signals, which means the three moving masses would reciprocate at high frequency; this chattering should be restrained for the sake of energy saving and extended service life. Correspondingly, Figure 4 shows that, using FSMC, the high-frequency chattering disappears from the control signals and from the displacements of the three moving masses. According to these results, the method restrains the high-frequency chattering of the system effectively.
5. Conclusions
An autopilot for a nonlinear six-degree-of-freedom MMAV based on fuzzy sliding mode control, using dynamic inversion techniques, is introduced in this paper. Simulation results indicate that the resulting control system works well and effectively for MMAV flight control. Because the stability control mode used by an MMAV with three moving masses is three-channel, and the derived mathematical model is complicated, the dynamical model of the MMAV remains nonlinear even after reasonable simplification, which makes the control system hard to design. Two time-scale separation theory is therefore applied, dividing the dynamical model of the MMAV into the dynamics of the fast states and the dynamics of the slow states; for the two dynamical subsystems, a fuzzy sliding mode control system is designed based on dynamic inversion. This approach effectively improves the robustness of dynamic inversion in addition to restraining system chattering. The simulation results show that the flight control system of the MMAV has good dynamic behavior and strong robustness. As the mechanism of mass moment control is very complicated, research on it is at an early stage, and public references are insufficient, we give only some useful discussion of the control system design method for the MMAV in this paper.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the Natural Science Foundation of Heilongjiang Province of China (F201221), the Training Program of Harbin Engineering University for the National Natural Science Foundation of China, and the Fundamental Research Funds for the Central Universities of China (HEUCF100417, HEUCF130402, HEUCFX41302).
References
1. T. Petsopoulos and F. J. Regan, "A moving-mass roll control system for a fixed-trim reentry vehicle," AIAA Paper 94-0033, pp. 1–11, Reno, Nev, USA, 1994.
2. R. H. Byrne, B. R. Sturgis, and R. D. Robinett, "A moving mass trim control system for reentry vehicle guidance," Tech. Rep. AIAA-96-3438-CP, pp. 644–650, San Diego, Calif, USA, 1996.
3. R. D. Robinett III, B. R. Sturgis, and S. A. Kerr, "Moving mass trim control for aerospace vehicles," Journal of Guidance, Control, and Dynamics, vol. 19, no. 5, pp. 1064–1070, 1996.
4. C. A. Woolsey and N. E. Leonard, "Moving mass control for underwater vehicles," in Proceedings of the American Control Conference (ACC '02), pp. 2824–2829, Anchorage, Alaska, USA, May 2002.
5. P. K. Menon, S. S. Vaddi, and E. J. Ohlmeyer, "Finite-horizon robust integrated guidance-control of a moving-mass actuated kinetic warhead," Tech. Rep. AIAA 2006-6787, pp. 1–13, Keystone, Colo, USA, 2006.
6. J.-W. Li, B.-W. Song, and C. Shao, "Tracking control of autonomous underwater vehicles with internal moving mass," Acta Automatica Sinica, vol. 34, no. 10, pp. 1319–1323, 2008.
7. X.-Y. Zhang, Y.-Z. He, and Z.-C. Wang, "Robust control of mass moment interception missile based on ${H}_{\infty }$ performance characteristics," Acta Aeronautica et Astronautica Sinica, vol. 28, no. 3, pp. 634–640, 2007.
8. J. Liu, X. Gao, and Y. Ma, "Study on guidance and control technology of mass moment vehicle," in Proceedings of the IEEE International Conference on Control and Automation (ICCA '09), pp. 679–684, Christchurch, New Zealand, December 2009.
9. V. I. Utkin, Sliding Modes in Control and Optimization, Communications and Control Engineering Series, Springer, Berlin, Germany, 1992.
10. L. Wu, H. Gao, and C. Wang, "Quasi sliding mode control of differential linear repetitive processes with unknown input disturbance," IEEE Transactions on Industrial Electronics, vol. 58, no. 7, pp. 3059–3068, 2011.
11. L. Wu, D. W. C. Ho, and C. W. Li, "Sliding mode control of switched hybrid systems with stochastic perturbation," Systems and Control Letters, vol. 60, no. 8, pp. 531–539, 2011.
12. L. Wu and D. W. C. Ho, "Sliding mode control of singular stochastic hybrid systems," Automatica, vol. 46, no. 4, pp. 779–783, 2010.
13. L. Wu and J. Lam, "Sliding mode control of switched hybrid systems with time-varying delay," International Journal of Adaptive Control and Signal Processing, vol. 22, no. 10, pp. 909–931, 2008.
14. L. Wu and W. X. Zheng, "Passivity-based sliding mode control of uncertain singular time-delay systems," Automatica, vol. 45, no. 9, pp. 2120–2127, 2009.
15. L. Wu, X. Su, and P. Shi, "Sliding mode control with bounded ${L}_{2}$ gain performance of Markovian jump singular time-delay systems," Automatica, vol. 48, no. 8, pp. 1929–1933, 2012.
16. J. A. Burton and A. S. I. Zinober, "Continuous approximation of VSC," International Journal of Systems Science, vol. 17, no. 6, pp. 875–885, 1986.
17. J. J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, NJ, USA, 1991.
18. A. Levant, "Higher-order sliding modes, differentiation and output-feedback control," International Journal of Control, vol. 76, no. 9-10, pp. 924–941, 2003.
19. A. Levant, "Homogeneity approach to high-order sliding mode design," Automatica, vol. 41, no. 5, pp. 823–830, 2005.
20. S. Laghrouche, M. Smaoui, F. Plestan, and X. Brun, "Higher order sliding mode control based on optimal approach of an electropneumatic actuator," International Journal of Control, vol. 79, no. 2, pp. 119–131, 2006.
21. S. Laghrouche, F. Plestan, and A. Glumineau, "Higher-order sliding mode control based on integral sliding mode," Automatica, vol. 43, no. 3, pp. 531–537, 2007.
22. Y. B. Shtessel, I. A. Shkolnikov, and A. Levant, "Smooth second-order sliding modes: missile guidance application," Automatica, vol. 43, no. 8, pp. 1470–1476, 2007.
23. M. Djemai, J.-P. Barbot, and K. K. Busawon, "Designing R-sliding mode control using smooth iterative manifolds," Mediterranean Journal of Measurement and Control, vol. 4, no. 2, pp. 86–93, 2008.
24. F. Plestan, A. Glumineau, and S. Laghrouche, "A new algorithm for high-order sliding mode control," International Journal of Robust and Nonlinear Control, vol. 18, no. 4-5, pp. 441–453, 2008.
25. S. Laghrouche, M. Smaoui, F. Plestan, and X. Brun, "Higher order sliding mode control based on optimal approach of an electropneumatic actuator," International Journal of Control, vol. 79, no. 2, pp. 119–131, 2006.
26. A. Girin and F. Plestan, "A new experimental test bench for a high performance double electropneumatic actuator system," in Proceedings of the American Control Conference (ACC '09), pp. 3488–3493, Saint-Louis, Mo, USA, June 2009.
27. A. Levant, "Sliding order and sliding accuracy in sliding mode control," International Journal of Control, vol. 58, no. 6, pp. 1247–1263, 1993.
28. F. Plestan, E. Moulay, A. Glumineau, and T. Cheviron, "Robust output feedback sampling control based on second-order sliding mode," Automatica, vol. 46, no. 6, pp. 1096–1110, 2010.
29. F. Plestan, Y. Shtessel, V. Brégeault, and A. Poznyak, "New methodologies for adaptive sliding mode control," International Journal of Control, vol. 83, no. 9, pp. 1907–1919, 2010.
30. H.-T. Yau and C.-L. Chen, "Chattering-free fuzzy sliding-mode control strategy for uncertain chaotic systems," Chaos, Solitons & Fractals, vol. 30, no. 3, pp. 709–718, 2006.
31. H. F. Ho, Y. K. Wong, and A. B. Rad, "Adaptive fuzzy sliding mode control with chattering elimination for nonlinear SISO systems," Simulation Modelling Practice and Theory, vol. 17, no. 7, pp. 1199–1210, 2009.
32. G. Xu, T. Li, X. Zhang, and L. Zhang, "Modeling and motion analysis of a missile based on mass moment control," Journal of Harbin Engineering University, vol. 32, no. 12, pp. 1588–1593, 2011.
33. K. Peng, K. Y. Lum, E. K. Poh, and D. Li, "Flight control design using hierarchical dynamic inversion and quasi-steady states," Tech. Rep. AIAA 2008-6491, pp. 1–19, Honolulu, Hawaii, USA, 2008.
34. A. Ishigame, T. Furukawa, S. Kawamoto, and T. Taniguchi, "Sliding mode controller design based on fuzzy inference for nonlinear systems," IEEE Transactions on Industrial Electronics, vol. 40, no. 1, pp. 64–70, 1993.
35. S.-W. Kim and J.-J. Lee, "Design of a fuzzy controller with fuzzy sliding surface," Fuzzy Sets and Systems, vol. 71, no. 3, pp. 359–367, 1995.
36. C. Houck, J. Joines, and M. Kay, The Genetic Algorithm Optimization Toolbox (GAOT) for Matlab 5, 1996.
37. W. R. Chadwick, "Augmentation of high-altitude maneuver performance of a tailed-controlled missile using lateral thrust," Tech. Rep. AD-A328973, 1995.
Parallelogram (2)
November 18th 2009, 05:37 AM #1
In the figure, ABCD is a parallelogram, AE = BF, and CG is parallel to FH. Prove that
(1) EFCD is a parallelogram
(2) GHFC is a parallelogram
(3) parallelogram GHFC and ABCD are equal in area
My work :
(1) BC // AD , AE=BF , AD=BC
$\angle ADE =\angle CBF$(corresponding angle)
$\triangle ADE$ congruent to $\triangle BCF$
$\angle DEA =\angle CFB$
DE// CF , AF// DC
Hence, EFCD is a parallelogram.
(2) DH // CF , CG//HF
Hence , GHFC is a parallelogram .
(3) not really sure .
Hello thereddevils
Comments below.
In the figure, ABCD is a parallelogram, AE = BF, and CG is parallel to FH. Prove that
(1) EFCD is a parallelogram
(2) GHFC is a parallelogram
(3) parallelogram GHFC and ABCD are equal in area
My work :
(1) BC // AD , AE=BF , AD=BC
$\angle ADE =\angle CBF$(corresponding angle) No. You mean $\color{red}\angle DAE=\angle CBF$
$\triangle ADE$ congruent to $\triangle BCF$
$\angle DEA =\angle CFB$
DE// CF , AF// DC
Hence, EFCD is a parallelogram. Apart from that this is fine.
(2) DH // CF , CG//HF
Hence , GHFC is a parallelogram . OK.
(3) not really sure . Use the result from your previous post: parallelograms on the same base and between the same parallels are equal in area. Look first at CFDE and CFGH (same base CF); then at CDEF and CDAB (same base CD).
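Spelling out the hint as a chain of equal areas (using the quoted base-and-parallels result, and assuming from the figure that each pair below lies between the same parallels):

$$[GHFC] = [CFDE] \quad(\text{same base } CF), \qquad [CDEF] = [CDAB] \quad(\text{same base } CD),$$

hence $[GHFC] = [EFCD] = [ABCD]$, which is (3).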
November 18th 2009, 06:14 AM #2
Work done by a force field (Vector Fields)
April 22nd 2007, 12:37 AM
Find the work done by the force field:
F = (y^2 cos x + z^3)i + (2y sin x - 4)j + (3x z^2 + 2)k
in moving a particle along the curve:
x = arcsin(t)
y = 1-2t
z = 3t - 1, 0 < t < 1
April 22nd 2007, 03:15 AM
Calculate the integral $\int_c \mathbf{F}\cdot\mathrm{d}\mathbf{r}$,
where F is the force applied and c the underlying curve.
April 22nd 2007, 04:47 PM
Am I supposed to go through and find a potential for F? I'm kinda lost on this one.
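For a sanity check, the line integral can be evaluated numerically. Since $dx/dt = 1/\sqrt{1-t^2}$ is singular at $t = 1$, substituting $u = \arcsin t$ gives a smooth integrand on $[0, \pi/2]$, and Simpson's rule converges quickly (a pure-Python sketch, not from the thread):

```python
import math

def F(x, y, z):
    """The force field from the question."""
    return (y**2 * math.cos(x) + z**3,
            2*y*math.sin(x) - 4,
            3*x*z**2 + 2)

def integrand(u):
    # With t = sin(u), the path becomes x = u, y = 1 - 2 sin u, z = 3 sin u - 1,
    # so dr/du = (1, -2 cos u, 3 cos u) and the integrand is smooth.
    x, y, z = u, 1 - 2*math.sin(u), 3*math.sin(u) - 1
    f1, f2, f3 = F(x, y, z)
    return f1 + f2*(-2*math.cos(u)) + f3*(3*math.cos(u))

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, n))
    return s * h / 3

work = simpson(integrand, 0.0, math.pi / 2)
print(work)  # ~27.5664, i.e. 15 + 4*pi
```

(The field is in fact conservative: the potential $\varphi = y^2\sin x + xz^3 - 4y + 2z$ has gradient F, so the work is $\varphi(\pi/2,-1,2)-\varphi(0,1,-1) = 15+4\pi \approx 27.566$, agreeing with the quadrature.)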
Graphics of Confidence Interval using R
08-04-2010 12:12 PM #1
The idea is to make on R a graph like this one (g1) http://img87.imageshack.us/img87/613/59617892.jpg for any vector of data.
So I have some difficulties with the statistics and with the programming in R.
Just to the moment the nearest graph (g2) that I've found is this one:
On the code, y are annual maximum speeds of winds on a meteorological station in Brazil.
The problem is that in the graph at the indicated link ("g1") there is a straight line drawn as a full line, and the legend shows that it corresponds to the GEV-I (Gumbel) fit.
This raises the following doubt: the straight line in "g2" comes from a normal distribution (?) because of the term dist="norm". So how can I change this line in the code to the correct fit,
and how can I make the values on the ordinate of the "g2" graph show the y values of speed (like "g1") rather than a reduced variable?
Thanks for the help.
Is the predict function what you are looking for, as in the following example:
cl<-predict(a, newdata=xx, interval="confidence", level=0.9)
matplot(xx,cl, lty=c(1,2,2), type="l", col=c(1,2,2), ylab="predicted y")
points(x, y, pch=22, bg="white")
legend("bottomright", legend=c("Data", "Fit","90% Confidence limit"), pch=c(22,-1,-1),lty=c(-1, 1,2), col=c(1,1,2), bg="white")
Thank you for helping me.
That's an advance!
I've changed the data to an artificial one made by the function rgumbel(10,35,8) using the package "evd", for testing:
cl<-predict(a, newdata=xx, interval="confidence", level=0.9)
matplot(xx,cl, lty=c(1,2,2), type="l", col=c(1,2,2), ylab="predicted y")
points(x, y, pch=22, bg="white")
legend("bottomright", legend=c("Data", "Fit","90% Confidence limit"), pch=c(22,-1,-1),lty=c(-1, 1,2), col=c(1,1,2), bg="white")
In this case, I see that a normal distribution delimits the confidence band, which was part of the objective, although the black line in the middle doesn't pass through the data points, unlike in the link I sent in my first message (g1).
My objective is that the line called "Fit" be the ideal Gumbel fit, whose distribution function with parameters loc = a and scale = b is
G(z) = exp{-exp[-(z-a)/b]}
for all real z, where b > 0.
If you understood my English and if it's possible, could you help me?
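For what it's worth, the straight "Fit" line in plots like g1 comes from inverting that distribution function. Writing $p$ for the exceedance probability and $y = -\ln(-\ln(1-p))$ for the reduced variate, the Gumbel return level is

$$ z_p = a - b\,\ln\bigl(-\ln(1-p)\bigr) = a + b\,y, $$

which is linear in $y$; that is why a correct Gumbel fit appears as a straight line when plotted against the reduced variate.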
I am not sure that I can help you much further with this. The vglm and guplot functions in the VGAM package and the gum.fit function in the ismev package might be useful. It might also help if you could provide a reference for the source of the image showing the wind speed versus reduced variate.
The best option I have found is to use the rlplot function in the extRemes package. This uses a GEV distribution. The Gumbel distribution is the special case with shape = 0, and the GEV fit results with the Gumbel data give a shape value that is close to zero.
fit <- gev.fit(y)
rlplot( fit, ci=0.1, add.ci=TRUE)
head(fit$vals) # shape is non-zero
Another option is the gum.diag function in the ismev package, but this is not as useful because it produces 4 plots and you cannot change the confidence intervals. It is possible to hack the code to produce just the return level plot with the gum.rl function (as shown below), but changing the confidence intervals would be more complex.
gum.rl(c(fit1$mle,0), fit1$cov, fit1$data)
Thank you very much.
The article with the g1 graph is on http://www.sendspace.com/file/q4scs2
how to find/define eigenvectors as a continuous function of matrix?
I asked this (with background) here http://stats.stackexchange.com/questions/38494/principal-component-analysis-bootstrap-and-probability-of-eigenvalue-collision
but did not really get any answers. See that post for the background.
Let $D$ be some open set in the plane, say. It is not really important where the set $D$ sits, but it should not be only a line/curve. Suppose we have defined a continuous function on $D$ $$ f \colon D \mapsto \text{Sym}^n $$ where $\text{Sym}^n$ is the set of (real) symmetric $n \times n$ matrices. How can I define the eigenvectors of $f(x), x \in D$ as a continuous function on $D$? How can I calculate this? And how can I deal with eigenvalue collisions? A simple example clarifying this point (and defined on a curve): Let $$ f(t) =\left( \begin{matrix} 1+t & 0 \cr 0 & 1-t \end{matrix}\right) $$ Then the largest eigenvalue is $$ \lambda_1(t) = 1+ |t| $$ but the eigenvector corresponding to the largest eigenvalue cannot be defined as a continuous function: $$ v_1(t) = \begin{cases} e_2 & t\le 0 \cr e_1 & t > 0 \end{cases} $$ So what I want is to look at the two eigenvalue functions $1+t, 1-t$ and follow the eigenvectors corresponding to each one, which obviously can be done in a continuous (constant!) manner.
ADDED after the answer by Anthony Quas:
Is it possible to give some further conditions, under which a solution is possible? Differentiability? Or, if the matrices are realizations of some random field of matrices, can something be said
about the probability some continuous selection is possible?
I fixed the latex by replacing "\\" with "\cr" FWIW – Anthony Quas Dec 11 '12 at 23:50
The example given by Anthony Quas reveals a phenomenon discussed in Kato's book Perturbation Theory for Linear Differential Operators. The point is the following:
• If the symmetric matrix depends analytically upon one parameter, then you can follow analytically its eigenvalues and its eigenvectors. Notice that this requires sometimes that the
eigenvalues cross. When this happens, the largest eigenvalues, as the maximum of smooth functions, is only Lipschitz.
• On the contrary, if the matrix depends upon two or more parameters, the eigenvalues are at most Lipschitz when crossing happens, and the eigenvectors cannot be chosen continuously.
A typical example is $$(s,t)\mapsto\begin{pmatrix} s & t \\\\ t & -s \end{pmatrix},$$ whose eigenvalues are $\pm\sqrt{s^2+t^2}$. Up to the shift by $I_2$, Quas' example is just a piecewise $C^1$ section of this two-parameter example, and it inherits its lack of continuous selection of eigenvectors.
• Likewise, if analyticity is dropped, a $C^\infty$-example by Rellich shows that eigenvectors need not be continuous functions of a single parameter. Of course, Quas' example can be recast as a $C^\infty$ one, by flattening the parametrisation at $t=0$, say by replacing $t$ by $s$ such that $t={\rm sgn}(s)\cdot e^{-1/s^2}$.
Side remark: Kato's result is only local. If the domain is not simply connected, it could happen that a global continuous selection of eigenvectors is not possible. This is classical
in the exemple above if you restrict to the unit circle $s^2+t^2=1$; then the eigenvalues $\pm1$ are global continuous functions, but when following an eigenvector, it experiences a
flip $v\mapsto -v$ as one makes one turn.
Regarding the circle example: Here the problem is fixed if one allows for eigenvectors with complex entries. Over $\mathbb C$ (and with domain an open subset of $\mathbb R^2$) a
continuous selection is possible as soon as the eigenvalues are always distinct. – Leonel Robert Dec 12 '12 at 15:01
Attention: There is a 1-parameter $2\times 2$-symmetric matrix which is $C^\infty$, the eigenvalues are $C^\infty$, but the eigenvector cannot be chosen continuously: See Example
7.7 (due to Rellich) in Dmitri Alekseevky, Andreas Kriegl, Mark Losik, Peter W. Michor: Choosing roots of polynomials smoothly, Israel J. Math 105 (1998), p. 203-233. Page 21 in:
mat.univie.ac.at/~michor/roots.pdf. – Peter Michor Dec 12 '12 at 18:19
@Peter. You're right. I edit my answer. – Denis Serre Dec 13 '12 at 10:34
Unfortunately it can get worse than your example. There can be no continuously choosable eigenvectors at all.
Here's an example: Consider the family of matrices $$ g(t)=\begin{cases} \begin{pmatrix}1+t&0\cr 0&1-t\end{pmatrix}&\text{for $t<0$}; \cr \begin{pmatrix}1&t\cr t&1\end{pmatrix}&\text{for
$t\ge 0$.} \end{cases} $$ Then the eigenvectors are $\begin{pmatrix}1\cr 0\end{pmatrix}$ and $\begin{pmatrix}0\cr1\end{pmatrix}$ for $t<0$, and $\begin{pmatrix}1\cr1\end{pmatrix}$ and $\begin{pmatrix}1\cr-1\end{pmatrix}$ for $t>0$.
Obviously there's no continuous selection possible.
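The jump in this example can also be checked numerically. Here is a small pure-Python sketch (not from the thread): it uses the closed-form top eigenvector of a symmetric $2\times2$ matrix and evaluates it just left and right of $t=0$; the $10^{-6}$ offsets and the degeneracy tolerance are arbitrary choices.

```python
import math

def top_eigvec(a, b, c):
    """Unit eigenvector for the larger eigenvalue of [[a, b], [b, c]]."""
    lam = (a + c) / 2.0 + math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    v = (b, lam - a)                   # annihilated by the first row of (A - lam*I)
    if abs(v[0]) + abs(v[1]) < 1e-15:  # degenerate case: b = 0 and lam = a
        v = (1.0, 0.0)
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def g(t):
    """The family above, as the (a, b, c) entries of [[a, b], [b, c]]."""
    return (1 + t, 0.0, 1 - t) if t < 0 else (1.0, t, 1.0)

left = top_eigvec(*g(-1e-6))   # ~(0, 1): the e2 direction
right = top_eigvec(*g(+1e-6))  # ~(0.707, 0.707): the (1, 1) direction
print(left, right)
```

However small the offsets are made, the two eigenvectors stay roughly 45 degrees apart, which is the discontinuity at $t=0$.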
See the following article which contains an overview of available results:
Andreas Kriegl, Peter W. Michor, Armin Rainer: Denjoy-Carleman differentiable perturbation of polynomials and unbounded operators. Integral Equations and Operator Theory 71,3 (2011),
407-416. (pdf)
In particular, if your mapping $f$ is Hölder continuous (of class $C^{0,\alpha}$ for $0<\alpha\le1$) then the eigenvalues can be parameterized in a $C^{0,\alpha}$ way also.
EDIT: I describe the results which seem most relevant to your question:
Let $t\mapsto A(t)$ for $t\in T$ be a parameterized family of unbounded operators in a Hilbert space $H$ with common domain of definition and with compact resolvent.
If $t\in T=\mathbb R$ and all $A(t)$ are self-adjoint then the following holds:
(A) (Rellich) If $A(t)$ is real analytic in $t\in \mathbb R$, then the eigenvalues and the eigenvectors of $A(t)$ can be parameterized real analytically in $t$.
(B) If $A(t)$ is quasianalytic of class $C^Q$ in $t\in \mathbb R$, then the eigenvalues and the eigenvectors of $A(t)$ can be parameterized $C^Q$ in $t$.
If $t\in T=\mathbb R^n$ and all $A(t)$ are normal then the following holds:
(L) If $A(t)$ is real analytic or quasianalytic of class $C^Q$ in $t\in \mathbb R^n$, then for each $t_0\in \mathbb R^n$ and for each eigenvalue $z_0$ of $A(t_0)$, there exist a neighborhood
$D$ of $z_0$ in $\mathbb C$, a neighborhood $W$ of $t_0$ in $\mathbb R^n$, and a finite covering $\{\pi_k : U_k \to W\}$ of $W$, where each $\pi_k$ is a composite of finitely many mappings
each of which is either a local blow-up along a real analytic or $C^Q$ submanifold or a local power substitution, such that the eigenvalues of $A(\pi_k(s))$, $s \in U_k$, in $D$ and the
corresponding eigenvectors can be parameterized real analytically or $C^Q$ in $s$. If $A$ is self-adjoint, then we do not need power substitutions.
(M) If $A(t)$ is real analytic or quasianalytic of class $C^Q$ in $t\in \mathbb R^n$, then for each $t_0\in \mathbb R^n$ and for each eigenvalue $z_0$ of $A(t_0)$, there exist a neighborhood
$D$ of $z_0$ in $\mathbb C$ and a neighborhood $W$ of $t_0$ in $\mathbb R^n$ such that the eigenvalues of $A(t)$, $t \in W$, in $D$ and the corresponding eigenvectors can be parameterized by
functions which are special functions of bounded variation (SBV) in $t$.
(1) If the eigenvalues of $f$ always have multiplicity 1, then indeed you can make a continuous selection of eigenvectors. Say $\lambda_j(z)$, with $j=1,2,\dots,n$, denote the eigenvalues of
$f(z)$ arranged in increasing order. For each $j$ choose a continuous $\alpha_j(t,z)$, with $t\in \mathbb R$ and $z\in D$ such that $$\alpha_j(\lambda_j(z),z)=1\mbox{ and }\alpha_j(\lambda_
{i}(z),z)=0\mbox{ if }i\neq j.$$ Using functional calculus let us define $$p_j(z)=\alpha_j(f_j(z),z).$$ These are the rank one orthogonal projections onto the eigenspaces of $f(z)$. Since
$D$ is an open subset of the plane, it only has trivial complex line bundles. So for each $p_j(z)$ there exists a continuous section $v_j(z)\in \mathbb C^n$ such that $p_j(z)v_j(z)=v_j(z)$.
(2) The set of functions $f$ with all eigenvalues of multiplicity 1 is dense (and $G_\delta$) among the bounded continuous functions on $D$ with values in the $n\times n$ selfadjoint matrices. Here it is crucial that $D$ is at most of dimension 2. This is proven in "Density of the self-adjoint elements with finite spectrum in an irrational rotation C∗-algebra. Math. Scand. 67 (1990), 73–86.", by Choi and Elliott.
The gist of their argument is this: the set of $n\times n$ self-adjoint matrices such that at least two eigenvalues agree is a finite union of submanifolds of the set of all self-adjoint
matrices, where each submanifold has codimension at least 3. This makes it that a suitable perturbation of $f$ can avoid such finite union of submanifolds provided that the domain of $f$ has
dimension at most 2.
I think this might salvage the situation: Assume that you do have a path parameterized by $t$ and that it is stationary for a while any time that the multiplicity of an eigenvalue increases.
In the case $n=2$ the eigenvectors are (usually) perpendicular so we could represent them as four points on the unit circle separated by $\frac{\pi}{2}$ radians . Imagine time as an axis so
the eigenvectors form four black paths traveling up a cylinder. Any time the matrix becomes a scalar multiple of the identity matrix you suddenly have the solid unit circle. As long as this
is a band of some width you can arrange to leave the band in the appropriate configuration.
With larger $n$ and even more general matrices I think it would be about the same. So one point is that it can not be a function of merely where you are, but also where you were and where you
will be next.
A related problem is constructive versions of the Fundamental Theorem of Algebra (cribbed from a paper by Fred Richman, which I recommend). Let $\mathbb{A} \subset \mathbb{C}$ be the field of algebraic numbers (roots of polynomials with rational coefficients). Consider degree $n+1$ monic polynomials $z^{n+1}+\sum_0^n a_iz^i$. They can be parameterized by their coefficient vectors $\mathbf{a}=(a_0,a_1,\cdots,a_n) \in \mathbb{A}^n$ and by their "list" of roots $\boldsymbol\alpha=(\alpha_0,\cdots,\alpha_n) \in \mathbb{A}^n$, ordered somehow. There is an obvious continuous
map (uniformly bicontinuous on bounded sets) in one direction $\boldsymbol\alpha \to \prod(z-\alpha_i)$ i.e. extract the coefficients using elementary symmetric functions. Is there a
continuous mapping in the other? Read the paper (which gets into Dedikind cuts, extension to all of $\mathbb{C}$ and other matters.) As I recall, the correct target in the space of roots
should instead be multisets of algebraic numbers with an appropriate metric. A motivating example is $z^2-b$ with $b$ real. For $b$ close to $0$ we have a sudden shift from the two roots
spanning a horizontal line to a vertical one.
With respect to your first paragraph, this particular way of moving the goal posts seems to be a special case of part (L) of Peter Michor's answer. – Igor Khavkine Dec 15 '12 at 20:39
The eigenvalues are roots of the characteristic polynomial, whose coefficients are a continuous function of the parameter. Therefore we have that the eigenvalues and parameter lie on some
submanifold. By examining the parametrization, things can be said about how nice this manifold is.
I don't understand how your use of the word "submanifold" makes sense. If you have a continuous family of diagonalizable matrices parametrized by a space $D$, then the spectrum is a
continuous map from $D$ to the symmetric power $\operatorname{Sym}^n \mathbb{C}$ that doesn't necessarily lift to a continuous map with target $\mathbb{C}^n$. – S. Carnahan♦ Dec 12 '12
at 7:51
So consider the map $\det(M-\lambda I)$. This is a continuous map from $\mathbb{C}\times D$ to $\mathbb{C}$, with nice properties of the derivative. Apply the Inverse Function Theorem. –
Watson Ladd Dec 12 '12 at 20:08
Are you sure about "nice properties of the derivative"? – Charles Staats Dec 13 '12 at 14:24
[FOM] On Physical Church-Turing Thesis
Toby Ord toby.ord at philosophy.oxford.ac.uk
Tue Feb 10 16:47:20 EST 2004
On 8 Feb 2004, at 22:55, Dmytro Taranovsky wrote:
> In this posting I discuss the physical Church thesis and its relation
> to
> physical theories. The physical Church thesis claims that every
> physically realizable machine is recursive but for the ability to pick
> random numbers.
Do you have any references to precise formalizations of this kind? In
my experience it is quite difficult to distinguish random non-recursive
computation from the non-random kind. For example, consider the
(admittedly hand-waving) case of a universe in which recursive
computation is physically possible, as is fair coin tossing and every
computation formed by combining them. In this case, one could imagine a
machine that takes n as input and flips a coin, adding one to n if it
is heads and leaving n unchanged otherwise. This computes a
non-recursive function (with probability one), yet I imagine you would
not want to count it as non-recursive computation.
One way to approach this is to say that it is not replicable, but
consider the alternate formulation in which we have one machine that
does the coin flipping and produces an ever growing table of flips.
Then other machines can use this table as many times as they want,
computing the same non-recursive function every time. It is not obvious
exactly how to rule out this class of computations.
My preferred option is to speak of 'harnessable' computations, saying
that 'All harnessable (deterministic) physical computation is
recursive' and holding that a computation is harnessable iff the
function it computes can be known by some precise finite description.
By this I would include the halting function and all functions finitely
specified in first order arithmetic and many others, but I don't know
how to make this more precise (as it needs to be).
I do think that there are separate hypotheses regarding:
a recursive universe
a recursive universe + a certain type of randomness
a non-recursive universe
but I don't know how these can be precisely formalized.
> I suspect that all widely used physical theories that do not suffer
> from
> internal inconsistencies or absence of a rigorous formulation admit a
> mathematical proof whose physical interpretation is impossibility of
> non-recursive computation.
I am not so sure and know that some suspect the opposite. For example,
Newtonian mechanics was recently shown to be much stranger than we
suspected, with infinite travel in finite time and so on. There is also
a paper on doing infinite computation via a succession of increasingly
small machines (E. Brian Davies, Building Infinite Machines, BJPS).
Perhaps you would say that Newtonian Mechanics is not internally
consistent, but in this case, the set of all widely used physical
theories that satisfy your constraints is no doubt empty at the moment.
(This is not a problem per se, but worth pointing out).
> (If the theory, like general relativity,
> depends on external parameters or fields, recursiveness of the
> parameters is part of the hypotheses of such theorems).
One would want to justify such hypotheses...
> It would be interesting to hear from a specialist whether the Standard
> Model allows only recursive computation. If so, then any hope of
> physical realization of non-recursive computation would require
> currently unknown physical phenomena (or non-recursiveness in
> parameters
> such as the fine structure constant).
Tien Kieu is such a specialist whom I have worked with extensively on
this topic and cannot see any reason at all as to why QM would prohibit
non-recursive computation. Indeed, he has suggested how it can allow
the solving of hilbert's tenth problem (and thus all Sigma_1 and Pi_1
problems) via the Quantum Adiabatic Theorem. There has been a technical
criticism of Kieu's method, but he believes it to be satisfactorily
answered. See Kieu, 'Computing the Noncomputable'.
> Although the physical Church Thesis may appear to be an empirical
> claim,
> a demonstration of its negation would require not only observations of
> non-recursive physical phenomena, but also a philosophical argument
> that
> the observable pattern is in fact non-recursive rather than being
> produced by a complicated and unknown but recursive rule set. For
> example, if some physical constant turns out to equal 0# (a
> non-recursive real number), a philosophical argument is needed to show
> that the constant is actually 0# rather than a different real number
> masquerading as 0#: Comparison of data with theory is possible only if
> one can compute the theoretical prediction, which is problematic if the
> theory makes non-recursive predictions.
As has been pointed out in another current post, this problem (as put)
is no different from knowing whether or not a given computer (with
unbounded memory) is computing multiplication. In all such cases, an
infinite amount of functions are compatible with our (at any stage)
finite data. I don't see any new problem here and think that we should
(and would) be entirely prepared to accept the halting problem answers
given by a machine that our physics led us to believe could indeed
solve the halting problem.
Also, the same argument can be put in terms of the rationals and an
alleged physical quantity of pi units (such as in the ratio of the
length of a physical circle to its diameter).
> I personally believe that not everything we do and observe is
> recursive.
I am unsure. I don't see any great evidence either way, but think that
there should be more realization that this is one of the great unsolved
questions in physics with potentially very large implications.
There are also many rather bizarre limitations that recursive physics
would place upon the universe. For example, it would mean that no
measurable quantity (in some currently under specified sense) could
grow faster than the busy beaver function or slower than the inverse of
the busy beaver function. It also means that no measurable quantity
could converge to a value faster than the reciprocal of the busy beaver
function or slower than the reciprocal of the inverse of the busy
beaver function. I am currently unaware of any result remotely like
this in physics, but it is the type of thing that falls out for free if
we find that physics is (in some appropriate sense) recursive.
> For example, the search for axioms of set theory can be
> fully successful only if it is non-recursive. The twenty-first
> century,
> with new scientific discoveries and theories, and a revolution in
> knowledge in general, may resolve the status of the physical Church
> thesis.
I hope so.
- IEEE TRANSACTIONS ON SIGNAL PROCESSING , 2002
Cited by 1137 (2 self)
Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system.
Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this
paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods
based on point mass (or “particle”) representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several
variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the
standard EKF through an illustrative example.
, 1999
Cited by 260 (17 self)
Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised
learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking
discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative
model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new
model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and
local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
- IEEE Trans. Inform. Theory , 2002
Cited by 170 (3 self)
Abstract—An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through
a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and
a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood
(ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes. These processes generalize HMPs. New algorithms were
developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related
topics are reviewed in this paper. Index Terms—Baum–Petrie algorithm, entropy ergodic theorems, finite-state channels, hidden Markov models, identifiability, Kalman filter, maximum-likelihood (ML)
estimation, order estimation, recursive parameter estimation, switching autoregressive processes, Ziv inequality. I.
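Many of the estimation problems surveyed reduce to evaluating the likelihood of an observation sequence under a finite-state, finite-alphabet HMP. As a minimal illustration (my own sketch with my own names, not taken from the paper), the classical forward recursion computes this likelihood in time linear in the sequence length:

```python
import numpy as np

def hmm_likelihood(pi, A, B, obs):
    """Forward algorithm: computes P(obs) for a finite-state,
    finite-alphabet HMP with initial distribution pi, transition
    matrix A (A[i, j] = P(state j | state i)) and emission matrix
    B (B[i, k] = P(symbol k | state i))."""
    alpha = pi * B[:, obs[0]]          # joint of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()
```

The same recursion, run together with its backward counterpart, is what the Baum (Baum-Petrie) machinery builds on for ML parameter estimation.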
, 1998
"... Post-crash distributions inferred from S ..."
, 1996
Cited by 156 (7 self)
Linear systems have been used extensively in engineering to model and control the behavior of dynamical systems. In this note, we present the Expectation Maximization (EM) algorithm for estimating
the parameters of linear systems (Shumway and Stoffer, 1982). We also point out the relationship between linear dynamical systems, factor analysis, and hidden Markov models.
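The EM algorithm the note describes alternates Kalman smoothing (the E-step) with closed-form parameter re-estimates (the M-step). As a hedged illustration of the model involved, not the authors' code, and with all names and toy numbers my own, here is the forward (filtering) half of the E-step for a linear-Gaussian state-space model:

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, mu0, V0):
    """Forward (filtering) pass for the linear-Gaussian state-space model
        x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
        y_t = C x_t + v_t,      v_t ~ N(0, R).
    Returns filtered state means and covariances, the quantities the
    E-step of EM for linear systems is built from."""
    mus, Vs = [], []
    mu_pred, V_pred = mu0, V0
    for t in range(len(y)):
        # measurement update: Kalman gain, then correct the prediction
        S = C @ V_pred @ C.T + R
        K = V_pred @ C.T @ np.linalg.inv(S)
        mu = mu_pred + K @ (y[t] - C @ mu_pred)
        V = V_pred - K @ C @ V_pred
        mus.append(mu)
        Vs.append(V)
        # time update: predict the next state
        mu_pred = A @ mu
        V_pred = A @ V @ A.T + Q
    return np.array(mus), np.array(Vs)
```

A full E-step would add the backward (Rauch-Tung-Striebel) smoothing recursion; the filtered moments above are exactly what that recursion consumes.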
- Neural Computation , 1998
Cited by 142 (6 self)
We introduce a new statistical model for time series which iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This
model combines and generalizes two of the most widely used stochastic time series models -- hidden Markov models and linear dynamical systems -- and is closely related to models that are widely used
in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs et al., 1991) to its fully dynamical version, in which both expert and
gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact Expectation Maximization (EM) algorithm
cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log likelihood and makes use of both the forward-backward recursions for hidden Markov models
and the Kalman filter recursions for linear dynamical systems. We tested the algorithm both on artificial data sets and on a natural data set of respiration force from a patient with sleep apnea. The
results suggest that variational approximations are a viable method for inference and learning in switching state-space models.
, 1992
Cited by 126 (14 self)
this article then is to develop methodology for modeling the nonnormality of the ut, the vt, or both. A second departure from the model specification ( 1 ) is to allow for unknown variances in the
state or observational equation, as well as for unknown parameters in the transition matrices Ft and Ht. As a third generalization we allow for nonlinear model structures; that is, X t = ft(Xt-l) q-
Ut, and Yt = ht(xt) + vt, t = 1, ..., n, (2) whereft( ) and ht(. ) are given, but perhaps also depend on some unknown parameters. The experimenter may wish to entertain a variety of error
distributions. Our goal throughout the article is an analysis for general state-space models that does not resort to convenient assumptions at the expense of model adequacy
- Adaptive Processing of Sequences and Data Structures , 1998
Cited by 124 (0 self)
Bayesian networks are directed acyclic graphs that represent dependencies between variables in a probabilistic model. Many time series models, including the hidden Markov models (HMMs) used in speech
recognition and Kalman filter models used in filtering and control applications, can be viewed as examples of dynamic Bayesian networks. We first provide a brief tutorial on learning and Bayesian
networks. We then present some dynamic Bayesian networks that can capture much richer structure than HMMs and Kalman filters, including spatial and temporal multiresolution structure, distributed
hidden state representations, and multiple switching linear regimes. While exact probabilistic inference is intractable in these networks, one can obtain tractable variational approximations which
call as subroutines the forward-backward and Kalman filter recursions. These approximations can be used to learn the model parameters...
- In Advances in Neural Information Processing Systems 13 , 2001
Cited by 110 (14 self)
Variational approximations are becoming a widespread tool for Bayesian learning of graphical models. We provide some theoretical results for the variational updates in a very general family of
conjugate-exponential graphical models. We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian learning. Applying these results
to the Bayesian analysis of linear-Gaussian state-space models we obtain a learning procedure that exploits the Kalman smoothing propagation, while integrating over all model parameters. We
demonstrate how this can be used to infer the hidden state dimensionality of the state-space model in a variety of synthetic problems and one real high-dimensional data set.
- Journal of Machine Learning Research , 2004
Cited by 104 (7 self)
The advantages of discriminative learning algorithms and kernel machines are combined with generative modeling using a novel kernel between distributions. In the probability product kernel, data
points in the input space are mapped to distributions over the sample space and a general inner product is then evaluated as the integral of the product of pairs of distributions. The kernel is
straightforward to evaluate for all exponential family models such as multinomials and Gaussians and yields interesting nonlinear kernels. Furthermore, the kernel is computable in closed form for
latent distributions such as mixture models, hidden Markov models and linear dynamical systems. For intractable models, such as switching linear dynamical systems, structured mean-field
approximations can be brought to bear on the kernel evaluation. For general distributions, even if an analytic expression for the kernel is not feasible, we show a straightforward sampling method to
evaluate it. Thus, the kernel permits discriminative learning methods, including support vector machines, to exploit the properties, metrics and invariances of the generative models we infer from
each datum. Experiments are shown using multinomial models for text, hidden Markov models for biological data sets and linear dynamical systems for time series data.
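To illustrate the closed-form evaluation claimed above for exponential-family models, here is the Gaussian case with rho = 1 (the "expected likelihood" instance of the probability product kernel). The function name is mine; the identity used is the standard one for the integral of a product of two Gaussian densities:

```python
import numpy as np

def gaussian_product_kernel(mu1, S1, mu2, S2):
    """Probability product kernel with rho = 1 between two multivariate
    Gaussians:
        k = integral of N(x; mu1, S1) * N(x; mu2, S2) dx
          = N(mu1; mu2, S1 + S2),
    i.e. a Gaussian density in the difference of the means with the
    covariances added."""
    d = mu1.shape[0]
    S = S1 + S2
    diff = mu1 - mu2
    norm = (2 * np.pi) ** (-d / 2) * np.linalg.det(S) ** -0.5
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(S) @ diff)
```

By construction the kernel is symmetric in its two arguments, which is what lets it be used directly inside a support vector machine.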
Faculty Stories: Patricia Hale, Ph.D.
Title: Professor
Department: Mathematics
Office: 8-206
Phone: (909) 869-3492
Email: phale@csupomona.edu
Course: MAT 191
Required Textbook: Excursions in Modern Mathematics
Author(s): Tannenbaum
ISBN: 978-0-321-57621-7
Textbook is available at a reduced cost to students from CourseSmart via the Bronco Bookstore.
Original Cost at Bronco Bookstore: $137.50
Cost of online edition/material: $61.00
Savings per Student: $76.50
Course: MAT 214
Required Textbook: Multivariable Calculus Early Transcendentals
Author(s): Stewart
ISBN: 978-0-495-54428-9
Textbook is available at a reduced cost to students from CourseSmart via the Bronco Bookstore.
Original Cost at Bronco Bookstore: $164.10
Cost of online edition/material: $88.00
Savings per Student: $76.10
Course: MAT 330
Required Textbook: Foundations of Geometry
Author(s): Venema
ISBN: 978-0-13-208327-0
Textbook is available at a reduced cost to students from CourseSmart via the Bronco Bookstore.
Original Cost at Bronco Bookstore: $86.65
Cost of online edition/material: $48.00
Savings per Student: $38.65
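The savings figures above are straightforward arithmetic (print price minus online price); a quick sketch verifying them, with the data transcribed from the listings:

```python
# Print vs. CourseSmart online prices, in USD, from the course listings above.
books = {
    "MAT 191": (137.50, 61.00),  # Excursions in Modern Mathematics
    "MAT 214": (164.10, 88.00),  # Multivariable Calculus Early Transcendentals
    "MAT 330": (86.65, 48.00),   # Foundations of Geometry
}

# Savings per student is simply the difference, rounded to cents.
savings = {course: round(print_cost - online_cost, 2)
           for course, (print_cost, online_cost) in books.items()}
```

A student taking all three courses with the online editions would save $191.25 in total.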
Lecturer: Dr. Axel Großmann,
See the official Course Summary page for more details.
Time and Place
Fridays, 2nd DS (from 9.20am until 10.50am), in GRU 350
The lectures commence on 18th November. The time and place for the tutorials are still to be decided.
Recommended text:
Christos H. Papadimitriou. Computational Complexity, Addison Wesley, 1994
Other useful texts:
• Michael Sipser. Introduction to the Theory of Computation, PWS, 1997
• Michael R. Garey and David S. Johnson. Computers and Intractability, W. H. Freeman, 1979
List of Topics Arranged by Weeks
Please note this arrangement is tentative. Along with each topic, there is a reference to the relevant chapters or sections of the main text.
Week 1 (18th Nov.) 2 lectures, 1 tutorial
Introduction (Slides)
Languages for specification and the need for formalism. Algorithms, polynomials and polynomial time algorithms. (Ch. 1)
• Graph reachability
• Maximum flow and matching
• Travelling Salesman problem
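The first topic in the list admits a one-screen polynomial-time solution, which may help fix ideas before the formal treatment (a sketch of my own, not course material):

```python
from collections import deque

def reachable(adj, s, t):
    """Decide graph REACHABILITY (is there a path from s to t?) by
    breadth-first search over an adjacency-list dict. Runs in time
    O(|V| + |E|), i.e. polynomial time; the problem is in fact in NL."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```

Contrast this with the Travelling Salesman problem, for which no polynomial-time algorithm is known.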
Week 3 (2nd Dec.) 2 lectures, 1 tutorial
Turing machines as formalised algorithms. Complexity classes. Space complexity and nondeterminism (Ch. 2, except for 2.6)
• TM basics
• TM as algorithms
• TM with multiple strings
• Linear speedup
• Space bounds
• Nondeterministic machines
Week 5 (16th Dec.) 3 lectures, 1 tutorial
Universal machines and undecidability. (Ch. 3)
Propositional logic. The complexity of satisfiability and validity. (Sec. 4.1 and 4.2)
• Universal TM
• Halting problem
• More undecidability
• Boolean logic
• Boolean expressions
• Satisfiability and validity (except for Horn clauses)
Week 8 (20th Jan.) 3 lectures, 1 tutorial
Complexity classes. The Hierarchy theorem. (Sec. 7.1 and 7.2)
Reductions and completeness. (Sec. 8.1 and 8.2)
• Complexity classes
• Hierarchy theorem
• Reductions
• Completeness (Cook's theorem)
Written exam on all courses within the Foundations modules
Details about the Examination
There will be a single 120 minute written exam in the examination period. In this exam, there will be two tracks:
• Logic and Science of Computational Logic
• Complexity Theory, Computer Algebra, and Science of Computational Logic
with 50% of the total points going to each of Logic and Science of Computational Logic, and 25% of the total points going to each of Complexity Theory and Computer Algebra.
For example, if the total number of points is 100, then there will be exercises as follows:
• 50 points for Logic
• 50 points for Science of Computational Logic
• 25 points for Complexity theory
• 25 points for Computer Algebra
These exercises will be combined into the two tracks.
On the difference between merging knowledge bases and combining them
Results 1 - 10 of 34
, 2002
"... We consider the problem of merging several belief bases in the presence of integrity constraints. ..."
- Synthese , 2006
Cited by 36 (8 self)
The aggregation of individual judgments on logically interconnected propositions into a collective decision on the same propositions is called judgment aggregation. Literature in social choice and
political theory has claimed that judgment aggregation raises serious concerns. For example, consider a set of premises and a conclusion where the latter is logically equivalent to the former. When
majority voting is applied to some propositions (the premises) it may give a different outcome than majority voting applied to another set of propositions (the conclusion). This problem is known as
the discursive dilemma (or paradox). The discursive dilemma is a serious problem since it is not clear whether a collective outcome exists in these cases, and if it does, what it is like. Moreover,
the two suggested escape-routes from the paradox — the so-called premise-based procedure and the conclusion-based procedure — are not, as I will show, satisfactory methods for group decision-making.
In this paper I introduce a new aggregation procedure inspired by an operator defined in artificial intelligence in order to merge belief bases. The result is that we do not need to worry about
paradoxical outcomes, since these arise only when inconsistent collective judgments are not ruled out from the set of possible solutions. ∗The title of this paper in an earlier version was
“Collective decision-making without paradoxes:
- In Proceedings of the Fifth European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU’99), LNAI 1638 , 1999
Cited by 18 (5 self)
Merging operators aim at defining the beliefs/goals of a group of agents from the beliefs/goals of each member of the group. Whenever an agent of the group has preferences over the possible results
of the merging process (i.e., the possible merged bases), she can try to rig the merging process by lying on her true beliefs/goals if this leads to a better merged base according to her point of
view. Obviously, strategy-proof operators are highly desirable in order to guarantee equity among agents even when some of them are not sincere. In this paper, we draw the strategy-proof landscape
for many merging operators from the literature, including model-based ones and formula-based ones. Both the general case and several restrictions on the merging process are considered. 1.
- In Proc. of IJCAI’05 , 2005
Cited by 17 (3 self)
In this paper, two families of merging operators are considered: quota operators and Gmin operators. Quota operators rely on a simple idea: any possible world is viewed as a model of the result of
the merging when it satisfies “sufficiently many” bases from the given profile (a multi-set of bases). Different interpretations of the “sufficiently many” give rise to specific operators. Each Gmin
operator is parameterized by a pseudo-distance and each of them is intended to refine the quota operators (i.e., to preserve more information). Quota and Gmin operators are evaluated and compared
along four dimensions: rationality, computational complexity, strategy-proofness, and discriminating power. Those two families are shown as interesting alternatives to the formula-based merging
operators (which selects some formulas in the union of the bases). 1
, 2001
Cited by 17 (6 self)
The importance of belief merging is reflected by the abundance of the literature about it for the last years. In the following, a model for belief merging based on distances is introduced; many
merging operators already pointed out so far can be recovered as specific instances of this model. We investigate the computational aspects of such distance-based operators and give two general
results showing that the complexity of inference for them is at the first level of the polynomial hierarchy (under very weak assumptions). Then some specific distance-based operators are considered
and their complexity is identified. Finally, distancebased merging operators are investigated from the logical point of view.
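To make the family of operators concrete, here is a minimal sketch of one specific distance-based merging operator, Hamming (Dalal) distance aggregated by sum. The choice of distance and aggregation function is mine for illustration; the model discussed above covers many other instances:

```python
from itertools import product

def merge_sum(bases, n_vars):
    """Sketch of a distance-based merging operator over n_vars
    propositional variables. Interpretations are 0/1 tuples; each base is
    given as the set of its models. The distance from an interpretation to
    a base is the minimal Hamming distance to any of its models, and the
    aggregate distance to the profile is the SUM over bases. The merged
    base is the set of interpretations minimizing this aggregate."""
    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def dist(w, base):
        return min(hamming(w, m) for m in base)

    worlds = list(product((0, 1), repeat=n_vars))
    score = {w: sum(dist(w, b) for b in bases) for w in worlds}
    best = min(score.values())
    return {w for w in worlds if score[w] == best}
```

Note the exponential enumeration of interpretations, which matches the complexity results discussed above: inference for such operators sits at the first level of the polynomial hierarchy.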
- Journal of Applied Non-Classical Logics , 2004
Cited by 15 (9 self)
ABSTRACT. We propose in this paper a new family of belief merging operators, based on a game between sources: until a coherent set of sources is reached, at each round a contest is organized
to find out the weakest sources, and those sources then have to concede (weaken their point of view). This idea leads to numerous new interesting operators (depending on the exact meaning of “weakest”
and “concede”, which gives the two parameters for this family) and opens new perspectives for belief merging. Some existing operators are also recovered as particular cases. Those operators can be
seen as a special case of Booth’s Belief Negotiation Models [BOO 02], but the achieved restriction forms a consistent family of merging operators that is worth studying on its own.
- In Proceedings of the Ninth Conference on Principles of Knowledge Representation and Reasoning , 2004
Cited by 13 (5 self)
Merging operators aim at defining the beliefs/goals of a group of agents from the beliefs/goals of each member of the group. Whenever an agent of the group has preferences over the possible results
of the merging process (i.e. the possible merged bases), she can try to rig the merging process by lying on her true beliefs/goals if this leads to a better merged base according to her point of
view. Obviously, strategy-proof operators are highly desirable in order to guarantee a fair merging process even when some of them are not sincere. In fact, when strategy-proofness is not guaranteed,
it may be questioned whether the result of the merging process actually represents the beliefs/goals of the group. In this paper, the strategy-proof landscape for many merging operators from the
literature, including model-based ones and formula-based ones, is drawn. Both the general case and several restrictions on the merging process (among others, the number of agents and the presence of
integrity constraints), are considered.
- J PHILOS LOGIC( 2011) 40: 239–270 , 2011
Cited by 10 (2 self)
Belief merging aims at combining several pieces of information coming from different sources. In this paper we review the works on belief merging of propositional bases. We discuss the relationship
between merging, revision, update and confluence, and some links between belief merging and social choice theory. Finally we mention the main generalizations of these works in other logical
- Normative Multi-agent Systems, volume 07122 of Dagstuhl Seminar Proceedings. Internationales Begegnungsund Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl , 2007
Cited by 9 (4 self)
Abstract. The paper discusses ten philosophical problems in deontic logic: how to formally represent norms, when a set of norms may be termed ‘coherent’, how to deal with normative conflicts, how
contraryto-duty obligations can be appropriately modeled, how dyadic deontic operators may be redefined to relate to sets of norms instead of preference relations between possible worlds, how various
concepts of permission can be accommodated, how meaning postulates and counts-as conditionals can be taken into account, and how sets of norms may be revised and merged. The problems are discussed
from the viewpoint of input/output logic as developed by van der Torre & Makinson. We argue that norms, not ideality, should take the central position in deontic semantics, and that a semantics that
represents norms, as input/output logic does, provides helpful tools for analyzing, clarifying and solving the problems of deontic logic.
, 2004
Cited by 8 (5 self)
In this paper, a new method for merging multiple inconsistent knowledge bases in the framework of possibilistic logic is presented. We divide the fusion process into two steps: one is called the
splitting step and the other is called the combination step. Given several inconsistent possibilistic knowledge bases (i.e. the union of these possibilistic bases is inconsistent), we split each of
them into two subbases according to the upper free degree of their union, such that one subbase contains formulas whose necessity degrees are less than the upper free degree and the other contains
formulas whose necessity degrees are greater than the upper free degree.
Physics Forums - View Single Post - Calculus Based Physics course
“Another area you may want to spend time thinking about is the level of mathematics:”
This is a real issue. My supervisor was asking me about this and I said that the students could be enrolled in a calculus course while taking the calculus based physics. But now I realize that this
could be a problem because students might find it difficult to apply the calculus concepts that they just began to learn. This problem could be solved if we make the calculus course a pre-requisite.
What is your experience?
Thanks for your reply.
My experience/opinion is not to turn the Physics course into a Math course. In practice, that means teaching the algebra-based course using *at most* basic trigonometry and quadratic equations. For
the calculus-based course, I would limit the math to simple derivatives and integrals, but keep the general level of the course at algebra. I don't have the students do any derivations on exams, and
I provided them with a formula sheet last year.
But realize, this decision is up to you- just make sure you communicate to the students on the first day what your expectations are, what they should know, and what they can do if they don't. For
example, you may want to quickly solve a simple kinematics problem using calculus for the class, and tell them they are responsible for being able to work at that level if they want an 'A'.
Enjoy the experience!
Bel Tiburon, CA Algebra Tutor
Find a Bel Tiburon, CA Algebra Tutor
...I took a number of courses in the subject. I've used the concepts during my years as a programmer and have tutored many students in the subject. I have a strong background in linear algebra
and differential equations.
49 Subjects: including algebra 1, algebra 2, calculus, physics
...My role as an instructor also allowed me to pilot a program for 5th grade math students struggling with word problems. I also spent a year in DC working with 9-10th graders on all subjects as
a tutor, but particularly spent a lot of time working on math because I have a strong background from hi...
16 Subjects: including algebra 1, algebra 2, English, special needs
...As a speaking/pronunciation coach, reading tutor, and writing tutor, I've worked with native English speakers as well as English learners to improve grammar, style, and communication of ideas.
I edited papers for journal submission for a professor at one of the top universities in Bangkok as wel...
34 Subjects: including algebra 1, algebra 2, reading, Spanish
...I have also edited several texts and assisted many fellow students in polishing their papers. I strive to help students master the mechanics of writing so that they can let their unique voices
show in their work. I match the richness and diversity of material presented in social studies courses with my own background in advanced history, politics and theology coursework.
32 Subjects: including algebra 1, algebra 2, chemistry, reading
...Reading can also take us places we may never actually be able to visit, and see worlds that don't exist. Any way you look at it, reading is a skill that one must have to get along in this
world. Learning to write entails many factors, from proper grammar to writing a well-organized essay.
12 Subjects: including algebra 1, algebra 2, English, reading
Highest level question
• one year ago
The question is?
The Birch and Swinnerton-Dyer Conjecture by A. Wiles A polynomial relation f(x, y) = 0 in two variables defines a curve C0. If the coefficients of the polynomial are rational numbers then one can
ask for solutions of the equation f(x, y) = 0 with x, y ∈ Q,in other words for rational points on the curve. The set of all such points is denoted C0(Q). If we consider a non-singular projective
model C of the curve then topologically C is classified by its genus,and we call this the genus of C0 also. Note that C0(Q) and C(Q) are either both finite or both infinite. Mordell conjectured, and
in 1983 Faltings proved,the following deep result Theorem [F1]. If the genus of C0 is greater than or equal to two, then C0(Q) is finite. As yet the proof is not effective so that one does not
possess an algorithm for finding the rational points. (There is an effective bound on the number of solutions but that does not help much with finding them.) The case of genus zero curves is much
easier and was treated in detail by Hilbert and Hurwitz [HH]. They explicitly reduce to the cases of linear and quadratic equations. The former case is easy and the latter is resolved by the
criterion of Legendre. In particular for a non-singular projective model C we find that C(Q) is non-empty if and only if C has p-adic points for all primes p,and this in turn is determined by a
finite number of congruences. If C(Q) is non-empty then C is parametrized by rational functions and there are infinitely many rational points. The most elusive case is that of genus 1. There may or
may not be rational solutions and no method is known for determining which is the case for any given curve. Moreover when there are rational solutions there may or may not be infinitely many. If a
non-singular projective model C has a rational point then C(Q) has a natural structure as an abeliangroup with this point as the identity element. In this case we call C an elliptic curve over Q.
(For a history of the development of this idea see [S]). In 1922 Mordell ([M]) proved that this group is finitely generated,thus fulfilling an implicit assumption of Poincar´e. Theorem. If C is an
elliptic curve over Q then C(Q) Z r ⊕ C(Q) tors for some integer r ≥ 0, where C(Q) tors is a finite abelian group. The integer r is called the rank of C. It is zero if and only if C(Q) is finite.
We can find an affine model for an elliptic curve over Q in Weierstrass form

    C: y^2 = x^3 + ax + b

with a, b ∈ Z. We let ∆ denote the discriminant of the cubic and set

    N_p := #{solutions of y^2 ≡ x^3 + ax + b mod p},    a_p := p − N_p.

Then we can define the incomplete L-series of C (incomplete because we omit the Euler factors for primes p | 2∆) by

    L(C, s) := ∏_{p ∤ 2∆} (1 − a_p p^{−s} + p^{1−2s})^{−1}.

We view this as a function of the complex variable s, and this Euler product is then known to converge for Re(s) > 3/2. A conjecture going back to Hasse (see the commentary on 1952(d) in [We1]) predicted that L(C, s) should have a holomorphic continuation as a function of s to the whole complex plane. This has now been proved ([W], [TW], [BCDT]). We can now state the millennium prize problem:
Conjecture (Birch and Swinnerton-Dyer). The Taylor expansion of L(C, s) at s = 1 has the form L(C, s) = c(s − 1)^r + higher order terms, with c ≠ 0 and r = rank(C(Q)).

In particular this conjecture asserts that L(C, 1) = 0 ⇔ C(Q) is infinite.

Remarks. 1. There is a refined version of this conjecture. In this version one has to define Euler factors at primes p | 2∆ to obtain the
completed L-series, L*(C, s). The conjecture then predicts that

    L*(C, s) ∼ c*(s − 1)^r  with  c* = |Ш_C| R_∞ w_∞ ∏_{p|2∆} w_p / |C(Q)_tors|^2.

Here |Ш_C| is the order of the Tate-Shafarevich group of the elliptic curve C, a group which is not known in general to be finite, although it is conjectured to be so. It counts the number of equivalence classes of homogeneous spaces of C which have points in all local fields. The term R_∞ is an r × r determinant whose matrix entries are given by a height pairing applied to a system of generators of C(Q)/C(Q)_tors. The w_p's are elementary local factors and w_∞ is a simple multiple of the real period of C. For a precise definition of these factors see [T1] or [T3]. It is hoped that a proof of the conjecture would also yield a proof
of the finiteness of Ш_C.

2. The conjecture can also be stated over any number field as well as for abelian varieties; see [T1]. Since the original conjecture was stated, much more elaborate conjectures concerning special values of L-functions have appeared, due to Tate, Lichtenbaum, Deligne, Bloch, Beilinson and others; see [T2], [Bl] and [Be]. In particular these relate the ranks of groups of algebraic cycles to the order of vanishing (or the order of poles) of suitable L-functions.

3. There is an analogous conjecture for elliptic curves over function fields. It has been proved in this case by M. Artin and J. Tate [T1] that the L-series has a zero of order at least r, but the conjecture itself remains unproved. In the function field case it is now known to be equivalent to the finiteness of the Tate-Shafarevich group Ш ([T1], [Mi] corollary 9.7).

4. A proof of the conjecture in the stronger form would give an effective means of finding generators for the group of rational points. Actually one only needs the integrality of the term |Ш_C| in the expression for L*(C, s) above, without any interpretation as the order of the Tate-Shafarevich group. This was shown by Manin [Ma] subject to the condition that the elliptic curves were modular, a property which is now known for all elliptic curves by [W], [TW], [BCDT]. (A modular elliptic curve is one which occurs as a factor of the Jacobian of a modular curve.)

Early History

Problems on curves of genus 1 feature prominently in Diophantus' Arithmetica. It is easy to see that a straight
line meets an elliptic curve in three points (counting multiplicity), so that if two of the points are rational then so is the third.¹ In particular, if a tangent is taken to a rational point then it meets the curve again in a rational point. Diophantus implicitly uses this method to obtain a second solution from a first. However, he does not iterate this process, and it is Fermat who first realizes that one can sometimes obtain infinitely many solutions in this way. Fermat also introduced a method of 'descent' which sometimes permits one to show that the number of solutions is finite or even zero.

One very old problem concerned with rational points on elliptic curves is the congruent number problem. One way of stating it is to ask which rational integers can occur as the areas of right-angled triangles with rational length sides. Such integers are called congruent numbers. For example, Fibonacci was challenged in the court of Frederic II with the problem for n = 5, and he succeeded in finding such a triangle. He claimed moreover that there was no such triangle for n = 1, but the proof was fallacious and the first correct proof was given by Fermat. The problem dates back to Arab manuscripts of the 10th century (for the history see [We2] chapter 1, §VII and [Di] chapter XVI). It is closely related to the problem of determining the rational points on
the curve C_n: y^2 = x^3 − n^2 x. Indeed,

    C_n(Q) is infinite ⇐⇒ n is a congruent number.

Assuming the Birch and Swinnerton-Dyer conjecture (or even the weaker statement that C_n(Q) is infinite ⇔ L(C_n, 1) = 0) one can show that any n ≡ 5, 6, 7 mod 8 is a congruent number, and moreover Tunnell has shown, again assuming the conjecture, that for n odd and square-free

    n is a congruent number ⇐⇒ #{x, y, z ∈ Z : 2x^2 + y^2 + 8z^2 = n} = 2 × #{x, y, z ∈ Z : 2x^2 + y^2 + 32z^2 = n},

with a similar criterion if n is even ([Tu]). Tunnell proved the implication left to right unconditionally with the
help of the main theorem of [CW] described below.

¹ This was apparently first explicitly pointed out by Newton.

Recent History

It was the 1901 paper of Poincaré [P] which started the modern interest in the theory of rational points on curves and which first raised questions about the minimal number of generators of C(Q). The conjecture itself was first stated in the form we have given in the early 1960's (see [BS]). In the intervening years the theory of L-functions of elliptic curves (and other varieties) had been developed by a number of authors, but the conjecture was the first link between the L-function and the structure of C(Q). It was found experimentally using one of the early computers, EDSAC, at Cambridge. The first general result proved was for elliptic curves with complex multiplication. (The curves with complex multiplication fall into a finite number of families including {y^2 = x^3 − Dx} and {y^2 = x^3 − k} for varying D, k ≠ 0.) This theorem was proved in 1976 and is due to Coates and Wiles [CW]. It states that if C is a curve with complex multiplication and L(C, 1) ≠ 0 then C(Q) is finite. In 1983 Gross and Zagier showed that if C is a modular elliptic curve and L(C, 1) = 0 but L′(C, 1) ≠ 0, then an earlier construction of Heegner actually gives a rational point of infinite order. Using new ideas together with this result, Kolyvagin showed in 1990 that for modular elliptic curves, if L(C, 1) ≠ 0 then r = 0, and if L(C, 1) = 0 but L′(C, 1) ≠ 0 then r = 1. In the former case Kolyvagin needed an analytic hypothesis which was confirmed soon afterwards; see [Da] for the history of this and for further references. Finally, as noted in remark 4 above, it is now known that all elliptic curves over Q are modular, so that we now have the following result:

Theorem. If L(C, s) ∼ c(s − 1)^m with c ≠ 0 and m = 0 or 1, then the conjecture holds.

In the cases where m = 0 or 1 some more precise results on c (which of course
depends on the curve) are known by work of Rubin and Kolyvagin.

Rational Points on Higher Dimensional Varieties

We began by discussing the diophantine properties of curves, and we have seen that the problem of giving a criterion for whether C(Q) is finite or not is an issue only for curves of genus 1. Moreover, according to the conjecture above, in the case of genus 1, C(Q) is finite if and only if L(C, 1) ≠ 0. In higher dimensions, if V is an algebraic variety, it is conjectured (see [L]) that if we remove from V (the closure of) all subvarieties which are images of P^1 or of abelian varieties, then the remaining open variety W should have the property that W(Q) is finite. This has been proved in the case where V is itself a subvariety of an abelian variety by Faltings [F2]. This suggests that to find infinitely many points on V one should look for rational curves or abelian varieties in V. In the latter case we can hope to use methods related to the Birch and Swinnerton-Dyer conjecture to find rational points on the abelian variety. As an example of this, consider the conjecture of Euler from 1769 that x^4 + y^4 + z^4 = t^4 has no non-trivial solutions.
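Euler's conjecture is in fact false, and the counterexample quoted in the next sentence can be checked directly with exact integer arithmetic:

```python
# Check Elkies' counterexample to Euler's conjecture x^4 + y^4 + z^4 = t^4.
# Python integers have arbitrary precision, so the comparison is exact.
x, y, z, t = 2682440, 15365639, 18796760, 20615673
assert x**4 + y**4 + z**4 == t**4
print("identity verified")
```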
By finding a curve of genus 1 on the surface and a point of infinite order on this curve, Elkies [E] found the solution

    2682440^4 + 15365639^4 + 18796760^4 = 20615673^4.

His argument shows that there are infinitely many solutions to Euler's equation.

In conclusion, although there has been some success in the last fifty years in limiting the number of rational points on varieties, there are still almost no methods for finding such points. It is to be hoped that a proof of the Birch and Swinnerton-Dyer conjecture will give some insight concerning this general problem.

References
[BCDT] Breuil, C., Conrad, B., Diamond, F., Taylor, R., On the modularity of elliptic curves over Q: wild 3-adic exercises, preprint.
[Be] Beilinson, A., Notes on absolute Hodge cohomology, Applications of algebraic K-theory to algebraic geometry and number theory, Contemp. Math. 55 (1986), 35–68.
[Bl] Bloch, S., Height pairings for algebraic cycles, J. Pure Appl. Algebra 34 (1984), 119–145.
[BS] Birch, B., Swinnerton-Dyer, H., Notes on elliptic curves II, Journ. reine u. angewandte Math. 218 (1965), 79–108.
[CW] Coates, J., Wiles, A., On the conjecture of Birch and Swinnerton-Dyer, Invent. Math. 39 (1977), 223–251.
[Da] Darmon, H., Wiles' theorem and the arithmetic of elliptic curves, in Modular Forms and Fermat's Last Theorem, pp. 549–569, Springer (1997).
[Di] Dickson, L., History of the theory of numbers, vol. II.
[E] Elkies, N., On A^4 + B^4 + C^4 = D^4, Math. Comput. 51, no. 184 (1988), pp. 825–835.
[F1] Faltings, G., Endlichkeitssätze für abelsche Varietäten über Zahlkörpern, Invent. Math. 73, no. 3 (1983), pp. 549–576.
[F2] Faltings, G., The general case of S. Lang's conjecture, Perspec. Math., vol. 15, Academic Press, Boston (1994).
[GZ] Gross, B., Zagier, D., Heegner points and derivatives of L-series, Invent. Math. 84 (1986), pp. 225–320.
[HH] Hilbert, D., Hurwitz, A., Über die diophantischen Gleichungen vom Geschlecht Null, Acta Mathematica 14 (1890), pp. 217–224.
[K] Kolyvagin, V., Finiteness of E(Q) and Ш(E, Q) for a class of Weil curves, Math. USSR Izv. 32 (1989), pp. 523–541.
[L] Lang, S., Number Theory III, Encyclopædia of Mathematical Sciences, vol. 60, Springer-Verlag, Heidelberg (1991).
[M] Mordell, L., On the rational solutions of the indeterminate equations of the third and fourth degrees, Proc. Cambridge Phil. Soc. 21 (1922–23), 179–192.
[Ma] Manin, Y., Cyclotomic fields and modular curves, Russian Mathematical Surveys, vol. 26, no. 6, pp. 7–78 (1971).
[Mi] Milne, J., Arithmetic Duality Theorems, Academic Press, Inc. (1986).
[P] Poincaré, H., Sur les propriétés arithmétiques des courbes algébriques, Jour. Math. Pures Appl. 7, Ser. 5 (1901).
[S] Schappacher, N., Développement de la loi de groupe sur une cubique, Séminaire de Théorie des Nombres, Paris 1988/89, Progress in Mathematics 91 (1991), pp. 159–184.
[T1] Tate, J., On the conjectures of Birch and Swinnerton-Dyer and a geometric analog, Séminaire Bourbaki 1965/66, no. 306.
[T2] Tate, J., Algebraic cycles and poles of zeta functions, in Arithmetical Algebraic Geometry, Proceedings of a conference at Purdue University (1965).
[T3] Tate, J., The arithmetic of elliptic curves, Inv. Math. 23 (1974), pp. 179–206.
[Tu] Tunnell, J., A classical diophantine problem and modular forms of weight 3/2, Invent. Math. 72 (1983), pp. 323–334.
[TW] Taylor, R., Wiles, A., Ring-theoretic properties of certain Hecke algebras, Ann. of Math., vol. 141, no. 3 (1995), 553–572.
[W] Wiles, A., Modular elliptic curves and Fermat's Last Theorem, Ann. Math. 141 (1995), pp. 443–551.
[We1] Weil, A., Collected Papers, Vol. II.
[We2] Weil, A., Basic Number Theory, Birkhäuser, Boston (1984).
O M G... I um... ah... yeah...............
if you answer and can give valid evidence you get a million dollars I am serious it's a millennium prize problem
and you're asking here, why?
good point
| {"url":"http://openstudy.com/updates/50d2727de4b052fefd1d8592","timestamp":"2014-04-20T21:10:57Z","content_type":null,"content_length":"56673","record_id":"<urn:uuid:0b6e0b66-8d19-44b5-a22e-c14306aac0bf>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
example of special lagrangian submanifold
Are there any examples of a real analytic Riemannian manifold that cannot be isometrically embedded as a special Lagrangian submanifold of a Calabi-Yau manifold?
peter hara
Tags: dg.differential-geometry, sg.symplectic-geometry
1. If the question is "Are there examples of compact real-analytic Riemannian manifolds that cannot be isometrically embedded as a special Lagrangian submanifold of a compact Calabi-Yau
manifold?", then the answer is "yes".
2. If the question is "Are there known, explicit examples of compact real-analytic Riemannian manifolds that cannot be isometrically embedded as a special Lagrangian submanifold of a
compact Calabi-Yau manifold?", then the answer is "probably".
3. If the question is "Are there known, explicit examples of compact real-analytic Riemannian manifolds for which a proof is known that they cannot be isometrically embedded as a special
Lagrangian submanifold of a compact Calabi-Yau manifold?", then the answer is "no" (to my knowledge).
For the first question, just note that, already for dimension 2, the space of compact Calabi-Yau surfaces is a finite-dimensional space, and the metrics that can be realized on compact
complex curves in such a Calabi-Yau fall into a countable union of finite dimensional families. (Remember that special Lagrangian surfaces in a Calabi-Yau are complex curves in a different
Calabi-Yau metric in the canonical $S^2$-family of Calabi-Yau metrics.) Thus, the set of such realizable metrics, even on the $2$-sphere, constitutes a countable union of finite dimensional
families. This could never account for all of the real-analytic metrics on the $2$-sphere. Thus, some example exists, though we don't know one explicitly.
For the second question, consider the fact that it is highly unlikely that the induced metric on any complex curve in a Calabi-Yau surface has constant Gaussian curvature. The 'reason' is
that most (non-flat) Ricci-flat Kahler metrics contain no complex curves with constant Gaussian curvature. It would be remarkable indeed if one of the Ricci-flat Kahler metrics on a
(non-flat) compact 4-manifold had such a curve. In particular, I regard it as highly likely that the standard round metric on the $2$-sphere cannot be isometrically embedded as a complex
curve in any compact Calabi-Yau surface.
My answer to the third question is just an affirmation of my ignorance.
A remark about the local story: peter h asked about what I would call the 'local case', i.e., whether a real analytic Riemannian manifold can be isometrically embedded as a special
Lagrangian submanifold in some Calabi-Yau, with no assumptions about completeness of the ambient manifold. In particular, he raised the question for surfaces.
Now, in the case of a real-analytic metric on a Riemann surface, the answer would be 'yes', according to a paper in 2000 by D. Kaledin, "Hyperkaehler structures on total spaces of
holomorphic cotangent bundles", which is available on the arXiv (arXiv:alg-geom/9710026v1). (It's 100 pages, and I don't claim that I have read it, I'm just pointing out that it is there.)
The main theorem of this paper is that, given any real-analytic Kahler manifold $M$, there exists a hyperKahler metric on a neighborhood of the $0$-section of the cotangent bundle $T^\ast M$
that is compatible with the natural complex and holomorphic structures on $T^\ast M$ and that induces the original metric on the $0$-section.
When the (real) dimension of $M$ is $2$, this would apply to show that $M$ is isometrically imbedded as a complex curve in a Calabi-Yau (complex) surface, and then one can apply the
'rotation trick' to turn this into a special Lagrangian surface when the ambient $4$-manifold is regarded as a complex surface with respect to one of the orthogonal complex structures. Thus,
the case of surfaces would be covered by this theorem.
In fact, this would work in any even dimension when the given real-analytic metric is actually Kahler.
There would remain the question (which I raised in my original paper) of whether every real-analytic metric on $S^4$ can be realized by an embedding as a special Lagrangian submanifold of a
$4$-dimensional Calabi-Yau.
On the contrary, R. Bryant has shown that any closed oriented real analytic 3-dimensional riemannian manifold is the real locus of an antiholomorphic, isometric involution of a Calabi-Yau 3-fold (see http://arxiv.org/abs/math/9912246).
| {"url":"http://mathoverflow.net/questions/90805/example-of-special-lagrangian-submanifold?sort=votes","timestamp":"2014-04-19T05:01:23Z","content_type":null,"content_length":"70947","record_id":"<urn:uuid:0b609907-f455-45d5-8840-97bfa0161293>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample 3rd Midterm
1. An object's velocity is measured as a function of time.
• Graph this function
• How far does it fall?
2. Sketch the graph of F(x) over the interval (a, b).
3. Given
Find formulas for v(t) and y(t). Include your constants of integration. Interpret these constants of integration.
Using these formulas, find when a water balloon which is launched upward with an initial velocity of 50ft/sec from the top of a stadium, which is 40ft above the ground, hits the ground. What is its
velocity when it hits the ground?
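A quick numerical check of the water-balloon question is possible if we assume the formula lost in transcription was the standard constant acceleration a(t) = -32 ft/sec^2 (this value is our assumption; it is the usual gravity constant when distances are in feet). Integrating twice gives v(t) = -32t + v0 and y(t) = -16t^2 + v0*t + y0, where the constants of integration are the initial velocity and initial height:

```python
import math

# Assumption (the "Given" formula was lost in transcription): constant acceleration
# a(t) = -32 ft/sec^2. Integrating twice: v(t) = -32*t + v0, y(t) = -16*t**2 + v0*t + y0,
# where the constants of integration v0, y0 are the initial velocity and initial height.
v0, y0 = 50.0, 40.0    # launched upward at 50 ft/sec from 40 ft above the ground

# Hits the ground when y(t) = 0, i.e. 16 t^2 - 50 t - 40 = 0; take the positive root.
t_hit = (v0 + math.sqrt(v0**2 + 64 * y0)) / 32
v_hit = -32 * t_hit + v0

print(round(t_hit, 2), round(v_hit, 2))   # 3.79 -71.13
```

Under that assumption the balloon lands about 3.79 seconds after launch, moving at roughly 71 ft/sec downward.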
4. The rate at which the population of a town is increasing is 3% per year. i.e.,
where the population, P, is measured in people and the time, t, is measured in years.
Solve this differential equation to get the population as a function of time. If we start measuring the time now (i.e., t = 0 at the present) and the population of the town is 30,000 people now, what
will the population be in 5 years?
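As a check on problem 4: the equation dP/dt = 0.03P separates to give P(t) = P0 * e^(0.03 t), so with P0 = 30,000 at t = 0 the five-year value is immediate:

```python
import math

# dP/dt = 0.03 P  =>  P(t) = P0 * exp(0.03 * t), with P0 = P(0) = 30000.
P0 = 30000
P5 = P0 * math.exp(0.03 * 5)
print(round(P5))   # 34855, i.e. about 34,855 people after five years
```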
5. Find the following indefinite integrals. 5a, 5b, 5c, 5d.
6. Find the following definite integrals.
6a, 6b, 6c, 6d. | {"url":"http://www.sonoma.edu/users/w/wilsonst/Courses/Math_161/smt3/default.html","timestamp":"2014-04-17T06:48:52Z","content_type":null,"content_length":"4365","record_id":"<urn:uuid:7f40fb60-5d2e-4945-8638-f8430a1c68ac>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
Professor A Russell Davies
Position: Honorary Distinguished Professor
Email: daviesr@cardiff.ac.uk
Telephone: +44(0)29 208 75522
Fax: +44(0)29 208 74199
Extension: 75522
Location: M/2.55
Research Interests
Inverse problems in materials characterization.
Numerical inversion of integral transforms.
Mechanics of time-dependent materials.
Mathematical problems in imaging science.
Computational partial differential equations.
Research Group
Autumn Semester
MA0122 Algebra I
Administrative Duties
Financial Matters/Financial Plan
Management of School
Staff Appraisal and Development
Strategic Plan/Annual Report
Teaching Loads
Chair of the School Management Committee
Member of School Admissions Committee
Member of School Research Committee
Member of School Staff/Student Panel
Member of School Teaching and Learning Committee
External Funding Since 2000
2000-02: The computation of complex industrial non-Newtonian flows (JREI). EPSRC, £49,800
2000-04: Mathematical and Numerical Modelling of Confectionery Manufacture. Leaf (UK)/EPSRC, £38,500
2000-03: Complex fluids and complex flows (UWS/UWA Platform Grant). EPSRC, £583,000
2001: Resource capital for complex fluids research. HEFCW, £100,000
2002-07: Unilever Research Consultancy. Unilever, £70,000
2004-06: SRIF2 Resource for Computational Fluid Dynamics. HEFCW, £650,000
Major Conference Talks Since 2004
Aug 2004. Relating the Relaxation Spectrum to Wave Dispersion Data. VIIIth International Workshop on Time Dependent Materials, Bled.
Jan 2005. Wobble, Creep and Relaxation: Modelling Materials with Memory. SIAM UK-IE Meeting, Cork.
April 2005. Determining Creep and Relaxation Functions from a Single Experiment. AERC 2005, Grenoble.
October 2005. On Creep and Relaxation. Vth International Conference on Mechanics of Time Dependent Materials. Nagano, Japan.
August 2006. Recent advances in Linear Viscoelasticity. IXth International Workshop on Time Dependent Materials. Portoroz.
Postgraduate Students
Graduated (Since 2000)
G. Bowers
M. al Hodaly
Eman el Aidarous
1969 - BSc Hons Mathematics and Physics, King’s College, London
1971 - MSc (Mathematics), Oxford University, Balliol College
1974 - D Phil (Mathematics), Oxford University, Balliol College
Scholarships - Thomas and Elizabeth Williams Scholar, 1970-73; Balliol College Graduate Award, 1971-1973
Membership of Learned Societies
Institute of Mathematics and its Applications (MIMA, CMath)
London Mathematical Society
British Society of Rheology
1973-75: Atlas Research Fellow in Mathematics, Pembroke College Oxford and SERC Rutherford-Appleton Laboratory
1976-85: Lecturer in Applied Mathematics, The University College of Wales, Aberystwyth
1984: Visiting Fellow, Centre for Mathematical Analysis, Australian National University
1984: Visiting Scientist, Division of Mathematics and Statistics, CSIRO, Canberra
1984-86: Senior Lecturer in Applied Mathematics, UCW, Aberystwyth
1986: Visiting Fellow, Centre for Mathematical Analysis, Australian National University
1986-90: Reader in Mathematics, UCW, Aberystwyth
1990: Visiting Fellow, Centre for Mathematical Analysis, Australian National University
1990-to date: Professor of Mathematics, UCW, Aberystwyth (now UWA)
1990: Visiting Fellow, Department of Mathematics, Melbourne University
1998: Visiting Scientist, CSIRO Mathematical and Information Sciences, Canberra
2000: Visiting Scientist, CSIRO Mathematical and Information Sciences, Canberra
2000-2003: Head of Department of Mathematics, UWA
2006: Visiting Scientist, CSIRO Mathematical and Information Sciences, Canberra
2006-to date: Head of the School of Mathematics, Cardiff University
Editorial Boards and Advisory Committees
Editorial Board: Inverse Problems (1988-92)
Editorial Board: Numerical Methods for Partial Differential Equations, 1992-
Council: British Society of Rheology (1994 - 2000)
Scientific Committee: Smith Institute for Industrial Mathematics and System Engineering, 2000–2006
Scientific Committee: KTN for Industrial Mathematics 2007-
Awards and Honours
Annual Award: British Society of Rheology 2005
President: Society for Industrial and Applied Mathematics, United Kingdom and Ireland Section, 2007-2009
National Physical Laboratory, Teddington
Shell Research Limited, Thornton
RAPRA Technology, Shawbury
BICC Cables, Chester
Devro-Teepak Limited, Moodiesburn
Nestle Research Laboratory, Lausanne
Leaf UK, Southport
Unilever UK, Port Sunlight | {"url":"http://www.cardiff.ac.uk/maths/contactsandpeople/profiles/daviesr.html","timestamp":"2014-04-20T18:33:29Z","content_type":null,"content_length":"19279","record_id":"<urn:uuid:e240c493-5807-40f0-a570-5f50d7bb1b0c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigonometry Vocabulary
2. What is the angle of a triangle called when it's going up?
4. What is the sum of the measure of the angles of any triangle?
5. In what quadrant does the point (-2, -1) lie?
6. An angle that is equal to 90 degrees.
8. Two angles whose sum adds up to 180 degrees.
9. Angles that have the same initial side and the same terminal side are called:
12. Triangles of exactly the same shape but not neccesarily the same size.
14. Two angles whose sum adds up to 90 degrees.
15. What is the amplitude if you were graphing tan, cot, sec, and csc?
17. What are cos and sec & tan and cot?
18. What instrument is used to measure angles?
20. What are equations that are true for all values of the variable for which all expressions are defined?
21. What is the most common unit of measure in angles?
24. Cofunction of csc.
25. What does this mean ∞
28. sin/cos=
30. The ratio of opposite to adjacent. | {"url":"http://www.armoredpenguin.com/crossword/Data/2012.11/0221/02213410.472.html","timestamp":"2014-04-19T17:14:03Z","content_type":null,"content_length":"95486","record_id":"<urn:uuid:646ee42f-f072-4b92-98a7-851ca557d75f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Life and Times of the Central Limit Theorem
The Life and Times of the Central Limit Theorem chronicles the history of the Central Limit Theorem (CLT) from its earliest beginnings to its mature form. The book’s author, William J. Adams, tells
the story from the work of Abraham de Moivre in 1733 to the work of Aleksandr Lyapunov around 1900. Adams includes two expository papers, one by William Feller and another by Lucien le Cam, to cover
the development of the CLT to its final form around 1935. As Feller points out, the “final” form of the theorem was not the end of research around the CLT. However, by 1936 necessary and sufficient
conditions for the CLT were published and in a sense the theorem had reached its final form.
Roughly speaking, the CLT says that the sum of a large number of independent random variables X[1] + X[2] + ... + X[n] has an approximately normal (Gaussian) distribution. This statement begs
numerous questions. What are the requirements on the distributions X[i] that go into the sum in order for the theorem to hold? Is independence necessary? Must the random variables be identically
distributed, and if not, how different can they be? In what sense does the approximation converge? Could a sum approach some other limiting distribution other than the normal distribution? These are
some of the questions that were resolved over the two centuries between the first hints of the CLT and its mature form.
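The flavour of the theorem is easy to reproduce numerically. Below, the exact distribution of a sum of 20 dice (our own illustrative choice) is computed by repeated convolution and compared with the matching normal density at the centre; the two agree to within about one percent:

```python
from fractions import Fraction
import math

die = {k: Fraction(1, 6) for k in range(1, 7)}

def convolve(p, q):
    """Distribution of the sum of two independent discrete random variables."""
    out = {}
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] = out.get(a + b, Fraction(0)) + pa * qb
    return out

# Exact distribution of the sum of 20 dice, built by repeated convolution.
dist = die
for _ in range(19):
    dist = convolve(dist, die)

var = 20 * Fraction(35, 12)                       # variance of the sum; the mean is 70
exact = float(dist[70])                           # exact P(sum = 70)
approx = 1 / math.sqrt(2 * math.pi * float(var))  # normal density at the mean
print(f"exact {exact:.5f}  normal {approx:.5f}")
```

Because the dice probabilities are kept as exact fractions, the total probability sums to exactly 1, so the comparison at the centre is against the true distribution rather than a simulation.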
The following presentation sequence is typical of a contemporary probability course.
1. Introduce the normal distribution as the distribution with density
(2π)^–½ exp(–x^2/2).
2. Present the CLT, proving a special case of the theorem using moment generating functions.
3. Remark that you can approximate binomial probabilities using a normal distribution.
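Step 3 of this sequence, the normal approximation to the binomial, is the content of the de Moivre–Laplace theorem and takes a few lines to demonstrate (the parameters n = 100, p = 1/2, k = 50 are our own illustrative choices):

```python
import math

n, p, k = 100, 0.5, 50
exact = math.comb(n, k) * p**k * (1 - p)**(n - k)   # exact binomial probability
mu, sigma = n * p, math.sqrt(n * p * (1 - p))       # matching normal parameters
approx = math.exp(-((k - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

print(f"{exact:.6f}  {approx:.6f}")   # 0.079589  0.079788
```

The two values differ by about a quarter of a percent at n = 100, and the agreement improves as n grows.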
The historical development was quite different than the sequence above, almost the reverse. The CLT began with the problem of computing binomial probabilities. de Moivre discovered that such
probabilities could be approximated using integrals of the form exp(–x^2) and proved a very special case of the CLT. Only later did anyone think of normalizing exp(–x^2) to form the density of a
probability distribution. And although de Moivre did have the idea of using a generating function for binomial probabilities, the technique of using moment generating functions would not appear until
later. Also, the term “central limit theorem” did not arrive until Pólya so named the theorem in 1920.
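De Moivre's original insight, that binomial probabilities can be approximated by integrals of exp(-x^2), is easy to check numerically. The sketch below (plain Python; the parameter values are illustrative, not taken from the book) compares an exact binomial probability with the de Moivre-Laplace normal approximation:

```python
import math

def binom_pmf(n, k, p):
    """Exact binomial probability P(X = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx(n, k, p):
    """De Moivre-Laplace: the N(np, np(1-p)) density evaluated at k."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return math.exp(-((k - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

n, k, p = 1000, 520, 0.5
exact = binom_pmf(n, k, p)
approx = normal_approx(n, k, p)
# For n this large the two agree to well under one percent.
```

For small n the approximation degrades noticeably, which is part of what drove the two centuries of refinement the book traces.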
Adams concludes what he calls the “early life and middle years” of the CLT with the work of Lyapunov and includes four papers of Lyapunov in an appendix. Lyapunov’s version of the CLT was much closer
to its final form than to its embryonic form due to de Moivre. Perhaps more important than the theorems he proved was the technique he developed, that of characteristic functions.
The latter half of Adams’ book, what he calls “the modern era”, consists of two expository papers: “The fundamental limit theorems in probability” by William Feller, and “The Central Limit Theorem
around 1935” by Lucien le Cam.
Feller’s paper contains some historical detail but is primarily mathematical rather than historical. Also, his paper is not limited to the CLT but is also concerned with a related theorem, the law of
the iterated logarithm. Le Cam’s paper quickly summarizes the entire history of the CLT but as the title implies the paper concentrates on the endgame, the work of William Feller, Paul Lévy, and
Harald Cramér around 1935.
The Life and Times of the Central Limit Theorem is ostensibly the history of one theorem, but it touches on major themes in the development of probability, statistics, and modern analysis. And while
it is ultimately a history book, it contains a generous portion of precise mathematics. Even someone not interested in history would benefit from reading the book, especially Feller's paper, in order to
learn the nuances of various formulations of the CLT.
John D. Cook is a research statistician at M. D. Anderson Cancer Center and blogs daily at The Endeavour. | {"url":"http://www.maa.org/publications/maa-reviews/the-life-and-times-of-the-central-limit-theorem","timestamp":"2014-04-20T16:42:30Z","content_type":null,"content_length":"100670","record_id":"<urn:uuid:5a9953e2-2216-4cbb-b18e-b319b86b02d2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Frequently Asked Questions (FAQ)
What's an ephemeris?
What are orbital elements?
Orbital elements describe a conic (most commonly an ellipse) in inertial space. They also describe an object's state (equivalent to its Cartesian position and velocity) at a specific epoch. Typically
orbital elements are used to express an object's osculating orbit (an orbit tangent to and approximating the actual orbit) at the specified epoch. On this site, we use both Keplerian elements and
so-called comet elements.
Keplerian elements are eccentricity, semimajor axis, mean anomaly, inclination, longitude of the ascending node, and argument of perihelion. In some cases, longitude of perihelion is used instead of
the argument of perihelion. Comet elements are eccentricity, perihelion distance, time of perihelion passage, inclination, longitude of the ascending node, and argument of perihelion.
Osculating orbital elements are often used in two-body propagation to estimate a body's state (position and velocity) at some time other than the epoch. It is important to realize that for some
bodies, especially planetary satellites and comets, such estimates may be grossly in error with respect to the actual orbit. In general, the farther away in time from the epoch, the greater the error.
A complete description of orbital elements and their use in celestial mechanics is beyond the scope of this web site. More information can be found in a number of texts. See the FAQ entry below for
some relevant books.
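To make the two-body propagation step concrete: given Keplerian elements, the main computational task is solving Kepler's equation M = E - e sin E for the eccentric anomaly E. The sketch below uses a standard Newton iteration (the element values at the bottom are hypothetical, chosen only for illustration):

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E (radians) by Newton's method."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def orbit_plane_position(a, e, M):
    """Perifocal (orbit-plane) x, y from semimajor axis, eccentricity, mean anomaly."""
    E = solve_kepler(M, e)
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1.0 - e * e) * math.sin(E)
    return x, y

# Hypothetical elements: a = 1.5 au, e = 0.2, mean anomaly = 60 degrees
x, y = orbit_plane_position(1.5, 0.2, math.radians(60.0))
```

Rotating this orbit-plane position into inertial coordinates uses the remaining three elements (inclination, longitude of the ascending node, argument of perihelion); see the books listed in the FAQ entry below.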
I'm teaching a course on the solar system. Can you help?
I'd like to publish information from your site on my site. Do I need permission?
The short answer is yes. At the very least, we'd be interested in knowing what information you intend to use and how you intend to use it. Ideally, we'd prefer you link from your site directly to
the information on our site. This is particularly true of numerical parameters which may be updated frequently.
Most of the content on our site is covered under
JPL's copyright statement
(toward the bottom of the page). It is also important to understand that use of information from our site does not in any way imply endorsement of that end use.
Do you have equations for computing approximate positions of the planets?
I want to write my own solar system "calculator". Where do I find the relevant equations?
The following books provide fundamental equations used in celestial mechanics.
• "Explanatory Supplement to the Astronomical Almanac", ed. P. K. Seidelmann, 1992, University Science Books.
• "Fundamentals of Astrodynamics", R.R. Bate, D.D. Mueller, J.E. White, 1971, Dover Publications, Inc.
• "Fundamentals of Celestial Mechanics", J.M.A. Danby, 1992, Willmann-Bell.
• "Methods of Orbit Determination for the Micro Computer", D. Boulet, 1991, Willmann-Bell.
• "Orbital Mechanics", J.E. Prussing, B.A. Conway, 1993, Oxford University Press.
• "Orbits for Amateurs with a Microcomputer", D. Tattersfield, 1984, Halsted Press.
• "Spherical Astronomy", R. M. Green, 1985, Cambridge University Press.
• "Vectorial Astrometry", C.A. Murray, 1983, Adam Hilger Ltd.
The above list is not necessarily complete nor intended to imply any endorsement by JPL or Caltech.
How do I find out where my favorite planet is going to be at some specified time?
Use our Horizons system to generate an ephemeris for the planet of interest. If you're interested in its location relative to the local horizon, you should request output of azimuth and elevation.
Elevation indicates height (in degrees) above the horizon, while azimuth indicates the direction (in degrees) relative to true north, increasing clockwise.
Do you provide star-charts, star coordinates, or other stellar data?
Sorry, no. This site does not provide information about stars, galaxies, extra-solar planets, or any other objects outside our solar system.
What is an astronomical unit (au)?
The astronomical unit was redefined as a unit of length exactly equal to 149,597,870,700 meters during the August 2012 General Assembly of the International Astronomical Union (IAU). The astronomical
unit, now denoted with lower case letters (au), is a convenient unit of measure for distance in the Solar System being approximately equal to the average Sun-Earth distance. The average Sun-Earth
distance is not an exact quantity because the orbit of the Earth about the Sun is not exactly elliptical due to changing perturbations by other planets and because general relativity slightly
modifies the elliptical solutions obtained from Newton’s theory of gravity.
Prior to 2012, the astronomical unit was an estimated length such that given the defined mass parameter of the sun (GM in au^3/day^2), a planet orbiting the sun under Newtonian gravity with a
semi-major axis of one au and no other planets perturbing its orbit, would have an orbital period of approximately one year (the Gaussian year, about 365.2569 days).
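Numerically, the pre-2012 convention can be sketched via Kepler's third law. Taking the Gaussian gravitational constant k = 0.01720209895 (so GM_sun = k^2 in au^3/day^2), a body with a = 1 au and no perturbations has period 2*pi/k, the so-called Gaussian year:

```python
import math

k = 0.01720209895      # Gaussian gravitational constant, au^(3/2)/day
GM_SUN = k * k         # solar mass parameter, au^3/day^2

def period_days(a_au):
    """Kepler's third law: T = 2*pi*sqrt(a^3 / GM)."""
    return 2.0 * math.pi * math.sqrt(a_au**3 / GM_SUN)

T = period_days(1.0)   # about 365.2569 days
```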
In 2012, a new definition (IAU 2012 Resolution B2) of the astronomical unit was adopted. The "au" is now a fixed number of meters and the GM value is estimated. This simplifies the ephemeris
development process, aligns it with an updated IAU definition of solar system coordinate time (TDB) and recognizes that the mass parameter of the sun is changing with time.
What is the time scale used for the solar system ephemerides?
Coordinate time is the time used in the development of ephemerides for solar system objects. Under General Relativity, the rate at which actual clocks tick is called proper time. The rate of proper
time depends on the location and motion of the clock, so there is no single proper time for the solar system as a whole. So for representing the motions of the solar system bodies using numerical
integration, a time scale called 'coordinate time' is used, which is not the rate of any physical clock, but a parameter for which the equations of motion are simply expressed.
The integrated ephemerides for solar system bodies are stored as tables of positions and velocities as a function of coordinate time. In order to evaluate a measurement, such as the round-trip light
time from Earth to Mars, or the direction to Saturn as seen from Earth, the proper times of the measurement must be converted to coordinate time before looking up the positions.
Currently the International Astronomical Union (IAU) has defined two coordinate times for the solar system and both are equally accurate. The coordinate time used for the JPL ephemerides, as the
independent variable in the solar-system barycentric relativistic equations of motion, is Barycentric Dynamical Time (TDB) or, in French, Temps Dynamique Barycentrique. This coordinate time is
defined such that in the vicinity of the Earth the difference in coordinate time and international atomic time (TAI) is 32.184 seconds plus a small variation that is less than 3 milliseconds. TAI
differs from Coordinated Universal Time (UTC) by an integer number of seconds (34 as of July 1, 2012), which changes only when leap seconds are added. UTC is the basis for civil time (e.g., Pacific
Standard Time equals UTC - 8 hours).
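The offsets quoted above chain together mechanically. A minimal sketch (the leap-second count is hard-coded to the 2012 value from the text, and the sub-3-millisecond periodic TDB variation is ignored):

```python
TT_MINUS_TAI = 32.184    # seconds, fixed by definition
TAI_MINUS_UTC = 34.0     # leap seconds as of July 1, 2012; changes over time

def utc_to_tdb_offset_seconds():
    """Approximate constant offset TDB - UTC near Earth (mid-2012 epoch)."""
    return TAI_MINUS_UTC + TT_MINUS_TAI

offset = utc_to_tdb_offset_seconds()   # 66.184 seconds
```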
The other solar system coordinate time defined by the IAU is Barycentric Coordinate Time (TCB, Temps-Coordonnée Barycentrique), which differs from TDB by a defined offset and rate. TCB has the
property that infinitely far from the Sun, where the curvature of space-time from the gravity of the Sun goes to zero, TCB and proper time tick at the same rate. This property makes some theoretical
calculations have a simpler form, but causes a fairly large difference in rate between TCB and TAI near the Earth. TCB is not used in the development of JPL’s ephemerides for solar system bodies.
What's the exact value of... <insert your favorite orbital element here>?
To have an exact value, a quantity must be either strictly constant, or else, exactly periodic.
The orbits of the planets are only approximately elliptical; their motions are only approximately periodic; not exactly. Therefore, it doesn't make much sense to ask questions about "exact" Keplerian
(elliptical) elements.
A simple analogy would be to take a pencil and draw a free-hand circle on a piece of paper, going round-and-round a number of times. Then ask, "what is the EXACT radius of that circle?"
It is impossible to give an answer; the curve that you have drawn is not exactly a circle.
One may define an "osculating" radius, for example: the radius of curvature at any given point on the curve. However, this value is exact at that given point only. The value will change for a
different place on the curve, or if averaged over one portion of the curve rather than another.
Which result gives the "exact" answer? None; there is no "exact" radius for the curve.
It's a whole different situation with the JPL ephemerides. We do not use things such as periods, eccentricities, etc. Instead, we integrate the equations of motion in Cartesian coordinates (x,y,z),
and we adjust the initial conditions in order to fit modern, highly accurate measurements of planetary positions. As a result, we are able to produce ephemerides which are far more accurate than
those based upon elliptical elements.
In the analogy above, it could be possible to measure each point of the hand-drawn curve very accurately; however, one still could not give a unique value for the curve's radius.
What about planet X?
People used to think that the orbits of Uranus and Neptune could not be properly fit to the observations (measurements of their positions). Therefore, it was assumed that there was an additional
planet out in the farther reaches of the solar system which was perturbing those planets' motions: i.e., Planet X. It is now known, however, that the orbits of Uranus and Neptune can be adjusted to
the accuracy of the data if done properly (as in the reference below). Thus, no need for Planet X.
Over the past decade or so, a number of bodies have been found out past the orbit of Pluto. In fact, one of them is even bigger than Pluto; so, in some sense, it should be considered to be a planet.
Is it "The Planet X"? No; neither it nor Pluto is close in enough or massive enough to significantly affect the orbits of Uranus or Neptune.
Standish,E.M.: 1993, "Planet X: No Dynamical Evidence in the Optical Observations", Astronomical Journal, 105, no.5, 2000-2006. | {"url":"http://ssd.jpl.nasa.gov/?faq","timestamp":"2014-04-21T14:54:51Z","content_type":null,"content_length":"30333","record_id":"<urn:uuid:df77e089-4a22-4811-9cd7-cbe3fd0e96a0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Any help would be appreciated
1. May 2nd 2013, 12:41 AM #1
2. May 2nd 2013, 12:48 AM #2
Re: Any help would be appreciated
Each of those roads will be the hypotenuse of a right angle triangle. The first triangle has legs 1 km and x km; use Pythagoras to evaluate the length of the hypotenuse. The second triangle
has legs 2 km and (6 - x) km. Use Pythagoras to evaluate the length of that hypotenuse. So the total length of the roads will be the sum of those hypotenuses, then find where the minimum is by
setting the derivative equal to 0.
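Putting numbers to the reply above (legs 1 km and x for the first triangle, 2 km and 6 - x for the second), the total length L(x) = sqrt(1 + x^2) + sqrt(4 + (6 - x)^2) can be minimized numerically as a check; both the calculus and the reflection trick give x = 2 and a total of 3*sqrt(5), about 6.708 km.

```python
import math

def total_length(x):
    """Sum of the two hypotenuses: legs (1, x) and (2, 6 - x)."""
    return math.hypot(1.0, x) + math.hypot(2.0, 6.0 - x)

def golden_section_min(f, lo, hi, iters=200):
    """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c = b - inv_phi * (b - a)
        d = a + inv_phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

x_min = golden_section_min(total_length, 0.0, 6.0)   # converges to 2.0
best = total_length(x_min)                           # 3*sqrt(5), about 6.7082
```

The reflection argument corresponds to the straight line from (0, -1) to (6, 2), whose length is sqrt(6^2 + 3^2) = 3*sqrt(5), confirming the same minimum.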
3. May 2nd 2013, 12:52 AM #3
Re: Any help would be appreciated
Another approach is to reflect one of the triangles across the shoreline, then simply connect the two towns with a straight line.
Search Tags | {"url":"http://mathhelpforum.com/calculus/218468-any-help-would-appreciated.html","timestamp":"2014-04-17T16:29:47Z","content_type":null,"content_length":"37114","record_id":"<urn:uuid:fa599052-fbd4-4dfb-be38-738a1e81d1a9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculate the distance of two geometries using the specified strategy.
The free function distance calculates the distance between two geometries using the specified strategy. Reasons to specify a strategy include: use another coordinate system for calculations; construct
the strategy beforehand (e.g. with the radius of the Earth); select a strategy when there is more than one available for a calculation.
template<typename Geometry1, typename Geometry2, typename Strategy>
strategy::distance::services::return_type<Strategy>::type distance(Geometry1 const & geometry1, Geometry2 const & geometry2, Strategy const & strategy)
Type | Concept | Name | Description
Geometry1 const & | Any type fulfilling a Geometry Concept | geometry1 | A model of the specified concept
Geometry2 const & | Any type fulfilling a Geometry Concept | geometry2 | A model of the specified concept
Strategy const & | Any type fulfilling a Distance Strategy Concept | strategy | The strategy which will be used for distance calculations
The calculated distance
#include <boost/geometry/geometry.hpp>
#include <boost/geometry/algorithms/distance.hpp>
● more (currently extensions): Vincenty, Andoyer (geographic) | {"url":"http://www.boost.org/doc/libs/1_47_0/libs/geometry/doc/html/geometry/reference/algorithms/distance/distance_3_with_strategy.html","timestamp":"2014-04-19T02:57:30Z","content_type":null,"content_length":"13051","record_id":"<urn:uuid:d52c2c47-3bde-4502-8fde-182e8241762a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
NetLogo Models Library:
Sample Models/Mathematics/Probability
(back to the library)
Binomial Rabbits
If you download the NetLogo application, this model is included. (You can also run this model in your browser, but we don't recommend it; details here.)
## WHAT IS IT?
This model simulates a binomial probability distribution or (in the limit) normal distribution. It works by analogizing height variations to rabbit hops.
This model was created by a student in an effort to make sense of normal distributions. In particular, he sought to understand why height is distributed normally in human populations. For a detailed
account of this case, see: Wilensky, U. (1997). What is Normal Anyway? Therapy for Epistemological Anxiety. Educational Studies in Mathematics. Volume 33, No. 2. pp. 171-202. http://ccl.northwestern.edu/cm/papers/normal/
The procedures for this model have been kept largely intact from the original code written by the student. With advances in the language, this code is no longer at all an optimal way of writing this
model. We have kept the original code for research purposes --- please do not use it as an example of good NetLogo coding.
## HOW IT WORKS
A number of rabbits are placed at the center of the bottom of the world. A move pattern determines the way a rabbit moves. Each rabbit can choose to hop right or left a certain hop-size. The
likelihood of a rabbit following each move pattern is given in terms of ratios. Each rabbit may have up to five different move patterns.
## HOW TO USE IT
### Setup
Method one (sliders setup): Press SETUP button. This creates the number of rabbits from the NUMBER slider and up to three hops and associated probability ratios from the six sliders above the NUMBER
slider. Each time a rabbit hops, it chooses one of the three moves -- hop-1, hop-2, or hop-3 -- with a likelihood in the ratio of ratio-1, ratio-2, and ratio-3 to each other. For example, if ratio-1
= 2, ratio-2 = 4, and ratio-3 = 6, the rabbit has a 2-in-12 chance of making the hop-1 move, a 4-in-12 chance of making the hop-2 move, and a 6-in-12 chance of making the hop-3 move.
Method two (manual setup): In the Command Center, type "setup [number] [list of hops] [list of probability ratios]" to initialize the rabbits (e.g. "setup 4000 [1 -1] [1 2]" will set up 4000 rabbits
hopping either one unit to the right (1) or one unit to the left (-1), with the chance of hopping to the left being twice that of hopping to the right). Up to five steps and corresponding probability
ratios can be used.
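The weighted-hop mechanics are easy to mirror outside NetLogo as a sanity check. This plain-Python sketch (not the model's NetLogo code) reproduces the manual-setup example above; after h hops the average position should be near h × Σ(step × ratio) / Σ(ratio), which for steps [1, -1] with ratios [1, 2] over 30 hops is 30 × (1 - 2)/3 = -10.

```python
import random

def run_rabbits(number, steps, ratios, hops, rng):
    """Final positions of `number` rabbits after `hops` weighted hops each."""
    positions = []
    for _ in range(number):
        x = 0
        for _ in range(hops):
            # Pick one hop, weighted by the probability ratios
            x += rng.choices(steps, weights=ratios)[0]
        positions.append(x)
    return positions

rng = random.Random(42)
# Mirrors: setup 4000 [1 -1] [1 2], then 30 hops
pos = run_rabbits(4000, [1, -1], [1, 2], 30, rng)
mean = sum(pos) / len(pos)   # close to the expected -10
```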
### Running
The GO-ONE-HOP button makes each rabbit hop once.
The GO button tells the rabbits to hop the number of times set by the HOPS slider. For example, if HOPS is set to 10, the GO button makes each rabbit hop 10 times. To stop the rabbits from hopping
once they've started, press the GO button again.
There are two scale monitors and one scale slider in the Interface Window. X-SCALING is used to magnify the width of the world to facilitate more hops. It is manually set by users with the X-SCALING
slider. The setting can be changed as the model runs. Y-SCALE is used to regulate the vertical scale -- to ensure that the highest yellow distribution bar is always 80% of the height of the world.
This is done at each hop.
The figure inside the "y-scale" monitor is the number of rabbits a yellow line the height of the world represents. The figure inside the "x-scale" monitor is the number of steps represented by a full
view. (The rabbits wrap around the left and right edges, so if they get to the edge, you should increase the x-scale.)
The following formulae can be used to evaluate the actual numbers of rabbits or steps hopped:
Actual Number of Rabbits for a Yellow Line = height of line * ( y-scale / 100 )
Cumulative Number of Steps Hopped so far = X-coordinate of a line * ( x-scale / 100 )
To find out exactly how many rabbits are represented by a line, control-click (Mac) or right-click (other) anywhere on the line and choose inspect patch from the menu that appears. The inspector will
have a variable "turtle-bottom" which will tell you how many turtles (rabbits) are at the bottom of the line.)
## THINGS TO NOTICE
The purple average line shows where an average rabbit would be. Observe the movement of this line -- both its position and velocity -- and try to relate these to the settings.
Play with the NUMBER slider to see if what you predict is what you see when the number of rabbits is small. For what numbers of rabbits are your predictions the most accurate?
## THINGS TO TRY
Try different values for list of steps. What happens to the distribution?
Try different values for probability ratios. What happens to the distribution?
Is the distribution always symmetric? What would you expect?
## EXTENDING THE MODEL
Create a plot for 'hopping'. First decide what to plot, and then implement the proper NetLogo plot functions.
Rewrite the model so rabbits take list variables. Are there now new capabilities you can give the rabbits?
## NETLOGO FEATURES
The limitation on the number of turtles constrains the limits of the "number" slider. You can make the corresponding change to the `number` slider --- select the slider by clicking and dragging the
mouse button over it. Then click on the edit button and change 'Maximum' to the new number. Having more rabbits to jump can be useful for certain statistical simulations.
You can also change the settings to have a bigger world to fit more hops or show very fine distribution diagrams.
Note that since turtles could not have list variables in earlier versions of the language, the global lists steps and ratios are used to hold the movement patterns and ratios. The turtles access
these globals to know how to move. (if we were writing this model now, we would not code it this way as turtles in NetLogo can have list variables). The procedures `define-steps` and 'define-ratios'
use the primitives `first` and `butfirst`. Both of these are list operators --- that is, they operate on lists of things. The `first` of a list is simply its first element. Likewise, the `butfirst`
of a list is a list of all elements except for the first.
## RELATED MODELS
Galton Box, Random Walk Left Right
See: Wilensky, U. (1997). What is Normal Anyway? Therapy for Epistemological Anxiety. Educational Studies in Mathematics. Volume 33, No. 2. pp. 171-202. http://ccl.northwestern.edu/cm/papers/normal/
## HOW TO CITE
If you mention this model in a publication, we ask that you include these citations for the model itself and for the NetLogo software:
* Wilensky, U. (1997). NetLogo Binomial Rabbits model. http://ccl.northwestern.edu/netlogo/models/BinomialRabbits. Center for Connected Learning and Computer-Based Modeling, Northwestern University,
Evanston, IL.
* Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
## COPYRIGHT AND LICENSE
Copyright 1997 Uri Wilensky.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a
letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at uri@northwestern.edu.
This model was created as part of the project: CONNECTED MATHEMATICS: MAKING SENSE OF COMPLEX PHENOMENA THROUGH BUILDING OBJECT-BASED PARALLEL MODELS (OBPML). The project gratefully acknowledges the
support of the National Science Foundation (Applications of Advanced Technologies Program) -- grant numbers RED #9552950 and REC #9632612.
This model was developed at the MIT Media Lab using CM StarLogo. See Wilensky, U. (1993). Thesis - Connected Mathematics: Building Concrete Relationships with Mathematical Knowledge. Adapted to
StarLogoT, 1997, as part of the Connected Mathematics Project. Adapted to NetLogo, 2001, as part of the Participatory Simulations Project.
This model was converted to NetLogo as part of the projects: PARTICIPATORY SIMULATIONS: NETWORK-BASED DESIGN FOR SYSTEMS LEARNING IN CLASSROOMS and/or INTEGRATED SIMULATION AND MODELING ENVIRONMENT.
The project gratefully acknowledges the support of the National Science Foundation (REPP & ROLE programs) -- grant numbers REC #9814682 and REC-0126227. Converted from StarLogoT to NetLogo, 2001.
(back to the NetLogo Models Library) | {"url":"http://ccl.northwestern.edu/netlogo/models/BinomialRabbits","timestamp":"2014-04-19T05:01:44Z","content_type":null,"content_length":"13537","record_id":"<urn:uuid:23336b17-ae6d-434a-b99a-9ff6123a03df>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
Photo Caption: M Atiyah 29 Mar 69
“The Atiyah-Singer index theorem was the toughest hurdle for me, but, somehow, we conquered it too. (To be sure, after it appeared in print, Singer told me that it didn’t come out quite right—the
relation with the Riemann-Roch theorem was unclear or perhaps even misstated—but there it was, and I feel sure that my fellow ignoramuses and I learned something worth knowing that we hadn’t known
before.)”–Paul R. Halmos, I Want to Be a Mathematician
Michael Francis Atiyah contributed to a wide range of topics in mathematics centering on the interaction between geometry and analysis. His work showed how the study of vector bundles on spaces could
be regarded as the study of cohomology theory, called K-theory. He was awarded the Fields Medal in 1966.
The ideas which led to Atiyah being awarded a Fields Medal were later seen to be relevant to gauge theories of elementary particles.
The theories of superspace and supergravity and the string theory of fundamental particles, which involves the theory of Riemann surfaces in novel and unexpected ways, were all areas of theoretical
physics which developed using the ideas which Atiyah was introducing.
In addition to the Fields Medal, Atiyah received many honors during his career including the Feltrinelli Prize from the Accademia Nazionale dei Lincei in 1981, the King Faisal International Prize for
Science in 1987, the Benjamin Franklin Medal, and the Nehru Medal. In 2004, he and Isadore Singer were awarded the Niels Abel Prize of £480,000 for their work on the Atiyah-Singer Index Theorem.
Michael Francis Atiyah Biography | {"url":"http://halmos.tumblr.com/post/6118254862/photo-caption-m-atiyah-29-mar-69-the","timestamp":"2014-04-18T23:15:18Z","content_type":null,"content_length":"41127","record_id":"<urn:uuid:15cee592-a442-4850-9179-51f874e93f66>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Marple Township, PA Math Tutor
Find a Marple Township, PA Math Tutor
...I can also help students improve their strategies on standardized tests. Determining which problems to solve first, which ones to come back to later, and which ones to avoid entirely helps
students manage the time pressure of these tests.I have taught calculus at several colleges and universitie...
18 Subjects: including trigonometry, differential equations, algebra 1, algebra 2
...I'm kind of nerdy like that. I don't just teach my students what's "right" and "wrong" in regard to grammar. Instead, we study how the language works.
47 Subjects: including SAT math, precalculus, ACT Math, piano
...I look forward to hearing from you and helping you achieve your academic goals! It is very important for students to know which words work best when speaking or writing, as well as their
meanings. My degrees in English and education have provided me with a great deal of experience both learning an...
12 Subjects: including prealgebra, reading, English, writing
...For those four years of high school I played club ball as well. I love it and play whenever possible. Now, I play in local leagues and in work intramurals.
10 Subjects: including geometry, linear algebra, logic, algebra 1
...I am always trying to improve. I also use it for my own finances at home and have used a lot of functions for finances as well as scientific data analysis. I have and use the manuals
frequently to learn to ways to use the software.
16 Subjects: including geometry, linear algebra, logic, algebra 1
Marple Township, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/marple_township_pa_math_tutors.php","timestamp":"2014-04-19T07:17:29Z","content_type":null,"content_length":"23906","record_id":"<urn:uuid:0faf439f-1df3-41a1-8cde-5b7e89404e57>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00237-ip-10-147-4-33.ec2.internal.warc.gz"} |
How To Illustrate A Circle Graph
How To Make A Circle Graph
This video shows a simple explanation of how to take the numbers of a factorized or expanded math equation and use it to create an accurate graph of the circle it describes.
Hi, I'm Dr. Shah. I was the National Lecture Competition winner in 1989, and I'm the math master at Mathscool.
Now, ready for a new way of doing math? How to make a circle graph? I'm going to start with the equation of a circle. It might look something like this: (x + 5)^2 + (y - 1)^2 = 16. If you've seen an
equation of a circle written like that, it's in factorized form.
And that's the best way if we want to graph here. Now, what we need to do is identify two things: one, the center of the circle and two, the radius of the circle. We can find the center of the circle
simply by imagining each of these brackets as zero.
For the first bracket to be zero, 'x' would have to be minus five. And for the second bracket to be zero, 'y' would have to be one.
So, one. Those are my x and y coordinates for the center of the circle. So from the brackets, you can easily identify the center of the circle.
And then I need to find the radius of the circle. To do that, I take the square root of the number on the right-hand side: the square root of 16 is four, so the radius of my circle is four. So there's
my center and there's my radius, and now to graph it, I'm just going to start with a grid. I know the center is at minus five.
Minus one. Minus two. Minus three.
Minus four. Minus five. And a y coordinate of one.
One. So there's the center of my circle. And its radius is four.
So if I know its radius is four, I can already plot a few points. I know if I go to the right by four - One, Two, Three, Four - that's one point on the circle. Or to the left by four - one, two,
three, four.
Down by four - one, two, three, four. And up by four - one, two, three, four. These are points on the circle, and I can just connect them together to form my circle.
There's the center of the circle, and its radius is four. That's the graph of a circle when the equation is given to us in factorized form. But what if the equation is given to us in expanded form?
So I'll do an example like that now.
This one is written in expanded form: there are no brackets (in this example, x^2 - 2x + y^2 + 4y = 20). So what I need to do is collect together the x terms.
And then collect together the y terms. And then just leave that number at the end. Now, looking at the x terms, complete the square for these x terms.
So you now have to complete the square: x minus half of two, giving (x - 1) squared, and then minus the square of that, so minus one. That's the x terms completed.
Again, I have to complete the square for the y terms. It's going to be y plus half of four, giving (y + 2) squared, and then minus that square: two squared is four, so minus four.
Now, all I need to do is just carry these two numbers across to the other side: plus one and plus four. And so that gives me (x - 1) squared plus (y + 2) squared equals 25. And now, you see, I've
written it in factorized form, which means you'd be able to identify the center
of the circle, it's when x is one and when y is minus two.
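The completing-the-square recipe generalizes. For x^2 + y^2 + Dx + Ey + F = 0 the center is (-D/2, -E/2) and the radius is sqrt(D^2/4 + E^2/4 - F); a quick check in code, with coefficients read back from this worked example:

```python
import math

def circle_center_radius(D, E, F):
    """Center and radius of x^2 + y^2 + D*x + E*y + F = 0 (None if degenerate)."""
    cx, cy = -D / 2.0, -E / 2.0
    r_squared = cx * cx + cy * cy - F
    if r_squared <= 0:
        return None        # empty set or a single point, not a real circle
    return (cx, cy), math.sqrt(r_squared)

# The video's second example, x^2 + y^2 - 2x + 4y - 20 = 0:
center, radius = circle_center_radius(-2.0, 4.0, -20.0)
# center == (1.0, -2.0), radius == 5.0
```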
And the radius is going to be the square root of that, which is five. And so again, that would now be easy for you to graph. . | {"url":"http://www.videojug.com/film/how-to-make-a-circle-graph","timestamp":"2014-04-17T12:31:06Z","content_type":null,"content_length":"41096","record_id":"<urn:uuid:5d60ccc2-6bfa-4e50-b0c9-45f14a601515>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Homework
I've been working on that one - how did you come by those figures?
It seemed to me to be the only suitable pattern left over. But I couldn't create a proof to confirm it. So it's not certain.
None of the lines are consecutive 1,2,3,4, ....
It is not grouped as odds and evens
None of them appear to have a + - x / sequence or similar total.
They didn't fit as Fibonacci numbers
The clue meant that 5 was in the sequence.
However, 5 cannot be the last digit, unless you use 1,2,3,4,5, which is a direct sequence - eliminated in step one.
The line below contains 1,2,4. Since the pattern shows no duplicates in a three-digit combo going down, I went with what would be next: 1,3,4.
The next number would have to be 5 to complete the clue.
4,5,6 was also in the line below so 7 seemed appropriate.
It's not a proof. It's a bad guess. But I figured what the heck. | {"url":"http://forum.beemaster.com/index.php?topic=17703.20","timestamp":"2014-04-19T12:12:14Z","content_type":null,"content_length":"78258","record_id":"<urn:uuid:0f93a9b5-e650-4ca8-93f6-19872ec62acb>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
Austell Algebra 2 Tutor
Find an Austell Algebra 2 Tutor
...Quantitatively and qualitatively, the student will describe the process of solutions and characteristics of solutions. Thermodynamic relationships will be investigated. Students will explore
the factors that affect the rates of a reaction and apply them to the theory of dynamic equilibrium.
14 Subjects: including algebra 2, chemistry, physics, SAT math
I am unable to take on new students at this time. I am currently a graduate student in a joint program between Emory and Georgia Tech pursuing a PhD in biomedical engineering. I got my bachelor's
from Vanderbilt in Nashville, TN, but I went to high school in Gwinnett County here in Atlanta.
17 Subjects: including algebra 2, chemistry, physics, geometry
...She went through college at an accelerated pace of 3 years instead of 4, while maintaining her HOPE scholarship. She even studied abroad in Ireland during those three years! She's been tutoring
for over 5 years in many different environments that include one-on-one tutoring in-person and online, as well as tutoring in a group environment.
22 Subjects: including algebra 2, reading, writing, calculus
...A little about me: I have strong interests in mathematics, software development, electrical engineering, and I possess excellent troubleshooting skills. My love for these fields is the reason
why I focused in Mathematics, communication and signal processing, power systems during my undergraduate...
27 Subjects: including algebra 2, French, calculus, physics
...You will be charged for at least an hour for each session, regardless of the time elapsed. You will be charged for the time booked or for however long the session lasts, whichever is
longer. I offer online and in-person tutoring.
10 Subjects: including algebra 2, calculus, physics, ASVAB | {"url":"http://www.purplemath.com/austell_algebra_2_tutors.php","timestamp":"2014-04-18T16:05:17Z","content_type":null,"content_length":"23906","record_id":"<urn:uuid:6386ca1c-1d5e-4ab5-a63f-01aa733435fe>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
A fraction can be defined as any part of a whole: a small part, a piece, or an amount. A fraction is made up of a numerator and a denominator, where the numerator tells us how many
parts of the whole we have, while the denominator tells us how many parts make up the whole.
When the numerator is smaller than the denominator, the fraction is considered a proper fraction. When the numerator is greater than the denominator, the fraction is labelled an
improper fraction. When a fraction accompanies a whole number, the combination is considered a mixed number. In the past, fractions were commonly used to describe shares of objects
or groups of objects, but now they are often replaced by decimals, and the calculations are often completed with computers or calculators. | {"url":"http://www.mathcaptain.com/number-sense/fractions.html","timestamp":"2014-04-18T05:30:08Z","content_type":null,"content_length":"108060","record_id":"<urn:uuid:ed4e01f4-9ab4-416b-9f25-357fd22a92f9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
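The proper/improper/mixed classification described above can be sketched with Python's standard fractions module (a minimal sketch; the helper name classify is ours):

```python
from fractions import Fraction

def classify(numerator, denominator):
    """Label a fraction as proper or improper, and give its mixed-number form."""
    f = Fraction(numerator, denominator)
    kind = "proper" if abs(numerator) < abs(denominator) else "improper"
    whole, remainder = divmod(f.numerator, f.denominator)
    return kind, (whole, Fraction(remainder, f.denominator))

print(classify(3, 4))   # ('proper', (0, Fraction(3, 4)))
print(classify(7, 4))   # ('improper', (1, Fraction(3, 4)))
```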
R and Big Data
Seamless R and C++ Integration with Rcpp: Introduction and Examples
Seamless R and C++ Integration with Rcpp: Introduction and Examples
A Brief Introduction to Rcpp
Rcpp by Examples
RcppArmadillo: Accelerating R with C++ Linear Algebra
Rcpp by Examples
R and C++ Integration with Rcpp
R and C++ Integration with Rcpp: Motivation and Examples
C++ for R Programmers
Rcpp Workshops at useR! 2012: Introduction and Advanced Use
Wittier Webapps with RInside
Rcpp: Seamless R and C++ Integration (with Romain Francois)
Rcpp: Seamless R and C++ Integration (with Romain Francois)
Presentation at the Seattle R User Group, Seattle, WA, USA, December 7, 2011. Pdf version of the presentation slides.
R / C++ Integration with Rcpp and RInside (with Romain Francois)
Full-day workshop preceding R/Finance 2011, Chicago, IL, April 28, 2011.
Pdf version of the presentation slides: Part 1 (Introduction), Part 2 (Details), Part 3 (Advanced) and Part 4 (Applications).
Also available are the examples as tar.gz and zip file.
Integrating R with C++: Rcpp, RInside and RProtoBuf (with Romain Francois)
Introduction to High-Performance Computing with R
Rcpp: Seamless R and C++ Integration (with Romain Francois)
RProtoBuf: Protocol Buffers for R (with Romain Francois)
RQuantLib: Interfacing QuantLib from R (with Khanh Nguyen)
Seamless R and C++ Integration: Rcpp and RInside (with Romain Francois)
Extending and Embedding R with C++
RQuantLib: Interfacing QuantLib from R++ (with Khanh Nguyen)
Seamless R Extensions using Rcpp and RInside (with Romain Francois)
Programming with Data: Using and Extending R
Introduction to High-Performance Computing with R
Introduction to High-Performance Computing with R
cran2deb: A fully automated CRAN to Debian package generation system (with Charles Blundell; Dirk Eddelbuettel is corresponding author)
Rcpp and RInside: Easier R and C++ integration
Introduction to High-Performance R
R in Finance
Introduction to High-Performance Computing with R
Invited workshop/presentation at the Bank of Canada, Ottawa, Canada, December 22, 2008.
Pdf version of the slides
Live cdrom provided via Quantian's Alioth site, see the tutorial for details.
Introduction to High-Performance R
Three-hour tutorial presented at the useR! 2008 conference at TU Dortmund, Germany, August 11-14, 2008.
Pdf version of the slides from the tutorial.
Live cdrom provided via Quantian's Alioth site, see the tutorial slides appendix for details.
Scripting with R in high-performance computing: An example using littler
Presentation at the useR! 2008 conference at TU Dortmund, Germany, August 11-14, 2008.
Pdf version of the slides from the presentation.
RDieHarder: An R interface to the DieHarder suite of Random Number Generator Tests (with Robert G. Brown; Dirk Eddelbuettel is lead / corresponding author)
Presentation at the useR! 2007 conference at Iowa State University, Ames, Iowa, August 8-10, 2007.
Pdf versions of accepted paper and slides presented at the conference.
apt-get install cran bioc: On automated builds of 1700 R packages for Debian (with David Vernazobres, Albrecht Gebhard and Steffen Moeller; Dirk Eddelbuettel is lead / corresponding author)
Presentation at the useR! 2007 conference at Iowa State University, Ames, Iowa, August 8-10, 2007.
Pdf versions of accepted paper and slides presented at the conference.
Scientific Grid Computing via Community-Controlled Autobuilding of Software Packages Across Architectures (with Steffen Moeller, Daniel Bayer, David Vernazobres and Albrecht Gebhard; Steffen Moeller
is lead / corresponding author)
Presentation at the NETTAB 2007 conference in Pisa, Italy, June 12-15, 2007.
Pdf versions of accepted paper and slides presented at the conference.
Use R! in fifteen different ways: A survey of R front-ends in Quantian
Presentation at the Second international R user conference (useR! 2006) in Vienna, June 15-17, 2006.
Pdf version of slides (1.5 mb).
Quantian as an environment for distributed statistical computing
Presentation at the Directions in Statistical Computing 2005 (DSC 2005) Conference in Seattle, August 13 - 14, 2005.
Pdf version of slides (224 kb).
Quantian: A single-system image scientific cluster computing environment
Programming with financial data: Connecting R to Lim and Bloomberg
Quantian: A single-system image scientific cluster computing environment
R in Debian: Past, Present and Future (with Douglas Bates and Albrecht Gebhardt)
Enjoying a Free Lunch: Computational Economics with Linux
Presented at the Third Conference `Computing in Economics and Finance' organised by the Society for Computational Economics and hosted by the Hoover Institution, Stanford University, California,
30 June - 2 July 1997.
HTML Link
A Hybrid Genetic Algorithm for Passive Management
Presented at the Second Conference `Computing in Economics and Finance' organised by the Society for Computational Economics and hosted by the Department of Econometrics at the University of
Geneva, Switzerland, 26 - 28 June 1996.
Postscript version of paper for US letter paper (159 kByte)
The Impact of News on Foreign Exchange Rates: Evidence from Very High Frequency Data (with Thomas H. McCurdy)
Presented at the 1996 Meetings of the Canadian Economics Association at Brock University, St. Catharines, 31 May - 2 June 1996 and the 1996 Meetings of the Canadian Econometric Study Group at the
University of Waterloo, 20 - 22 September 1996.
Revised postscript version for US letter paper (208 kByte)
A Genetic Algorithm for Passive Management: Creating Index Funds with Fewer Stocks
Presented at the Third International Conference Forecasting Financial Markets organised by Chemical Bank and Imperial College, London, England, 27 - 29 March 1996.
gzipped Postscript version of paper for A4 paper (70 kByte)
gzipped Postscript version of paper for US letter paper (70 kByte)
Semiparametric Estimation of ARCH Models using Nearest Neighbour Regression: Some Monte Carlo Results (with Russell Davidson)
Presented at Canadian Econometric Study Group Meetings, McGill University, Montreal, September 1995.
gzipped Postscript version of paper (110 kByte)
gzipped Postscript version of overheads (61 kByte) | {"url":"http://dirk.eddelbuettel.com/presenations.html","timestamp":"2014-04-18T13:49:06Z","content_type":null,"content_length":"30924","record_id":"<urn:uuid:9cbd09b8-274b-4105-aa73-4f407dcce723>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Use of Partial Covariances in SEM (Path Analysis)
Use of Partial Covariances in SEM (Pa...
Lynn Imai posted on Thursday, October 05, 2006 - 6:47 pm
Dear Prof(s) Muthen,
I am trying to use path analysis to test a model. I have one main predictor of theoretical interest (a type of personality variable), leading to a process variable, which then influences an outcome.
I have 7 other variables I need to control for to see if my main exogenous variable predicts over and beyond the others.
My questions are:
1) Is it common practice to use a partialled-out covariance matrix to free up degrees of freedom, instead of explicitly modeling all control variables in path analysis?
2) If so, how would you do this using Mplus?
All variables are continuous.
Thank you in advance for any guidance!
Linda K. Muthen posted on Friday, October 06, 2006 - 9:38 am
It is not common practice to analyze a residual matrix, in my experience. It is always better to do an analysis in one step rather than more than one. I would include the control variables in the
model as covariates. | {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=11&page=1673","timestamp":"2014-04-19T17:09:25Z","content_type":null,"content_length":"17777","record_id":"<urn:uuid:40886cd7-5cc5-42ec-b86f-afe21173f27a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Mascheroni's Construction of the Center of a Circle
How do you find the center of a circle with compass alone? Lorenzo Mascheroni found this beautiful and easy construction in 1797. He did not know that the Dane Georg Mohr had already discovered it in 1672.
This Demonstration decomposes the steps of the construction. You start with two points. Justifying the result is not trivial, even with the help of the figure.
Lorenzo Mascheroni (1750–1800) asserted in his tract
The Geometry of Compasses
that ruler-and-compass constructions can be accomplished with the compass alone. Starting with two points, other points can be constructed. The solutions are uncomfortable, but they exist!
The construction is not only elegant but also quicker and more exact than using a compass and ruler. | {"url":"http://demonstrations.wolfram.com/MascheronisConstructionOfTheCenterOfACircle/","timestamp":"2014-04-18T03:00:27Z","content_type":null,"content_length":"42727","record_id":"<urn:uuid:49ad600e-1e85-419a-b263-f3d6a3b02732>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
Evaluation of the Component Chemical Potentials in Analytical Models for Ordered Alloy Phases
Journal of Thermodynamics
Volume 2011 (2011), Article ID 874979, 4 pages
Research Article
Evaluation of the Component Chemical Potentials in Analytical Models for Ordered Alloy Phases
^1Institute for Materials Research, University of Salford, Salford, M5 4WT, UK
^2Institute of Metal Research, Chinese Academy of Sciences, Shenyang 110016, China
Received 17 November 2010; Accepted 27 January 2011
Academic Editor: Brian J. Edwards
Copyright © 2011 W. A. Oates et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The component chemical potentials in models of solution phases with a fixed number of sites can be evaluated easily when the Helmholtz energy is known as an analytical function of composition. In the
case of ordered phases, however, the situation is less straightforward, because the Helmholtz energy is a functional involving internal order parameters. Because of this, the chemical potentials are
usually obtained numerically from the calculated integral Helmholtz energy. In this paper, we show how the component chemical potentials can be obtained analytically in ordered phases via the use of
virtual cluster chemical potentials. Some examples are given which illustrate the simplicity of the method.
1. Introduction
Chemical potentials for the components in alloy phases are often required: they are useful, for example, in phase diagram calculations. In the case of a binary substitutional alloy model, (A,B),
which uses a fixed number of sites, N, containing N_A and N_B atoms of element A and B, respectively, functions like (∂F/∂N_A)_N are not chemical potentials. They are equal to the difference between the component
chemical potentials, for example, μ_A − μ_B. This difference is usually referred to as the diffusion potential [1].
This problem with definition does not mean that the individual component chemical potentials are unobtainable in solution phases with a fixed number of sites. They can be obtained from the partial
derivative of the calculated Helmholtz energy, F, for example, in circumstances where F can be expressed as a function of N_A and N_B. In a completely disordered solution, the Helmholtz energy and the
component chemical potentials are well defined. Analytical expressions for the chemical potentials can also be derived for models of single lattice phases which take into account deviations from
random mixing. Results for the pair quasichemical (Q-C) approximation [2, 3] and for a four-point cluster in the same approximation [4, 5] have been reported.
It is the evaluation of the component chemical potentials in ordered (or antiferromagnet) phases which poses a problem, because there is no longer an explicit relation between F and N_A, N_B. Instead, F is
a functional involving internal order parameters. It is because of this that the usual way of obtaining the component chemical potentials has been to numerically differentiate the calculated integral
Helmholtz energy.
In this paper, we show how component chemical potentials can be easily obtained in any cluster approximation, in either the ordered or the single-lattice state, via the use of virtual chemical potentials
(VCPs). VCPs are defined in Section 2. Previously, only point VCPs appear to have been used, but cluster VCPs are also definable and, as we show, are equally useful. The use of cluster VCPs in
calculating the equilibrium distribution of clusters and species in partially ordered phases is discussed in Section 3. In Section 4, we show how the component chemical potentials are simply related
to the VCPs, and in Section 5, we present the results for some example model calculations.
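As a concrete illustration of the completely disordered case mentioned above, the chemical potentials of a regular (Bragg-Williams) solution can be written analytically and checked against the integral Helmholtz energy. This sketch uses our own simple assumptions (ideal configurational entropy plus a single regular-solution interaction parameter, everything in units of RT), not the paper's equations:

```python
import math

def regular_solution(x_b, omega=1.0):
    """Molar mixing Helmholtz energy and component chemical potentials of a
    binary regular (Bragg-Williams) solution, all in units of RT:
        F_mix = omega*xA*xB + xA*ln(xA) + xB*ln(xB)
        mu_A  = omega*xB**2 + ln(xA),   mu_B = omega*xA**2 + ln(xB)."""
    x_a = 1.0 - x_b
    f_mix = omega * x_a * x_b + x_a * math.log(x_a) + x_b * math.log(x_b)
    mu_a = omega * x_b ** 2 + math.log(x_a)
    mu_b = omega * x_a ** 2 + math.log(x_b)
    return f_mix, mu_a, mu_b

f_mix, mu_a, mu_b = regular_solution(0.3)
# The analytical potentials reproduce the integral quantity exactly:
assert abs(f_mix - (0.7 * mu_a + 0.3 * mu_b)) < 1e-9
```

The assertion checks the Euler relation F = x_A μ_A + x_B μ_B, which is the same consistency that a numerical differentiation of F would have to reproduce.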
2. Virtual Chemical Potentials
In their original treatment for calculating the equilibrium distribution of lattice defects in solids, Wagner and Schottky used the law of mass action [6], but later, Schottky gave a more formal
treatment of this approach in terms of point VCPs [7]. These point VCPs were used extensively by Kröger [8] in discussing defect equilibria in ionic and semiconductor compounds.
Schottky distinguished between two types of constituent of a solution phase, building units and structural elements. The building units can be regarded as the normal components, while the structural
elements are the majority and defect species occurring on the sublattice sites. When a structural element is created, the number of complementary structural elements cannot be kept constant due to
the requirement of a definite site ratio; it is, therefore, not possible to assign a true chemical potential to a structural element, nor can they be accessed experimentally. It is possible, however,
to define a point VCP for a species A on a sublattice i as μ̂_A^i = (∂F/∂N_A^i), the derivative being taken at constant numbers of all other point species on all sublattices, where F is the Helmholtz energy and N_A^i is the number of species or constituents of type A on sublattice i. We have used the notation μ̂_A^i here rather than μ_A^i, since the latter is often used to denote the chemical potential of a component A in a phase i.
The concept of VCPs is readily extended to consider larger clusters than the point. For example, analogous derivatives of F can be defined for pair and four-point clusters, taken at constant numbers of all other pairs and four-point clusters, respectively.
3. Equilibrium Distribution of Species in Ordered Phases
Schottky showed that the value of point VCPs lies in their computational convenience in a modeling context. This can be illustrated by considering a model for an ordered phase comprising two
elements, A and B, distributed between two sublattices, 1 and 2. If the sublattices are assumed to be of equal size, then this ordered phase can be represented as (A, B):(A, B).
We will consider this phase in the nearest neighbor pair Q-C approximation. If we consider a closed system, Lagrangian multipliers can be assigned to the mass balances. Minimization of the Lagrangian,
followed by the elimination of the Lagrangian multipliers, gives equilibrium relations between the pair VCPs. The solution of these equations, subject to normalization and mass balance
constraints, leads to the equilibrium values for the pair probabilities.
In the Q-C approximation, the relation between the VCPs and the pair probabilities is expressed in terms of the pair exchange energy (defined from the bond energies), the coordination number z, and the mean pair probability.
The dimensionless pair and point entropies in (8) are given in terms of the mean probability, or sublattice mole fraction, of the species A on sublattice i.
The pair VCPs may then be obtained from the (dimensionless) Helmholtz energy. Substitution of such expressions for the VCPs into (6) then leads to the solution for the equilibrium pair distribution.
It should be noted that this use of VCPs is not the only, nor necessarily the most convenient, method to calculate equilibrium cluster distributions. Many using the CVM, for example, use the natural
iteration method [9] to calculate these distributions.
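For the simplest disordered single-lattice case, the kind of pair-probability equilibrium discussed in this section can be solved directly. The quasichemical condition below is written in one common convention and is our own assumption, not the paper's equations (6)-(8); it is solved by bisection:

```python
import math

def qc_pair_fraction(x_a, w_over_kt):
    """Equilibrium A-B pair fraction y_AB in the n.n. quasichemical (pair)
    approximation for a disordered binary, from the condition
        y_AB**2 / (y_AA * y_BB) = 4 * exp(-2 * w/kT),
    with the mass balances y_AA = x_A - y_AB/2 and y_BB = x_B - y_AB/2.
    The residual is monotonic in y_AB, so plain bisection suffices."""
    x_b = 1.0 - x_a
    k = 4.0 * math.exp(-2.0 * w_over_kt)

    def residual(y_ab):
        return y_ab ** 2 - k * (x_a - y_ab / 2.0) * (x_b - y_ab / 2.0)

    lo, hi = 0.0, 2.0 * min(x_a, x_b)  # keeps y_AA and y_BB non-negative
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# With w = 0 the solution must reduce to random mixing, y_AB = 2*xA*xB:
print(qc_pair_fraction(0.3, 0.0))  # ~0.42
```

Positive w pushes y_AB below the random-mixing value (clustering), while negative w pushes it above (the ordering tendency the two-sublattice treatment captures).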
4. Component Chemical Potentials
A principal advantage of VCPs lies in their relation to the component chemical potentials. We will first consider the same example as was used in the previous section and then present analogous
relations for other examples.
Consider a system which is open to the component B. We lose the mass balance constraint for B and must now consider a Lagrangian based on the grand potential, from which the component chemical potential is obtained; in this case, it is related to just the one pair VCP. Similar simple expressions are readily obtained for other cluster models. The following lists some examples (n.n.
refers to nearest neighbor interactions and n.n.n. to next nearest neighbor interactions).
Four sublattices, Bragg-Williams (B-W) approximation; bcc, n.n., Q-C approximation; bcc, n.n. & n.n.n., Q-C approximation; fcc CVM-T approximation; bcc CVM-T approximation. Here, n is the number of different types of cluster
or subcluster; for example, n = 4 and n = 2, respectively, for the number of types of n.n. and n.n.n. clusters in the bcc n.n. and n.n.n. Q-C approximation.
It should be noted that there is no relation, similar to those given for the chemical potentials, which permits the analytical calculation of partial molar energies or entropies.
5. Example Calculations
In the examples shown in Figures 1 and 2 for a solution phase A-B, the molar integral Helmholtz mixing energy has been calculated from the integral mixing energy and integral mixing entropy, in the Q-C n.n. two-sublattice approximation. The chemical potentials shown in the figures have been obtained from (14) and (18); the
chemical potentials of A shown in Figure 2 have been obtained with the n.n.n. taken to involve the sublattices 1-2 and 3-4.
The chemical potentials calculated from the VCPs agree well with those obtained numerically from the independently calculated integral quantity from the CVM. The slight difference is due to the n.n.
approximation in the VCP calculation, which can be overcome by a straightforward employment of the n.n.n. approximation in the present method.
Besides the simplicity of the definition, the use of VCPs also reduces the number of independent variables in the calculation of chemical potentials. In CVM calculations for an n-component alloy, there
are necessarily many more independent variables, whereas the number is significantly decreased to 2n through the definition of VCPs.
6. Conclusion
Component chemical potentials are easily obtained in analytical forms by virtue of cluster VCPs in ordered alloy phases, instead of the usual numerical calculations from the integral Helmholtz
energy. The example calculation based on the pair quasichemical approximation is compared with the CVM calculation with a four-point cluster in the same approximation, illustrating the simplicity of the method.
Furthermore, the use of VCPs benefits direct comparisons with simulation results, in which systems are always restricted to a fixed number of total sites.
The support of the Ministry of Science and Technology of China under Grant no. 2006CB605104 and the Natural Science Foundation of China under Grant no. 50631030 is gratefully acknowledged. W. A.
Oates wishes to acknowledge support for a short visit to the Institute of Metal Research, CAS, Shenyang, China.
1. F. C. Larché and J. W. Cahn, “The interactions of composition and stress in crystalline solids,” Acta Metallurgica, vol. 33, no. 3, pp. 331–357, 1985. View at Scopus
2. G. S. Rushbrooke, Introduction to Statistical Mechanics, Clarendon Press, Oxford, UK, 1949.
3. E. A. Guggenheim, Mixtures, Clarendon Press, Oxford, UK, 1952.
4. E. A. Guggenheim and M. L. McGlashan, “Approximations relating to strictly regular mixtures,” Molecular Physics, vol. 5, no. 13, pp. 433–445, 1962.
5. E. A. Guggenheim and M. L. McGlashan, “Generalization of quasiehemical formulas,” The Journal of Chemical Physics, vol. 42, no. 7, pp. 2544–2547, 1965. View at Scopus
6. C. Wagner and W. Schottky, “Theorie der geordneten Mischphasen,” Zeitschrift für Physikalische Chemie B, vol. 11, p. 163, 1930.
7. W. Schottky, in Halbleiter Probleme, W. Schottky, Ed., vol. 4, pp. 235–268, Fr. Vieweg und Sohn, Braunschweig, Germany, 1958.
8. F. A. Kröger, The Chemistry of Imperfect Crystals, North-Holland, Amsterdam, The Netherlands, 1973.
9. R. Kikuchi, “Superposition approximation and natural iteration calculation in cluster-variation method,” Journal of Chemical Physics, vol. 60, p. 1071, 1974. | {"url":"http://www.hindawi.com/journals/jther/2011/874979/","timestamp":"2014-04-16T22:18:34Z","content_type":null,"content_length":"161957","record_id":"<urn:uuid:13dc1fd6-de80-4cfc-8773-66495df052d1>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Intermediate Algebra (with CengageNOW, TLE Labs, Personal Tutor Printed Access Card)
ISBN: 9780495117940 | 0495117943
Edition: 8th
Format: Hardcover
Publisher: Cengage Learning
Pub. Date: 1/5/2007
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/bk-detail?isbn=9780495117940","timestamp":"2014-04-18T06:47:40Z","content_type":null,"content_length":"34879","record_id":"<urn:uuid:597de0fe-1786-483c-b4df-1beb806b53d9>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number, Number Sense, and Operations 1 First Grade- Number, Number Sense and Operations Standard Students demonstrate number sense, including and understanding of number ...
4th Grade Math Tutorial Activity
4th Grade Math Tutorial Activity . Objective: Read and write numbers less than one million using standard and expanded notation. Materials: Cards showing numbers in ...
Joan A. Cotter 2001 i We have been hearing for years that Japanese students do better than U.S. students in math in Japan. The Asian students are ahead by the middle of ...
Creative Teaching Press - Math Minutes, 4th Grade
Math Minutes, 4th Grade ... 100 Minutes to Better Basic Skills. Each book in this series features 100 Minutes to help students build basic skills, increase speed in math ...
3rd/4th grade : Place value and expanded notation Quiz
Quiz *Theme/Title: Place value and expanded notation * Description/Instructions ; This quiz will focus on place value. Identify which place value a given number is.
Expanded notation 4rd grade lesson plans :: jagadodharana music ...
Expanded notation 4rd grade lesson plans. calculator for fractions into decimal notations, expanded notation 4rd grade lesson plans, jig notation.
Fourth Grade
Fourth Grade - Table of Contents
Chapter 8 - Assistive Technology for Mathematics
Chapter 8 - Assistive Technology for Mathematics Assessing Students Needs for Assistive Technology (2009) 1 Assistive Technology for Mathematics Marcia Obukowicz, OTR ...
Focal Points
508 Teaching Children Mathematics / May 2008 Sybilla Beckmann, sybilla@math.uga.edu, teaches at the University of Georgia in Athens, Georgia 30602.
Whole Number Multiplication
1 Review of Mathematical Soundness W. Stephen Wilson, Ph.D. A few standards have been chosen for close scrutiny of their mathematical development.
Expanded Notation Math Learning Center - Brooke Beverly ...
Sailing Through Expanded Notation: There are 15 boats with numbers written in either standard notation or expanded form. Students use the recording
expanded notation worksheet for 7th grade
Bing visitors found us today by entering these math terms: printable equation for first grade ; solving addition equations with a negative number
Expanded Notation and Scientific Notation
Curriculum Tie: Educational Technology (Grades 6-8) Standard 8 ; Science 6th Grade Standard 3 Objective 1 : Summary: This activity will help students to learn about expanded ...
Algorithms in Everyday Mathematics
Algorithms in Everyday Mathematics Algorithms in School Mathematics..... 1 ...
The Revision of Investigations in Number, Data and Space
9/15/08, 3 TERC, 2007 Page 1 The Revision of Investigations in Number, Data and Space Susan Jo Russell, Education Research Collaborative at TERC Introduction The 1 st edition ...
Order of operationsand other oddities in school mathematics
Order of operationsand other oddities in school mathematics H. Wu September 13,2007 One of the awsoftheschool mathematics curriculum is that it wastes time in fruitless ...
Bronx PS 6 X West Farms Grades: PreK-6 Student Population: 919 District: 12 PS 6 X and arts partner Learning through an Expanded Arts Program (LEAP) have created a series of ...
5th grade math expanded notation eBook Downloads
5th grade math expanded notation free PDF ebook downloads. eBooks and manuals for Business, Education,Finance, Inspirational, Novel, Religion, Social, Sports, Science ...
Dodson Elementary School
4355 Houston Drive Reno, NV 89502 (775) 689-2530 Fax (775) 689-2531 Kristell Moller, M.A., Principal James Hager, Ph.D., Superintendent 2001-02 Accountability Report ...
Pool Party Expanded Notation Math Center - Shelley Gray ...
Students will love practicing expanded notation with this fun summer-themed math center. Students will match the cards and record the expanded notatio
Expanded notation - Math Mojo Homepage
Professor Homunculus answer: Very good question. It is important to know this one. Expanded form (or expanded notation) simply means the number written out in ...
Fourth Grade Math-Scope and Sequence 2009-10 with Sample Ideas and ...
Fourth Grade July 2009 2 Projected Timeline Topic TEKS Concrete Pictorial Abstract Language, Process Generalizations Comments (Cont.) Aug. 24 - Sept 1 Rounding and Estimation 4.5a. | {"url":"http://www.cawnet.org/docid/teaching+expanded+notation+to+fourth+graders/","timestamp":"2014-04-20T05:43:00Z","content_type":null,"content_length":"54560","record_id":"<urn:uuid:093dcf92-b733-4928-8f64-8d8ff9751fac>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00092-ip-10-147-4-33.ec2.internal.warc.gz"} |
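The skill these materials target, writing a number in expanded notation, can be sketched in a few lines (a minimal sketch; the function name is ours):

```python
def expanded_notation(n):
    """Write a positive integer as the sum of its place values,
    e.g. 4305 -> '4000 + 300 + 5'."""
    digits = str(n)
    parts = [d + "0" * (len(digits) - i - 1)
             for i, d in enumerate(digits) if d != "0"]
    return " + ".join(parts)

print(expanded_notation(4305))    # 4000 + 300 + 5
print(expanded_notation(999999))  # 900000 + 90000 + 9000 + 900 + 90 + 9
```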
CU-Boulder FCQ Office: Making Comparisons Across Time and Across Sections
PBA Home > Institutional Research & Analysis > Faculty Course Questionnaire > Section Report Guide > Making Comparisons Across Time and Across Sections
Making Comparisons Across Time and Across Sections
To compare one section to another (or to compare ratings on different questions for one section), follow these general guidelines:
• Because the FCQ score scale and items changed significantly in fall 2006, we do not recommend making comparisons between former (prior to fall 2006) and current FCQ results.
• Are the two sections similar enough to warrant comparison? Differences in discipline, activity type (lecture, lab, seminar), level (especially graduate vs. undergraduate), class size, and other
factors can make comparisons tricky.
• Is the return rate sufficiently high? Be cautious if it's under 70%.
• Do fewer than 90% of returned forms contain ratings on the question you're looking at? This is rare except for the diversity items, but warrants caution when it does occur.
• Check the section standard deviations (SDs). If these are unusually high (over 1.10 on the former FCQ), then the section average probably does not accurately reflect a "typical" or "consensus"
response. In this case comparing averages is not legitimate.
If all checks are OK:
• A good way to estimate the meaningfulness of a difference is to convert the difference to standard deviation units. To compare the average rating of Section 2 to the average of Section 1, for
example, subtract one average from the other and divide the difference by the standard deviation of Section 1. If the result is .8 or greater (or -.8 or less), the difference is typically
considered rather large. See effect size for more information.
• A statistical significance test can also provide information about the magnitude of a difference between two mean scores. One appropriate statistic for the difference between mean scores of two
sections is called a "t test," which gives a "t value" and a "p value." The t value is typically of little interest in and of itself; the associated p value ("p" stands for "probability") is the
important number in evaluating statistical significance.
• Below we'll explain what a p value is, and how to interpret it. There's also a link to an Excel file you can download that will calculate t and p values when you enter section averages, standard
deviations, and Ns. First, however, it is important to emphasize what a statistical significance test does not tell you. The most important caveat is this: Whether or not a difference is
statistically significant has absolutely no bearing on whether it has any practical or educational or evaluative significance; such questions are matters of informed judgment, not statistical
significance tests.
• With that caveat in mind, here's what a p value, which is the direct measure of statistical significance, does mean. Imagine a situation in which you know that the difference between two average
scores is due to chance alone. This would be the case if, for example, you had a set of ratings from a single section of 100 students and you randomly divided them into two samples of 50 ratings
each and compared the averages (or, for that matter, if you had a jar full of 100 balls with the numbers 0-4 on them and drew two random samples of 50 balls each from the jar). It's likely that
the averages wouldn't be exactly the same - one set of 50 scores might average 3.52, for example, while the other averaged 3.73 -- but since they were randomly drawn from the same section, we
know that the difference is clearly due to chance alone. The p value tells you the probability that a given difference in averages would be obtained in that situation, in which there is no "real" difference.
• So let's say you have actual averages from two different sections you want to compare - say they're 3.52 and 3.73, for a difference of 0.21 -- and you find that when you perform a t test, the
associated p value is .17. That means that given the averages (and standard deviations and Ns) of these two distributions of scores, a difference between them as large or larger than 0.21 would
occur 17% of the time even if you were simply drawing two samples of scores randomly from a single section.
• Knowing this p value can help you interpret the statistical significance of the difference in scores; by convention, a difference associated with a p value of less than .05 is considered
statistically significant. In other words, a difference is considered statistically significant if it would occur less than 5% of the time under circumstances, such as described above, in which
it is known that random chance is the only explanation for the difference. If the p value is greater than .05, it's considered not statistically significant. Again, this is a matter of
convention, and the .05 dividing line is somewhat arbitrary. And to emphasize again the point made above, a p value does not tell you whether the difference between two section averages has any
educational or evaluative significance.
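Both calculations described above reduce to a few lines of arithmetic. A minimal sketch in Python (the means, SDs, and Ns here are made-up illustrative values; turning a t value into a p value additionally requires the t distribution, which is what the downloadable calculator provides):

```python
import math

def effect_size(mean1, mean2, sd1):
    """Difference between two section averages expressed in SD units."""
    return (mean2 - mean1) / sd1

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample (Welch) t statistic computed from section summary statistics."""
    return (mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

d = effect_size(3.52, 3.73, 0.9)            # ~0.23 SD units: well below the 0.8 "large" mark
t = welch_t(3.52, 0.9, 50, 3.73, 0.9, 50)   # ~-1.17
```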
If, given that caveat, you still want to do a statistical significance test, download the "t calculator." This is an Excel file that will calculate a t and then tell you whether the difference
between section averages is statistically significant when you enter the mean, standard deviation, and N for each of the two sections. | {"url":"http://www.colorado.edu/pba/fcq/stats/time.html","timestamp":"2014-04-18T10:20:29Z","content_type":null,"content_length":"18287","record_id":"<urn:uuid:73cc570a-2ced-4a20-9f00-e3a62355014e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
This example shows you how to create a griddedInterpolant for a gridded data set and then interpolate over a finer grid. We will begin by defining a function that generates values for X and Y input:
generatevalues = @(X,Y)(3*(1-X).^2.*exp(-(X.^2) - (Y+1).^2) ...
- 10*(X/5 - X.^3 - Y.^5).*exp(-X.^2-Y.^2) ...
- 1/3*exp(-(X+1).^2 - Y.^2));
We can create a 2D grid and then pass it to the generatevalues function to produce values at the grid points. The grid is created from a pair of grid vectors as follows:
xgv = -1.5:0.25:1.5;
ygv = -3:0.5:3;
[X,Y] = ndgrid(xgv,ygv);
Now generate the value data:
V = generatevalues(X,Y);
We can create an interpolant for this data set that supports interpolation within the grid. Since the interpolant behaves like a function we will give it the variable name F. The 'cubic' option
specifies cubic interpolation.
F = griddedInterpolant(X, Y, V, 'cubic')
F =
griddedInterpolant with properties:
GridVectors: {[1x13 double] [1x13 double]}
Values: [13x13 double]
Method: 'cubic'
ExtrapolationMethod: 'cubic'
The interpolant F has four properties. The GridVectors are actually the vectors xgv and ygv we used to create the grid. The interpolant stores the grid in the compact form of GridVectors. This can save
memory if the grid is large. GridVectors is a cell array so we can query the contents as follows:
gridvectorprop = F.GridVectors
firstgridvector = F.GridVectors{1}
secondgridvector = F.GridVectors{2}
gridvectorprop =
[1x13 double] [1x13 double]
firstgridvector =
Columns 1 through 7
-1.5000 -1.2500 -1.0000 -0.7500 -0.5000 -0.2500 0
Columns 8 through 13
0.2500 0.5000 0.7500 1.0000 1.2500 1.5000
secondgridvector =
Columns 1 through 7
-3.0000 -2.5000 -2.0000 -1.5000 -1.0000 -0.5000 0
Columns 8 through 13
0.5000 1.0000 1.5000 2.0000 2.5000 3.0000
The values at the grid points are stored in the Values array. You can access the values using standard MATLAB syntax to index into the data. For example to inspect a 4-by-5 interval:
first4x5values = F.Values(1:4, 1:5)
first4x5values =
0.0042 0.0028 0.0452 0.3265 0.3007
-0.0050 -0.0671 -0.1285 0.3923 0.9838
-0.0299 -0.2346 -0.5921 0.1483 1.8559
-0.0752 -0.5260 -1.4478 -0.6798 2.4537
The interpolation technique is represented by the Method property. We selected cubic interpolation and our choice is reflected as follows:
theinterpolationmethod = F.Method
theinterpolationmethod =
We can now create a finer grid and use the interpolant to compute the values at these points. We will call these points the query points (Xq, Yq) to distinguish them from our original sample points.
xqgv = -1.5:0.1:1.5;
yqgv = -3:0.1:3;
[Xq,Yq] = ndgrid(xqgv,yqgv);
We can now evaluate over the refined grid to compute the corresponding values Vq at (Xq, Yq). Since we named our interpolant F, the calling syntax is
Vq = F(Xq, Yq);
We can now generate a plot for comparison with our initial coarse plot.
title('Gridded Data Set', 'fontweight','b');
surf(Xq, Yq, Vq);
title('Gridded Data Set Refined using Cubic Interpolation', 'fontweight','b');
We can query the interpolant at any location within the domain of the grid.
ans =
You can compare this interpolated value with the value generated by the analytical expression.
ans =
You can query the interpolant using an array of query points as opposed to arrays of query coordinates. We can show this by querying at random locations within the grid.
Xq = -1.5 + 3.*rand(5,2);
Vq = F(Xq)
Vq =
Or using alternative syntax:
Vq = F(Xq(:,1), Xq(:,2))
Vq =
The interpolant supports queries within the domain of the grid. If your query point lies outside the domain of the grid the interpolant will return a NaN.
ans =
Selecting a Different Interpolation Method
You can change the interpolation method on-the-fly. For example, if you wish to use a spline method as opposed to cubic you can change it as follows:
F.Method = 'spline'
F =
griddedInterpolant with properties:
GridVectors: {[1x13 double] [1x13 double]}
Values: [13x13 double]
Method: 'spline'
ExtrapolationMethod: 'cubic'
We can reevaluate and plot using the spline interpolation method.
xqgv = -1.5:0.1:1.5;
yqgv = -3:0.1:3;
[Xq,Yq] = ndgrid(xqgv,yqgv);
Vq = F(Xq, Yq);
We can now generate a plot for comparison with our initial coarse plot.
title('Gridded Data Set', 'fontweight','b');
surf(Xq, Yq, Vq);
title('Gridded Data Set Refined using Spline Interpolation', 'fontweight','b');
Interpolating Data in MESHGRID Format
The griddedInterpolant class is designed to work with gridded data that conforms to the NDGRID format. This provides support for grids in general N-dimensions, including 1-D which can be regarded as
a degenerate grid. In contrast, the MESHGRID format can only support grids in 2D and 3D. Both grid types have identical grid point coordinates; the difference is the format of the coordinate arrays.
If you wish to create a griddedInterpolant using MESHGRID data, you will need to convert the data to NDGRID format. In 2D this involves transposing the arrays as the following example shows.
xgv = -1.5:0.25:1.5;
ygv = -3:0.5:3;
[X,Y] = meshgrid(xgv,ygv);
V = generatevalues(X,Y);
To convert the data to NDGRID format apply a transpose
X = X';
Y = Y';
V = V';
We can now create the interpolant
F = griddedInterpolant(X, Y, V)
F =
griddedInterpolant with properties:
GridVectors: {[1x13 double] [1x13 double]}
Values: [13x13 double]
Method: 'linear'
ExtrapolationMethod: 'linear'
Converting 3D MESHGRID data to NDGRID format involves transposing each page of the 3D arrays. This is achieved using the PERMUTE function to interchange the rows (dimension 1) and columns (dimension
2). Here's an example that shows you how:
gv = -3:3;
[X,Y,Z] = meshgrid(gv);
V = X.^2 + Y.^2 + Z.^2;
P = [2 1 3];
X = permute(X,P);
Y = permute(Y,P);
Z = permute(Z,P);
V = permute(V,P);
We can now create the interpolant
F = griddedInterpolant(X, Y, Z, V)
F =
griddedInterpolant with properties:
GridVectors: {1x3 cell}
Values: [7x7x7 double]
Method: 'linear'
ExtrapolationMethod: 'linear'
Likewise, when querying the interpolant using a MESHGRID, improved performance can be achieved by converting to NDGRID format. For example, if we wish to query our interpolant F using a MESHGRID
composed of query points (Xq, Yq, Zq), we could convert the data to NDGRID format as follows:
[Xq, Yq, Zq] = meshgrid(0:0.5:2);
Xq = permute(Xq,P);
Yq = permute(Yq,P);
Zq = permute(Zq,P);
(Xq, Yq, Zq) is now in NDGRID format and can be queried efficiently.
Vq = F(Xq,Yq,Zq);
Interpolating Grids in General Dimensions
The griddedInterpolant class is not restricted to 2 and 3 dimensions. You can create an interpolant for 1D, 4D or higher. In practice, the memory required to represent the data may be the limiting
factor in higher dimensions. This restriction can impact use in relatively low dimensions, less than ten, depending on the number of grid points and available computing power. The following example
illustrates 1D interpolation using the PCHIP interpolation method.
X = 1:6;
V = [16 18 21 17 15 12];
F = griddedInterpolant(X,V,'pchip')
F =
griddedInterpolant with properties:
GridVectors: {[1 2 3 4 5 6]}
Values: [16 18 21 17 15 12]
Method: 'pchip'
ExtrapolationMethod: 'pchip'
We can now evaluate the interpolant over a finer interval.
Xq = 1:0.05:6;
Vq = F(Xq);
Plotting the query points in blue and the interpolated result in red we get:
title('1D Interpolation of a Data Set using the PCHIP Method', 'fontweight','b');
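The callable-interpolant pattern used above is easy to mimic in other languages. A minimal pure-Python stand-in (hypothetical class name, linear rather than PCHIP, queries clamped to the grid rather than extrapolated) looks like:

```python
from bisect import bisect_right

class GridInterpolant1D:
    """Tiny stand-in for a 1-D gridded interpolant: store the grid once,
    then evaluate like a function. Uses linear interpolation."""

    def __init__(self, x, v):
        self.x, self.v = list(x), list(v)

    def __call__(self, xq):
        # Locate the grid interval containing xq (clamped to the grid's edges).
        i = min(max(bisect_right(self.x, xq) - 1, 0), len(self.x) - 2)
        x0, x1 = self.x[i], self.x[i + 1]
        t = (xq - x0) / (x1 - x0)
        return (1 - t) * self.v[i] + t * self.v[i + 1]

F = GridInterpolant1D([1, 2, 3, 4, 5, 6], [16, 18, 21, 17, 15, 12])
F(2.5)  # 19.5, halfway between the samples at x = 2 and x = 3
```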
We can create and query a 4D interpolant as follows:
[X1, X2, X3, X4] = ndgrid(1:6);
V = X1.^2 + X2.^2 + X3.^2 + X4.^2;
F = griddedInterpolant(X1,X2,X3,X4,V)
F =
griddedInterpolant with properties:
GridVectors: {1x4 cell}
Values: [4-D double]
Method: 'linear'
ExtrapolationMethod: 'linear'
Evaluation at a single 4D point
ans =
(1.1)^2 + (2.1)^2 + (3.1)^2 + (4.1)^2
ans =
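Because V here is a sum of per-coordinate squares, multilinear interpolation decomposes into four independent 1-D linear interpolations, so the evaluation is easy to check by hand. A short Python sketch of that check (illustrative only, not part of the MATLAB example):

```python
import math

def lerp_square(x):
    """Linear interpolation of f(x) = x^2 between the integer grid points around x."""
    x0 = math.floor(x)
    return x0**2 + (x - x0) * ((x0 + 1)**2 - x0**2)

query = (1.1, 2.1, 3.1, 4.1)
interpolated = sum(lerp_square(x) for x in query)  # the linear interpolant's value
exact = sum(x**2 for x in query)                   # the analytic value
```

Linear interpolation overshoots a convex function between grid points, which is why the interpolated value comes out slightly larger than the exact one.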
Evaluate at an array of 4D points
Xq = 1 + 5*rand(5,4);
Vq = F(Xq)
Vq =
Interpolating Grids that have Multiple Values at Each Gridpoint
In some applications there may be more than one value associated with each grid point and we may wish to interpolate each value set in turn. For example, if we have a grid representing the pixels in
an image we may have three color intensities (RGB) associated with each grid point. There are two ways to interpolate this data. One approach is to create a separate interpolant for each of the three
data sets. The other approach is to create a single interpolant and replace the values. The following example illustrates the replacement of values using a single interpolant.
xgv = -1.5:0.25:1.5;
ygv = -3:0.5:3;
[X,Y] = ndgrid(xgv,ygv);
% Create two distinct value sets for this grid
V1 = X.^3 - 3*(Y.^2);
V2 = 0.5*(X.^2) - 0.5*(Y.^2);
% Now create an interpolant for the first value set
F = griddedInterpolant(X,Y,V1, 'cubic')
F =
griddedInterpolant with properties:
GridVectors: {[1x13 double] [1x13 double]}
Values: [13x13 double]
Method: 'cubic'
ExtrapolationMethod: 'cubic'
We can evaluate the V1 data set on a refined grid and plot the result
xqgv = -1.5:0.1:1.5;
yqgv = -3:0.1:3;
[Xq,Yq] = ndgrid(xqgv,yqgv);
Vq1 = F(Xq,Yq);
title('Cubic Interpolation of V1 Dataset', 'fontweight','b');
We can reuse the interpolant to interpolate the second dataset by replacing the Values data.
F.Values = V2
F =
griddedInterpolant with properties:
GridVectors: {[1x13 double] [1x13 double]}
Values: [13x13 double]
Method: 'cubic'
ExtrapolationMethod: 'cubic'
Vq2 = F(Xq,Yq);
title('Cubic Interpolation of V2 Dataset', 'fontweight','b');
The griddedInterpolant class handles large data sets relatively efficiently. These data sets may consist of a grid of values generated externally and imported into MATLAB. For example, large 2D or 3D
images scanned by an external source. In addition, such data sets may not have an explicitly defined grid of coordinate arrays. If the dataset is a large 3D image, the introduction of grid coordinate
arrays would quadruple the memory.
The griddedInterpolant class allows you to create an interpolant from the grid of values and a default grid is then deduced from the size of the array. This default grid is defined in terms of grid
vectors - a compact representation of the grid that uses very little memory.
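The memory saving comes from the fact that a grid is fully determined by its per-dimension vectors: the grid points are simply their Cartesian product. A small Python illustration of that relationship (not MATLAB code):

```python
from itertools import product

def full_grid(*grid_vectors):
    """Expand compact grid vectors into the explicit list of grid-point coordinates."""
    return list(product(*grid_vectors))

points = full_grid(range(1, 11), range(1, 11))
len(points)  # 100 grid points represented by just 10 + 10 stored values
```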
To show this, we can use the PEAKS function to generate an array of values and then create an interpolant for this data set as follows:
V = peaks(10);
F = griddedInterpolant(V,'cubic')
F =
griddedInterpolant with properties:
GridVectors: {[1 2 3 4 5 6 7 8 9 10] [1 2 3 4 5 6 7 8 9 10]}
Values: [10x10 double]
Method: 'cubic'
ExtrapolationMethod: 'cubic'
Looking at the GridVectors property we can observe the vectors are deduced from the size of the V array. V is a 10x10 array and the corresponding grid vectors are 1:10 and 1:10.
firstgridvector = F.GridVectors{1}
secondgridvector = F.GridVectors{2}
firstgridvector =
secondgridvector =
We can interpolate over a refined grid to improve resolution and fortunately we do not have to create a full grid to do so. We can evaluate using a pair of grid vectors and we package them within
curly braces { } to communicate this intent. The default grid has a scaling of 1:10 so we will refine to pick up half intervals using 1:0.5:10. The corresponding query value Vq is as follows:
Vq = F({1:0.5:10, 1:0.5:10});
We can now plot the results side by side.
title('Sample Values', 'fontweight','b');
title('Cubic Interpolation using a Compact Grid', 'fontweight','b');
Note: When we plot the surface the SURF function also uses a default grid to produce the plot. In the first plot the values are represented by a 10-by-10 array and the second a 20-by-20. Hence the
0-to-10 and the 0-to-20 scales on the axes.
The Default Grid can be overridden by specifying grid vectors when you create the interpolant. For example, we could have constructed the interpolant as follows:
F = griddedInterpolant({10:19, 20:29}, V,'cubic')
F =
griddedInterpolant with properties:
GridVectors: {[10 11 12 13 14 15 16 17 18 19] [1x10 double]}
Values: [10x10 double]
Method: 'cubic'
ExtrapolationMethod: 'cubic'
Evaluation would follow the same scaling
ans =
The default grid vectors could also have been replaced as follows:
F.GridVectors = {10:19, 20:29}
F =
griddedInterpolant with properties:
GridVectors: {[10 11 12 13 14 15 16 17 18 19] [1x10 double]}
Values: [10x10 double]
Method: 'cubic'
ExtrapolationMethod: 'cubic'
Interpolating a Dataset in a Repeated Manner
In some applications it may be necessary to interpolate the same dataset in a repeated manner. The griddedInterpolant class can generally handle this scenario more efficiently than the INTERP
functions. The griddedInterpolant class is able to reuse data computed during a previous query to speed up the computation of subsequent queries. The following example shows this advantage:
Sample data set
[X, Y, Z] = ndgrid(1:100);
V = X.^2 + Y.^2 + Z.^2;
Performance data for INTERPN
tic
for i = 1:1000
Xq = 100*rand();
Yq = 100*rand();
Zq = 100*rand();
Vq = interpn(X,Y,Z,V,Xq,Yq,Zq,'cubic');
end
interpnTiming = toc
interpnTiming =
Performance data for griddedInterpolant
F= griddedInterpolant(X,Y,Z,V, 'cubic');
tic
for i = 1:1000
Xq = 100*rand();
Yq = 100*rand();
Zq = 100*rand();
Vq = F(Xq,Yq,Zq);
end
griddedInterpolantTiming = toc
griddedInterpolantTiming = | {"url":"http://www.mathworks.nl/help/matlab/examples/grid-based-interpolation.html?s_tid=gn_loc_drop&nocookie=true","timestamp":"2014-04-23T12:46:44Z","content_type":null,"content_length":"51118","record_id":"<urn:uuid:22b996b3-c122-4d20-bd4f-524e8cb63d47>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
What are two qualities that make a right isosceles triangle unique?
1 right angle, 2 congruent sides
2 congruent angles. Now you have 3 to choose from.
One 45º angle and no congruent sides
Two 90º angles and two congruent sides
Two 45º angles and two congruent sides
One 90º and three congruent sides
These are the options ^^^^
Look at choice A. "no congruent sides". That cannot be. There are 2 here. Choice B: two 90 deg angles. 90 + 90 = 180. Since the measures of all angles in any triangle add up to 180, if only two
angles add to 180, you have 0 deg left for the third angle. That can't be. Keep on going through each choice and see if it makes sense or not.
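The same angle-sum check works for every option. A small illustrative script (hypothetical helper name):

```python
def valid_triangle(angles):
    """Three positive angles can form a triangle only if they sum to 180 degrees."""
    return len(angles) == 3 and sum(angles) == 180 and all(a > 0 for a in angles)

valid_triangle([90, 90, 0])   # False: two 90-degree angles leave nothing for the third
valid_triangle([90, 45, 45])  # True: the right isosceles triangle
```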
Thank mathstudent55
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/51c88fafe4b055e613b9b7eb","timestamp":"2014-04-16T07:42:21Z","content_type":null,"content_length":"55020","record_id":"<urn:uuid:ff16e619-dc88-44b9-9c97-8bbd599f3fca>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Branch, NJ Math Tutor
Find a North Branch, NJ Math Tutor
...I participate in NaNoWriMo every year! ** NOTE: I can't travel farther than 10 miles to meet with you, due to an increase in tutees. Sorry! **I got 5s in the following AP tests: Physics B,
Physics C Mechanics, Physics C E&M. I have been designing websites in HTML and CSS for several years. (I ...
26 Subjects: including algebra 2, psychology, literature, drawing
...My BSME involved a course in DE. My MSME is in sound and vibrations with a minor in math. I am specialized in solutions to the Sturm-Liouville equation, which is the basic differential equation for much of engineering.
8 Subjects: including trigonometry, algebra 1, algebra 2, calculus
...I've worked with students of many different backgrounds, including non-native English speakers and students with learning disabilities. I'm able to explain both the most basic concepts and the
most challenging questions with clarity, and I tailor my instruction to each student's personal learnin...
10 Subjects: including SAT math, ACT Math, GMAT, SAT writing
...As an undergraduate, I studied German and Arabic Literature at the University of Chicago. I also took classes in Greek and Spanish and participated actively in the university's theatre groups
and newspapers. After graduation, I taught at the University of Chicago's Laboratory School, a private school with a well-deserved reputation for excellence.
24 Subjects: including SAT math, English, reading, writing
...I have experience tutoring a variety of subjects at the elementary, middle school as well as at the high school level especially teaching math including Pre-Algebra, Algebra I, ALGEBRA II,
Geometry, SAT, ACT, GED, HSPA, etc. I am enthusiastic and passionate about Mathematics and teaching. I absolutely love kids and love helping people learn.
10 Subjects: including algebra 2, geometry, precalculus, trigonometry
Related North Branch, NJ Tutors
North Branch, NJ Accounting Tutors
North Branch, NJ ACT Tutors
North Branch, NJ Algebra Tutors
North Branch, NJ Algebra 2 Tutors
North Branch, NJ Calculus Tutors
North Branch, NJ Geometry Tutors
North Branch, NJ Math Tutors
North Branch, NJ Prealgebra Tutors
North Branch, NJ Precalculus Tutors
North Branch, NJ SAT Tutors
North Branch, NJ SAT Math Tutors
North Branch, NJ Science Tutors
North Branch, NJ Statistics Tutors
North Branch, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Branchburg, NJ Math Tutors
Convent Station, NJ Math Tutors
East Millstone, NJ Math Tutors
Finderne, NJ Math Tutors
Greystone Park, NJ Math Tutors
Kingston, NJ Math Tutors
Lower Montville, NJ Math Tutors
Middlebush, NJ Math Tutors
Monroe, NJ Math Tutors
Pluckemin Math Tutors
Readington Math Tutors
Rosedale, NJ Math Tutors
South Branch, NJ Math Tutors
Tabor, NJ Math Tutors
Willow Grove, NJ Math Tutors | {"url":"http://www.purplemath.com/North_Branch_NJ_Math_tutors.php","timestamp":"2014-04-21T15:15:15Z","content_type":null,"content_length":"24187","record_id":"<urn:uuid:e1d1566c-2fea-4be0-8198-4582b9e058d3>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
stuck on a permutations question - A clerk at a bookstore is restocking a shelf of best-selling novels. He has 5 copies each of 3 different novels. How many different ways can he arrange the books on
the shelf?
figured it out: 15! / (5! 5! 5!) with cancellation
It depends on how many books fit the shelf. If he can fit all of them on the shelf, then this is the same as the famous Mississippi problem, only instead of letters you have types of books. If he
can only fit say, 3 books on a shelf, then the problem is somewhat more complicated.
i think 6 he can arrange them 6 different ways i think
Imagine all 15 books were unique: you would then have 15! different orderings. For each arrangement, though, there are three sets of 5! equivalencies once you take into account the 3 titles with 5 identical copies each. So the answer is 15!/(5!5!5!).
If n given things can be divided into c classes of alike things differing from class to class, then the number of permutations of these things taken all at a time is: \[\frac{n!}{n_{1}!\,n_{2}!\cdots n_{c}!}\] where \[n_{1}+n_{2}+\cdots+n_{c}=n\] So @Litovel has the correct answer.
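The count is quick to verify numerically (an illustrative check):

```python
from math import factorial

def multiset_arrangements(*counts):
    """Distinct orderings of n items, where counts gives the copies of each kind."""
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

multiset_arrangements(5, 5, 5)  # 756756 shelf arrangements of the 15 books
```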
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50b3e679e4b09749ccad09ec","timestamp":"2014-04-16T04:13:14Z","content_type":null,"content_length":"40279","record_id":"<urn:uuid:be52a614-bf79-4ba9-8ea4-cbe90c1f3025>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00148-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pressure levels intersecting the surface
Many modern atmospheric numerical models use terrain following vertical coordinates, meaning that the pressure of the lowest model level tracks the topography and does not intersect the surface.
ERA40 and NCEP reanalyses have produced pressure level data extrapolated downward beneath the Earth's surface. The result is that for 850, 925 and 1000 mb levels etc, continuous grids are available.
Previous versions of GEOS models and assimilation systems have not extrapolated data beneath the surface, favoring to provide undefined values when the surface pressure is lower than a given pressure
For instantaneous analyses, comparing GEOS5 pressure levels to other reanalyses would be straight forward, once the undefined value is considered. However, monthly averages pose a problem. There are
some regions and pressure levels where the number of valid values may be available for a fraction of the times. If all valid values of GEOS5 are averaged and reported, the average would not be
representative or comparable to NCEP or ERA40 reanalyses which made averages of all times.
Figure 1 850 mb temperature RMS error between GEOS5 and NCEP analyses for different criteria of the sampling of missing data in the GEOS5 time series. At the left of the graphs, lower criteria
allow undersampling of the monthly time series to be compared with NCEP complete monthly mean. Far right, rejects points that have missing data in the time series, so there are fewer data points,
but the comparisons to NCEP are more completely sampled. (Click figure to enlarge)
This can lead to an increase in the squared error and systematic bias between GEOS5 and other reanalyses because of the temporal sampling at the edges of topography. This is also noticeable in global
and regional map comparisons. We computed global monthly averages testing a range of criteria for rejecting a monthly average. The criteria are applied at each grid point and are based on the
percentage of valid data over the month. In Figure 1, on the far left, if data are valid only 1% of the time during a month, a valid monthly mean value is saved. Moving right, at 20%, a grid point
with valid data 20% of the month produces a monthly mean (grid points with fewer than 20% valid data are reported as undefined). At the farthest right, the strictest criterion requires that, for each gridbox to produce a monthly average, it must have valid data 100% of the time. The two figures are global land only and North America (20-70, -170- -60). At higher pressure, there are more points affected by
sub-sampling, and the errors are most noticeable in these large area averages. For higher altitudes, the large scale error drops slowly for criteria greater than 20% (more points valid 100% of the
Figure 2 Comparison between GEOS5 and NCEP for different criteria, and a map of the sampling percentage. At 20% criteria (data is valid only 20% of the month) large differences are apparent.
These are reduced at 80%. At 100% the data should be showing only differences between full monthly averages, no effect of sampling. There are some artifacts because these figures have
interpolated NCEP to the GEOS5 ½degree resolution. Differences near topography can be significant and misleading (to one not knowing about the character of the data). (Click figure to enlarge)
To address this issue in the monthly mean MERRA products, a monthly mean is included only when the count of valid data exceeds a threshold of 20% of the month. Otherwise, the monthly mean value is reported as undefined. This low threshold was chosen to provide as much information as possible.
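The thresholding rule can be sketched as a simple per-gridpoint filter (a hypothetical illustration, not the actual MERRA processing code; whether exactly 20% counts as valid is a boundary detail assumed here):

```python
def monthly_mean(samples, min_valid_frac=0.2):
    """Mean of one grid point's monthly time series, or None (undefined)
    when fewer than min_valid_frac of the samples are valid (non-None)."""
    valid = [v for v in samples if v is not None]
    if len(valid) < min_valid_frac * len(samples):
        return None
    return sum(valid) / len(valid)

monthly_mean([280.0, 282.0] + [None] * 8)  # 281.0 (20% valid: kept)
monthly_mean([280.0] + [None] * 9)         # None  (10% valid: undefined)
```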
One difficulty that may arise is the lack of a 1000-500 mb thickness diagnostic. This was produced in some previous versions of GEOS5. However, in revising the pressure level interpolation code for
MERRA, the calculation of 1000 mb height has been left out, and so, 1000-500 mb height is not available. Also, consider that the 1000 mb analyses will have undefined data over large areas of the
globe (land and ocean). Lowest-model-level data are also available and may be suitable for some purposes instead of the 1000 mb level.
M. Bosilovich | {"url":"http://gmao.gsfc.nasa.gov/research/merra/pressure_surface.php","timestamp":"2014-04-17T06:51:54Z","content_type":null,"content_length":"17218","record_id":"<urn:uuid:c5d8b608-b0ca-4781-82b1-53543360bed2>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00493-ip-10-147-4-33.ec2.internal.warc.gz"} |
Developers:
Sharon J. Kelly, M.Ed., Upper Dublin Schools, Ft. Washington, PA
Andrew L. Meyer, Ph.D., Senior Scientist, Rohm and Haas Company, Spring House, PA
Grade Levels:
Primary:
This unit is a series of four experiments that allows primary students to use the study of density to learn the scientific process. The children will have an opportunity to
manipulate the materials and measures themselves, record data, and draw conclusions while working in whole group situations with teacher direction and/or in small cooperative
groups. They will record their data on a rank order line where they will work on the math concepts of greater than, less than, and equal to. They will write up their conclusions
using correct scientific language and format. They will draw a concluding picture of how the density of materials affects their lives. Questions and ideas need to be encouraged and
seen as both an opportunity to teach and as a springboard for further investigation. "Let's find out" is the optimal phrase.
Fourth and Fifth Grades:
This unit could be used as an opportunity to reinforce the math skill of division. By using a scale or a primary balance with weights and by measuring volume, the students can
obtain the numerical values needed to calculate the bulk density of materials. In another of the experiments, the students obtain the data needed to determine the absolute density.
For example, if 30 ml of corn syrup weighed 36 g, the absolute density of corn syrup is calculated by 36 g / 30 ml = 1.20 g/ml.
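The division in that example can be written out directly; the water comparison below is an assumed illustration based on the roughly 124 ml baby food jars mentioned later in the experimental notes:

```python
def absolute_density(mass_g, volume_ml):
    """Density as the ratio of mass to volume, in g/ml."""
    return mass_g / volume_ml

# The corn-syrup example from the text: 36 g in 30 ml.
print(absolute_density(36, 30))    # 1.2 g/ml
# Assumed comparison: water, at about 124 g filling a 124 ml jar.
print(absolute_density(124, 124))  # 1.0 g/ml
```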
Older students could also write up the experiments and their conclusions whereas the younger students would be dependent on oral discussion to express their ideas.
Disciplines: General Science, Language Arts, Math
Student Goals: On completion of this unit, the student will have:
1. Understood that science is a tool we use to make sense of what we observe and to gain new knowledge.
2. Learned the elements of the scientific method, i.e., careful measurements and observation, changing one variable at a time, generating and testing hypotheses, and repeatability.
3. Learned to record results.
4. Learned to draw inferences and connect his knowledge to his world.
5. Understood that the scientific process is something he can do.
Student Objectives: On completion of this unit, the student will be able to:
1. Explain orally the relationship between density, weight, and volume.
2. Explain by way of example the difference between bulk and relative density.
3. Explain orally how the relative density of a material determines whether it sinks or floats.
4. Make hypotheses about the density of materials.
5. Work independently or in a small group to experimentally obtain data on the density of self selected materials.
Background: Density is defined as the ratio of the mass of a material to its volume. It may be understood in three senses:
1. Absolute Density: the density of a material in its closest packed form.
2. Relative Density: the density of a material relative to another material, commonly water.
3. Bulk density: the average density of a material which consists of individual macroscopic particles, i.e. not atoms or molecules. Bulk density may change with the degree of
compaction, e.g. freshly fallen snow vs. a packed snow ball.
For most purposes the absolute density of a liquid is equal to the relative density of that liquid compared to water.
For a given volume, the weight of a material is directly proportional to its density. Thus, a more dense material is heavier for a given volume. In other words, a more dense
material weighs more for a given volume than does a less dense material.
For a constant weight, the volume of a material is inversely proportional to its density. Thus a more dense material occupies less volume than does the same weight of a less dense material.
Introduction of These Concepts to the Students:
1. Teach the concept of volume: the space that something occupies. Use a balloon to demonstrate that the volume of an empty balloon is less than the volume when it is blown up.
2. Show the students that they are already aware of bulk density. For example:
Act out the snowflake example.
Get 10 sheets of paper, crumple 5. Compare the volumes of the crumpled paper to the flat paper.
Weigh a given volume of Rice Crispies and then measure the weight and volume after crushing them.
Weigh and compare the volume of popcorn before and after popping. (Then eat it.)
These examples should lead the children to conclude that the same material has a greater density when its parts are close together than when they are far apart.
3. The concept of relative density will be introduced when the students observe that materials like rice, which were observed to have a bulk density less than water, actually sink
in water. They should either conclude on their own or will be told that the low bulk density occurs because there is air between the particles. (Bulk density is actually the
average density; it is the sum of the weight of the particles and of the air between them divided by the volume.)
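That parenthetical can be made concrete with a small sketch. The grain density and packing fraction below are assumed illustrative numbers, not measurements from the unit; the point is only that a jarful of rice can average out less dense than water even though each grain is denser:

```python
WATER_DENSITY = 1.00        # g/ml

def bulk_density(particle_mass_g, total_volume_ml):
    """Average density of a packed material: the weight of the particles
    (the air between them weighs essentially nothing) divided by the
    whole occupied volume."""
    return particle_mass_g / total_volume_ml

grain_density = 1.45        # g/ml for a single grain (assumed value)
packing_fraction = 0.60     # fraction of the jar actually filled by grains (assumed)
jar_volume = 124            # ml, the baby food jar quoted in the notes
grain_mass = grain_density * packing_fraction * jar_volume

rice_bulk = bulk_density(grain_mass, jar_volume)
print(rice_bulk < WATER_DENSITY)       # True: the jarful averages below water...
print(grain_density > WATER_DENSITY)   # True: ...yet each grain is denser and sinks
```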
Experimental Notes:
It is critical that the volume of the materials used for experiments 1, 2, and 4 be the same for each determination. Any measuring device may be used so long as the volume can be
controlled to be the same for all experiments. We are recommending baby food jars because they are readily available and in our experience are relatively uniform in volume; our jars
averaged 124 +/- 1 ml. In each case care must be taken to fill the jars completely to the top. (Of course, materials whose particles are large, like stones or marbles, will not allow
complete filling of the jars.)
When working with primary students, their inability to accurately measure materials and read a calibrated cylinder will result in data that would be found incorrect if more
sophisticated methods were used by more sophisticated students. They will not be able to measure the small difference in water and cooking oil. The concept of the density of one
material being similar to another material is acceptable for our purposes in the primary grades.
If using a double pan balance, we recommend pennies as the weights because they are uniform in weight. Moreover, the weight of each penny is relatively small allowing small
increments to be measured.
Experiment 1: Ranking Substances by Density
Goals: 1. The students will learn the experimental technique.
2. The students will become active scientists.
3. The students will understand that the study of science requires precision and repeatability.
Objectives: 1. The students will learn the experimental techniques necessary for Experiment 2:
a. The students will use a primary balance or a scale correctly.
b. The students will transfer the materials completely.
3. The students will record data by using the rank order line for primary grades or by tabulating the calculated densities when the students are capable of division.
4. The students will be actively involved in the experiment.
5. The students will cooperate with others to complete the scientific tasks.
Materials: baby food jars
primary balance
pennies used as weights
jars filled with water, sand, corn syrup, Rice Crispies, ground coffee
colored paper in the shape of baby food jars
white paper in the shape of baby food jars
rank order line made by the teacher
Procedure: 1. Estimate the order of density of the substances:
A. With all the jars marked as to their contents, pass the jars around to the children and ask them to make a judgment about their relative densities. Encourage the children to
use the terms greater than, less than, and equal to, recording their judgments on the colored jar shapes and placing them on the rank order line.
2. Test the student's judgments by weighing the contents of the jars:
With teacher supervision:
A. Have the students weigh the jars using either a primary balance or a scale.
B. Have the students record the weight or the number of pennies on a white jar shape and place it on a rank order line. White jar shapes are used to distinguish data determined
by measurement from data determined by estimation or hypothesis which was recorded using the colored jar shapes.
C. Ask the students if they have drawn any conclusions or want to make a hypothesis about their observations. Keep in mind that some conclusions are also hypotheses which may
need to be tested. (A hypothesis is an unproved statement which accounts for the observations or conclusions.)
D. Record conclusions and/or hypotheses for future reference.
3. Group ranking of additional materials:
A. Divide the class into groups.
B. Give each group a jar containing one of the initial test materials. Also give each group jars containing 3 additional materials. Each group should get the same 3 additional materials.
C. Have the students repeat Step 2 in their groups.
D. Bring the class together and report and record their results on the rank order line. All groups should get approximately the same results if the differences in densities
between materials are not too small to measure. The teacher could ask the students to make hypotheses as to why their results differed. For older students it would be good to
emphasize that repeatability is the hallmark of good science and they should be able to duplicate their results.
F. Ask the class to suggest ways to solve the problem of how to rank materials whose density is similar. Good ways might be to measure them using weights with a smaller
increments between them such as plastic poker chips or matches, or to use larger amounts of the materials so that the incremental difference in weights would be a smaller
fraction of the total weight. Discuss the merits of other ideas and test them if time allows. Younger students may be satisfied to know that the density of some material is
equal to the density of another.
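The suggestion in step F can be quantified: the balance resolves weight in steps of one unit weight, so what matters is that step as a fraction of the total. The penny and match weights below are assumed round numbers for illustration:

```python
def fractional_resolution(increment_g, total_g):
    """Smallest measurable weight step as a fraction of the total weight."""
    return increment_g / total_g

PENNY = 2.5    # g per penny (assumed round figure)
MATCH = 0.1    # g per match (assumed round figure)

sample = 125.0  # g of material on the pan
print(fractional_resolution(PENNY, sample))      # 0.02 -> 2% steps
print(fractional_resolution(MATCH, sample))      # much finer steps
# Using a larger amount of material also shrinks the fractional step:
print(fractional_resolution(PENNY, 2 * sample))  # 0.01
```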
Experiment 2: Further Ranking Substances by Density
Objectives: 1. To further involve the students in recognizing the concept of density in their everyday experience.
Materials: Have the students bring in materials of their choosing from home in filled jars. Other materials to be included are rice, cooking oil, and wood. (These materials and their density will be
needed for Experiment 3 also.)
Procedure: 1. Have the students estimate the rank order of their materials.
A. Circulate some of the jars containing materials from Experiment 1 and the jars containing materials the students brought in.
B. Have the children write a sentence or two predicting where their materials would be positioned on the rank order line and why. Remind them to think of their conclusions or
hypotheses from Experiment 1. This could be done individually or in small groups.
2. Rank the materials by density.
A. Repeat steps 2 A, B, and C from Experiment 1.
B. Ask the children to discuss why their prediction was close or not to the observed ranking.
Outcomes from Experiments 1 and 2
In these two experiments the students have made subjective judgments about the rank order of several materials by density and have tested whether these judgments were borne out by measurement.
They have learned the usefulness of recording their results.
They have been asked to draw conclusions and make hypotheses based on the observations from Experiment 1 which were or could be tested in Experiment 2.
Experiment 3: The Inverse Relationship of Volume and Density
Goals: 1. The student will observe that if the weight of several materials is held constant, the volume will be found to be greater for less dense substances.
2. The students will discover the concept of relative density.
Objectives: 1. The students will be able to provide reasons orally or in writing for the rank order established by testing.
Background: In this experiment the students will be learning another way of evaluating or ranking materials by density. By placing solid materials in water only the volume taken up by the
particles is measured. The effect of the air between the particles is eliminated. This will allow the students to recognize the differences that will lead to the concept of relative
density. Once this concept is understood, it will allow them to better understand their world by understanding the principle which explains why things float or sink in water.
Materials: primary balance or scale
100 ml. plastic graduated cylinders
selected materials from Experiment 1 and 2 or other appropriate materials.
These should include solids that float such as wood or butter and at least one liquid that floats such as cooking oil. Rice should also be included. (See below.)
pennies for weights
Procedure: 1. Select an arbitrary weight to be used for the experiment. This should be large enough so that materials with large, heavy particles can be used and small enough so that the amount
of the smaller, less dense particles is not too great to put into a cylinder.
2. Have the students predict whether the materials used will float or sink, using the chart in the appendix to record their prediction. The chart will be completed as the experiment proceeds.
3. Put about 40 ml of water in the plastic graduated cylinder. You may want to mark that level with tape so the students can clearly see the beginning water level. Younger students
may have difficulty reading small increment marks.
4. If a primary balance is used, balance an empty jar with pennies or other suitable weights. In the case of a scale weigh the empty jar; the teacher will then have to make the
children understand that the weight of the material is the difference between the weight of the jar with its contents and the weight of the empty jar.
5. Add the amount of each material to the jar until it balances or until it has the required weight.
6. Place the contents of the jar into the cylinder and measure the increase in the volume of the water. If the material floats, it will be necessary to use a rod or some other
device to push it just under the surface of the water, taking care not to displace additional water with the rod.
For liquids, be they less dense or more dense than water, the water displacement is superfluous because their volume could be measured directly in the cylinder. However it is
recommended to use the same experimental method for all the determinations so as not to confuse the students.
Older students might not be confused by using different procedures to get the volume. For them it could be instructive to measure the volume of the liquids by both methods to
show that the results are the same. Likewise, they should get the same numerical value for the density of the liquids that they determined in Experiments 1 and 2.
7. Record the number of ml of water displaced on a white cut out jar and place it on the rank order line.
8. Repeat the procedure with the remaining materials completing the chart after each material.
9. With all the students able to see the rank order line from Experiment 1 and 2 and the rank order line from Experiment 3, discuss the similarities and differences and why they
may have occurred.
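A hypothetical run of this procedure can be sketched as follows. The fixed weight and the displacement readings are invented for illustration (they are not data from the unit); the ranking logic is the point — at constant weight, the densest material displaces the least water:

```python
def density_from_displacement(mass_g, displaced_ml):
    """At a fixed weight, density is the weight divided by the volume of
    water displaced, so density is inversely proportional to that volume."""
    return mass_g / displaced_ml

FIXED_MASS = 50.0   # g, the arbitrary weight chosen in step 1 (assumed)

# Assumed displacement readings, in ml, for some of the unit's materials:
displaced = {"corn syrup": 36, "sand": 19, "wood": 77, "cooking oil": 54}

ranked = sorted(displaced,
                key=lambda m: density_from_displacement(FIXED_MASS, displaced[m]),
                reverse=True)
print(ranked)   # densest first: the material that displaced the least water
```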
This experiment can be thought of as testing the hypothesis: If popped popcorn is less dense than the same weight of unpopped corn, then its volume should be greater.
Plastic graduated cylinders are recommended so the students can handle them safely. They are inexpensive and available from scientific supply houses. Other measuring devices could
be substituted so long as they have narrowly spaced gradations.
In this experiment students should conclude that materials less dense than water will float on it. But they also will observe that rice which had a bulk density less than water
sinks. This observation is an indication that the students' current understanding of density is incomplete and that their "theory" must be modified to incorporate the fact that some
materials which seem to be less dense than water sink. In this way they can come to an understanding of the concept of relative density. Perhaps they will independently come up with
the idea that the air between the particles makes the bulk material appear less dense than the particles that make it up.
Experiment 4: Using Flotation as a Way of Evaluating the Density of Materials
Background: The greater the weight of the cargo, the lower a boat floats in water. Thus the depth to which a floating container sinks in water is a measure of the weight in the container. This
idea can be used in place of, or in addition to, a balance and weights to rank the weight of a given volume of a material.
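For a straight-sided inner bottle, the background idea is Archimedes' principle: the bottle sinks until the displaced water weighs as much as the loaded bottle. A sketch with an assumed cross-section and bottle weight:

```python
WATER_DENSITY = 1.0   # g/cm^3

def float_depth_cm(total_mass_g, cross_section_cm2):
    """Depth at which a straight-sided container floats: the displaced water
    (depth * area * water density) must weigh as much as the container."""
    return total_mass_g / (WATER_DENSITY * cross_section_cm2)

AREA = 20.0     # cm^2, assumed cross-section of the inner bottle
BOTTLE = 30.0   # g, assumed weight of the empty bottle plus ballast

# The same jar volume of a denser cargo sinks the bottle further:
print(float_depth_cm(BOTTLE + 124, AREA))  # 7.7 cm with water-dense cargo
print(float_depth_cm(BOTTLE + 62, AREA))   # 4.6 cm with cargo half as dense
```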
Objective: 1. The children will discover another way of determining the rank order of materials by density.
Materials: a 1 or 2 liter transparent soda bottle with the neck cut off
many smaller-diameter plastic bottles whose lengths are much greater than their widths or diameters and which fit inside the soda bottle without too much friction. A scale in
centimeters or quarter inches should be marked on the side of these bottles. (See drawing below.)
test materials from Experiment 1 and 2
baby food jars from Experiment 1 and 2
pennies or other suitable ballast to keep the inner bottle floating upright if necessary
rank order line
jar shapes for recording data
Procedure: Review with the students what they learned about density when they used the balance. Discuss that now they are going to use another method of determining the rank order of
materials. As an introduction, the children could be asked to discuss how the weight of the material in a boat affects how it floats.
With the students doing the work, but the teacher directing the experiment:
1. Fill the soda bottle half full of water.
2. Have the children predict the order of the materials based on their previous experiments and record their predictions using colored cut outs as before.
3. Place the contents of the baby food jar into the inner container.
4. Float the inner container in the soda bottle. Observe how low it floats based on the markings on the side of the inner container.
5. Have the children record the observation on white cut outs and place them on the rank order line.
6. Alternatively, after a few demonstrations, the children can be divided into groups. They would be instructed to make the measurements on several materials themselves and then
report back to the whole group later.
Conclusion: Bring the children together with the three rank order lines and ask them what conclusions they can draw from the experiments about the weight of the materials and their densities.
They should be asked to compare the rank order of the materials as determined in Experiment 1 and 2 and in Experiment 4. They should see that the rank order is the same and conclude
that this method also confirms the results of Experiments 1 and 2.
What did they learn from Experiment 3? Help them see the inverse order of the materials on the rank order line. The densest material, the one that required the least volume to
balance the assigned weight, took up the least amount of space when put into the water.
The children should be asked to think about how this knowledge of density could be useful in their lives. Examples might be deciding how full they can fill their cereal bowls with
cereal that is more or less dense than milk. Or how to pack their suitcase to get the most clothes in it. Younger children might enjoy drawing a picture depicting how the density of
a material affects their lives.
Extensions: 1. Test different kinds of cereals to see which are more dense or less dense by adding them to milk. Have the children bring in many different kinds of cereals.
2. Giving each student or each group of students a ball of clay or a piece of aluminum foil, have them form a boat that will float when an equal amount of a designated material is
put in it. Have a big tub of water and allow the students to learn by trial and error. When they have built a boat that floats with the material in it, have them draw a picture
of it. Tell them to take care to get the length and width measured exactly. When all the pictures have been drawn, discuss as a group what kind of boat was needed to float the
material. Have the children predict what kind of boat would be needed to carry a heavier material or a lighter material. They could make those boats as time allowed.
Program by Special Session
Joint Mathematics Meetings Program by Special Session
Current as of Wednesday, January 16, 2008 00:26:54
Program | Deadlines | Timetable | Inquiries: meet@ams.org
Joint Mathematics Meetings
San Diego, CA, January 6-9, 2008 (Sunday - Wednesday)
Meeting #1035
Associate secretaries:
Michel L Lapidus, AMS lapidus@math.ucr.edu, lapidus@mathserv.ucr.edu
James J Tattersall, MAA tat@providence.edu
AMS-SIAM Special Session on Environmental Mathematics: Some Mathematical Problems on Climate Change and Geophysical Fluid Dynamics
• Sunday January 6, 2008, 8:00 a.m.-10:40 a.m.
AMS-SIAM Special Session on Environmental Mathematics: Some Mathematical Problems on Climate Change and Geophysical Fluid Dynamics, I
Samuel S. Shen, San Diego State University shen@math.sdsu.edu
Gerald R. North, Texas A&M University
• Sunday January 6, 2008, 2:15 p.m.-6:05 p.m.
AMS-SIAM Special Session on Environmental Mathematics: Some Mathematical Problems on Climate Change and Geophysical Fluid Dynamics, II
Samuel S. Shen, San Diego State University shen@math.sdsu.edu
Gerald R. North, Texas A&M University
• Wednesday January 9, 2008, 8:00 a.m.-10:50 a.m.
AMS-SIAM Special Session on Environmental Mathematics: Some Mathematical Problems on Climate Change and Geophysical Fluid Dynamics, III
Samuel S. Shen, San Diego State University shen@math.sdsu.edu
Gerald R. North, Texas A&M University
Mind over matter in sports
The materials required for this science project:
- 30 long distance runners between ages 15 to 18 years
- A running track
- A stopwatch
- 30 bottles of water sweetened with syrup
1. For this experiment, the independent variable is the pre-conditioning information given to the runners before the 2nd race, i.e. (a) that the sweetened water will improve their performance, (b)
that the water will not affect their performance and (c) that the water will worsen their performance. The dependent variable is the time taken by the athletes to complete the 3 laps. The time taken
to complete the 3 laps is measured with a stopwatch. The constants (control variables) are the age of the participants and the distance of each lap.
2. On the 1st day of the experiment, the 30 athletes are made to run 3 laps around the track. The time taken by each of the athlete to complete the race is noted. The athletes are then divided into 3
groups of 10 persons each so that the average time of the runners in each group is almost the same. The average time is calculated by totaling the time taken by all the runners in the group and
dividing by 10.
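The grouping and averaging in step 2 can be sketched as follows. The snake-order dealing is one possible way of balancing the groups (the procedure does not specify how), and the times are invented:

```python
def group_average(times_s):
    """Average time for a group: the total of all times divided by the
    number of runners in the group."""
    return sum(times_s) / len(times_s)

def balanced_groups(times_s, n_groups=3):
    """Deal runners out in snake order of finishing time so the groups
    end up with nearly equal averages."""
    ranked = sorted(times_s)
    groups = [[] for _ in range(n_groups)]
    for i, t in enumerate(ranked):
        lane = i % (2 * n_groups)                       # 0,1,2,2,1,0 pattern
        groups[min(lane, 2 * n_groups - 1 - lane)].append(t)
    return groups

times = [600 + 5 * k for k in range(30)]   # invented times, in seconds
groups = balanced_groups(times)
print([group_average(g) for g in groups])  # three nearly equal averages
```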
3. On the 2nd day of the experiment, all 30 athletes are brought to the running track again. They are separated into the 3 groups as mentioned in procedure 2. The 3 groups are pre-conditioned as
follows :
a. the 1st group of athletes is given 1 bottle of sweetened water each. They are informed that the bottle contains nutrients required to enhance their stamina and running performance.
b. the 2nd group of athletes is also given the same bottle of sweetened water each. They are informed that the water in the bottle will not affect their running performance.
c. the 3rd group of athletes is also given the same bottle of sweetened water each. They are informed that the water in the bottle will reduce their stamina and worsen their performance.
4. The 30 athletes are asked once more to run the 3 laps around the field. The time each of them took to complete the 3 laps is recorded. The new average time taken for each of the 3 groups is
calculated and is recorded in the table given below.
Student Support Forum: 'Calculating mean of multiple stochastic process...' topic
Thanks to “Rod Lm” I got some very nice input for the following problem.
My intention is to simulate, within a Manipulate, a given number of possible stock price paths and to calculate their mean. So if something is manipulated, the “new mean” is also
calculated automatically, and both are shown by a line and numerically within the chart. I would also like to know how I can calculate the mean outside the chart; I tried the
following but it didn't work:
Mean[Table[
  RandomFunction[
    GeometricBrownianMotionProcess[0.1, 0.2, 100], {0, 250, 0.05}][
   "Path"], {20}][[All, -1]]]
The next step, if possible, is to introduce a lower boundary. So e.g. if the initial stock price is 100, then the boundary could be at, let's say, 70. First, I want the boundary to be shown by a
line. Second, I would like to stop all stochastic processes that fall below this threshold and then receive the mean of all remaining processes.
Alternatively, if it's not possible to stop the processes that fall below the threshold, I also just want the mean of the remaining processes that stay above the threshold. Third, I
would also like to have a manipulable threshold. In short, I want to be able to switch the threshold between 20 and 90 and then receive the mean of the processes that remain above this
threshold.
Is something like this possible? If yes, I would be really thankful for some suggestions on how to solve my problem, or at least parts of it. So far I have done the following; is it even
possible to modify this code in such a manner that my problem can be solved?
My code so far:
Manipulate[
 ListLinePlot[
  Table[RandomFunction[
     GeometricBrownianMotionProcess[μ, σ, S_0], {0, 250, 0.05}]["Path"], {P}], Joined -> True,
AxesLabel -> {"Time", "S_t"},
PlotLabel ->
Style["Forecasted Stock Price\n (Brownian Motion)", Bold],
PlotRange -> All, ImageSize -> 500,
PlotStyle -> Directive[{Thin, Lighter@Gray}]], {{S_0,
100, "Initial Stock Value"}, 100, 500, 0.05,
Appearance -> "Labeled"}, {{μ, 0.01, "Drift μ"}, 0.01, 1,
0.05, Appearance -> "Labeled"}, {{σ, 0.01,
"Standard Deviation σ"}, 0.01, 1, 0.05,
Appearance -> "Labeled"}, {{P, 1, "Paths"}, 1, 100, 1,
Appearance -> "Labeled"}, {{seed, 77777, "New Random Case"}, 10000,
999999, 1},
Button["Set Initial Values", {S_0 = 25, μ =
0.01, σ = 0.01}, ImageSize -> 150],
ControlPlacement -> Left]
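Not an answer from the thread, but the barrier question can be sketched outside Mathematica. Below is a minimal Python reading of it, where "stopping" a process is interpreted as simply dropping any path that ever falls below the boundary (one possible interpretation); the parameter values mirror the post:

```python
import math
import random

def gbm_paths(s0, mu, sigma, dt, n_steps, n_paths, seed=0):
    """Simulate geometric Brownian motion paths with the exact log-normal
    step S -> S * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        s, path = float(s0), [float(s0)]
        for _ in range(n_steps):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            path.append(s)
        paths.append(path)
    return paths

def mean_final_above_barrier(paths, barrier):
    """Mean terminal value over the paths that never fell below the barrier;
    knocked-out paths are dropped, and NaN is returned if none survive."""
    survivors = [p[-1] for p in paths if min(p) >= barrier]
    return sum(survivors) / len(survivors) if survivors else float("nan")

paths = gbm_paths(s0=100, mu=0.1, sigma=0.2, dt=0.05, n_steps=250, n_paths=20)
print(mean_final_above_barrier(paths, barrier=70))
```

In Mathematica terms, the same filtering could be applied to the list of "Path" values before taking Mean; the code above just makes the knock-out rule explicit.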
Crofton, MD Precalculus Tutor
Find a Crofton, MD Precalculus Tutor
...Since then, I informally assisted fellow students in my college studies in engineering. I worked successfully for many years as an electrical engineer, and then began the highly rewarding job
of homeschooling my two children, who majored in biology in college, and are now continuing their gradua...
12 Subjects: including precalculus, reading, calculus, geometry
...That is, it is more about why something is a certain way rather than memorizing ideas. I have taught high school math for 44 years. The subjects I have taught are Algebra 1 and 2 for all those
years and Geometry, Trigonometry, Precalculus, and Calculus for at least 35 years.
21 Subjects: including precalculus, calculus, world history, statistics
...Let me help. I love this stuff. I have a Carnegie Mellon University Master of Science Degree.
17 Subjects: including precalculus, reading, English, chemistry
...The students who benefit the most from tutoring are those who fall outside that model. My BS degree was earned at the Florida Institute of Technology in Physics, and I have two Masters
degrees: one in Mathematics and the other in Physics. They were obtained from the University of Maryland and from Johns Hopkins University.
39 Subjects: including precalculus, English, calculus, physics
...Tutoring subjects include but are not limited to chemistry, pre-algebra, algebra, calculus, trigonometry, environmental science, biology, etc. I take great pride and joy in discovering the gaps in
understanding for my students, and I believe it is this ability that sets me apart from other tutors. P...
13 Subjects: including precalculus, chemistry, geometry, biology
Old version - Road to Star Ocean - The possibility of the super lightspeed
Road to Star Ocean
Old version
This is not the latest version.
Chapter 1: The possibility of the super lightspeed flight (The special theory of relativity and my theory on the space)
Chapter 2: G which acts on a spaceship (The general theory of relativity and a spaceship)
Chapter 3: Traveling the Galaxy
Chapter 1: The possibility of the super lightspeed flight (The special theory of relativity and my theory on the space)
Section 1
The negative-world and the imaginary number
According to the special theory of relativity, [1] holds. When v becomes greater than the velocity of light c, the quantity under the root sign in [1] becomes negative and yields an
imaginary number, which is said not to exist; this is one of the grounds on which super lightspeed flight is held to be impossible.
Is the imaginary number actually a number which cannot exist? First, consider the so-called real numbers. The real numbers are grounded in the matter of our world. Therefore, supposing that
matter is positive, all real numbers are positive, too. Next, consider negative numbers such as -1. They are nothing but numbers to which we have attached minus signs for the convenience of
our calculations. Consequently, the real numbers, both plus and minus, are all positive numbers based on matter.
Well, then, what are the negative numbers that stand against the positive real numbers? I think they are the imaginary numbers. The imaginary numbers are the numbers which become negative
when squared. In our positive-number world, all numbers become positive when squared. However, it is logical to suppose that all numbers in a negative-number world become negative when
squared. In the negative-number world, the imaginary numbers do exist and serve as the real numbers of that world. The imaginary numbers are essentially negative numbers.
Well, does the negative-number world exist? As a basis for the existence of the negative numbers, we can think of antigravitational matter, which repels ordinary matter (there is a
possibility that so-called antimatter, such as the antiproton, is antigravitational matter). Electric and magnetic forces show both attraction and repulsion. Gravitation acts between
matter and matter, and equally between antigravitational matter and antigravitational matter. Therefore I suppose repulsion acts between matter and antigravitational matter. I think the
world of this antigravitational matter is the negative-number world. In our world, antigravitational matter is not found, because the repulsion worked in the process of the forming of
space and separated it into its own world apart from our world of matter. There is no basis to deny that this negative-number world was generated in the process of the forming of space,
and the very existence of an antigravitational-matter world makes it possible to think of the whole world as symmetrical. As long as the structure of space is an open question, I think
such an assumption is not forbidden.
I consider the history of space from the relation between matter and antigravitational matter. At the beginning of space, matter and antigravitational matter were confined in a minimal
space. In this case, the repulsion acting between matter and antigravitational matter was extremely strong and, as a destabilizing factor, produced a very large force expanding space.
From this, the inflationary universe is explicable, and it becomes unnecessary to invoke the negative pressure of a supercooled vacuum as the cause of inflation.
When the expansion of space proceeded, matter and antigravitational matter were scattered into their separate spaces. And as they came to exist in a maximal space, the repulsion and the
gravitation balanced each other and acted as stabilizing factors.
Hereinafter, we call our world of matter the positive-world and call the world of antigravitational matter the negative-world.
If the imaginary numbers are the real numbers of the negative-world, what do [2] and [3] mean for v > c? [2] is a formula about matter, but it is difficult to think that matter converts into
antigravitational matter, and matter follows the nature of its own time and space. Therefore, [2] means the existence of matter in the negative-world; in other words, at super-lightspeed,
matter can shift into the negative-world. [3] means that the negative-world's time passes for matter moving at super-lightspeed in the negative-world.
Incidentally, Dr. Hawking admits imaginary time. As long as physics is the science which handles existence, this means the imaginary numbers do exist. Also, if the existence of Einstein's
cosmological term is admitted, I think it implies the existence of the repulsion of antigravitational matter in the negative-world.
Section 2
The rush into the negative-world
In the negative-world, super-lightspeed flight becomes possible. Well, by what means can we rush into the negative-world? The wall is that our mass becomes infinite as our speed approaches
the velocity of light.
However, I discovered that [1] can be integrated over speed on the interval from 0 to c. Because formula [4] holds, [5] is obtained. This integral has a finite value, which means the
infinite mass can be reached by giving the mass the finite impulse (force × time) of πmc/2. By this, we consider the wall of infinite mass to be cleared.
Now suppose the speed of a spaceship approached lightspeed and its mass became infinite through the applied impulse. You can think of the increased mass as being used to rush into the negative-world,
because mass is equivalent to energy; it is likewise possible to think that we can reach the other world by using that infinite energy. Also, by Lorentz contraction, it is possible to think that our
size becomes 0 and we pass through the wall between the two worlds.
I try to think about this from the structure of our world. The structure of our world is an open question, but it is clear from the theory of relativity and related results that space has warps. I think
our world is non-Euclidean, a world where parallel lines cross. From a macroscopic point of view Newtonian dynamics is applicable, while from a microscopic point of view quantum mechanics is applicable.
Likewise, at human scales Euclidean geometry is applicable, while at cosmic scales non-Euclidean geometry is applicable. Then, the destination where the crossed parallel lines meet
is the negative-world. You can take this to mean that spaceships rush into the negative-world if they keep advancing straight in that way.
Cf. figure 1
Then, supposing space is warped, a spaceship in deep space does not travel straight if it merely moves with uniform velocity. We need to apply force to make it go straight, and for that purpose we should
give the impulse of πmc/2.
What relation do the negative-world and the positive-world have? Because the negative-world is the world of antigravitational matter and is symmetrical to the positive-world, it can well be
thought that the negative-world is a shadow world of the positive-world. A shadow world is also posited by superstring theory.
| {"url":"http://se-engine.org/res/p5t1r1.html","timestamp":"2014-04-19T04:42:53Z","content_type":null,"content_length":"9971","record_id":"<urn:uuid:740a3c94-930a-4ff0-9a63-5c86c9ce6a0c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Study on harvested population with diffusional migration.
(English) Zbl 0976.92023
Summary: This paper is devoted to the study of the dynamical behavior and harvesting problem of an exploited population with diffusional migration, for which a protective patch is established. We
examine the effects of the protective patch and of harvesting on the population resources and conclude that the protective patch is effective for the conservation of population resources and the
ecological environment, though in some cases extinction cannot be eliminated.
The dangerous region, the parameter domains and the typical bifurcation curves of stability of steady states for the considered system are determined. An optimal harvest policy for the considered
population is also derived. Explicit expressions are obtained for the optimal harvesting effort, the maximum sustainable yield and the corresponding population density. Our results provide
theoretical evidence for the practical management of biological resources.
92D40 Ecology
34D05 Asymptotic stability of ODE
37N25 Dynamical systems in biology
34D23 Global stability of ODE | {"url":"http://zbmath.org/?q=an:0976.92023","timestamp":"2014-04-19T19:54:41Z","content_type":null,"content_length":"21344","record_id":"<urn:uuid:08e52947-5bb1-4565-a344-9ffe2a04334d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mixing Algorithms with Phase Transitions
Cool water down to 0 degrees Celsius, and it hardens into ice. Heat up a magnet and suddenly, its magnetism vanishes. Phase transitions have long been a focus for physicists, but now their
computational manifestations have attracted the attention of computer scientists. "Physicists and computer scientists have realized that they are investigating the same questions from different
angles and using complementary techniques," says Alistair Sinclair.
EECS Professor Alistair Sinclair (Photo by Peg Skorpinski)
Sinclair works on mixing problems, which include combinatorial questions such as how many times you have to shuffle a deck of cards before each ordering occurs with nearly equal likelihood.
Instead of cards, however, he has focused on the mixing problems of statistical physics, such as shuffling the orientations of atomic magnets in the Ising model of ferromagnetism.
The Ising model describes the behavior of a large array of individually magnetized atoms, each pointing up or down. Atoms like to take on the orientation of their neighbors, and at low temperatures
this interaction is so strong that the magnet as a whole tends to be ordered, with nearly all atoms aligned. At high temperatures, however, entropy takes over, and each atom picks a direction more or
less at random. At a temperature somewhere in the middle of the scale, there is a phase transition—an abrupt change from order to chaos. Mysteriously, at that same temperature, natural algorithms for
solving the Ising model appear to shift from inefficient to efficient. "Our goal is to turn this apparent connection—and others like it—into theorems," says Sinclair.
The Ising model doesn't give the exact configuration of ups and downs a given magnet will adopt; rather, it gives the odds of finding the system in a particular configuration. To "solve" the model is
to compute a function of the odds that determines the model's entropy, specific heat, and other thermodynamic properties. Computing this function is known to be hard; indeed, it has a property
analogous to NP-completeness for combinatorial counting problems.
A quest for algorithms for such problems is what led theoretical computer scientists, starting in the late 1980s, to statistical physics. Physicists had long ago developed a clever technique for
getting at some of the properties of their models by producing sample configurations according to the probabilities with which they naturally occur at equilibrium. Computer scientists went further by
figuring out how to use such sampling methods to devise approximation algorithms for the hardest counting problems of combinatorics and statistical physics. "We were interested in sampling," says
Sinclair, "not because sampling is so interesting in its own right, but because if you can sample, you can count."
To arrive at a sample, the physicists would follow a mixing procedure analogous to shuffling a deck of cards. For the Ising model, for example, the procedure starts with a given configuration and
then repeatedly picks an atom and flips its orientation with a probability that depends on the orientations of its neighbors and the temperature. After a while, this procedure (known as a "Markov
chain Monte Carlo" algorithm) arrives at a “typical” configuration and is said to be "mixed." This process, Sinclair notes, is not only a reasonable way of producing a sample but also a plausible
model for how the actual magnet evolves toward equilibrium. A key question, then, is: How do you know when you've done enough flips to produce a typical configuration?
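The flipping procedure the article describes can be sketched in a few lines. The following is a minimal single-spin-flip Metropolis update for the 2D Ising model; the lattice size, the inverse temperature beta, and the periodic boundary conditions are illustrative choices of mine, not details taken from Sinclair's work:

```python
import math
import random

def metropolis_step(spins, beta):
    """One attempted spin flip on an L x L Ising lattice with periodic boundaries."""
    L = len(spins)
    i, j = random.randrange(L), random.randrange(L)
    # Sum of the four neighboring spins (each is +1 or -1).
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    # Energy change if spins[i][j] is flipped (coupling constant J = 1).
    dE = 2 * spins[i][j] * nb
    # Accept the flip with the Metropolis probability min(1, exp(-beta * dE)).
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        spins[i][j] *= -1

# Start from the all-up configuration and "shuffle" for a while.
L = 16
spins = [[1] * L for _ in range(L)]
for _ in range(10000):
    metropolis_step(spins, beta=0.3)  # high temperature: disordered samples
```

How many such steps are needed before the configuration is "typical" is exactly the mixing-time question; as the article explains, below the phase transition the answer grows exponentially with the lattice size.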
The physicists' criteria for when to stop flipping were often ad hoc. Typically, they would continue flipping until certain observables stopped changing. "This is known to be dangerous," says
Sinclair. "You might get something that looks mixed but isn’t."
So, two decades ago, Sinclair, together with Mark Jerrum, his Ph.D. advisor at the University of Edinburgh, set out to place the physicists' sampling methods on a solid mathematical foundation.
Together, they wrote a series of seminal papers on mixing, introducing a new technique for showing whether or not a particular sampling algorithm takes polynomial time to mix. Their method is
intuitive to visualize: Suppose the sample space is shaped like a barbell, and the probability of crossing the neck is exponentially small. Then, when moving randomly around the space, it's easy to
get stuck for a long time on one side of the barbell and never get a representative sample. What Jerrum and Sinclair showed is that this is the only way to get stuck: If there is no bottleneck in the
sample space, sampling methods are guaranteed to converge quickly.
In a subsequent paper, they introduced a new analytic method for showing that a sample space has no bottlenecks and used it to devise an efficient algorithm for a classic hard counting problem:
approximating the "permanent" of an important class of matrices. (The permanent looks similar to the determinant but is computationally far harder to compute.) Finding the permanent of a matrix, as
it turns out, corresponds to solving the dimer model, a classical physical model for diatomic molecules. In 2001, Jerrum, Sinclair, and Eric Vigoda, Sinclair's former student, now at the Georgia
Institute of Technology, extended the algorithm so that it now approximates the permanent of any positive matrix. They won the 2006 Fulkerson Prize for their work.
Sinclair's recent work is on the relationship between phase transitions of physical models and mixing times of the corresponding natural flipping process. The dimer model does not have a phase
transition, but the Ising model does, which makes the behavior of the mixing process more complex. The exponentially slow mixing behavior below the phase transition is due to a bottleneck in the
sample space: At low temperatures, each atom exerts a strong influence on its neighbors. Starting with the all-down configuration, it is difficult to get to the all-up configuration because it's
improbable that significant up-pockets will form.
Thus far, researchers have been able to show that this bottleneck suddenly vanishes at the phase transition only for the case of two-dimensional Ising models. Sinclair is exploring the question of
whether there is a similar precise correspondence between mixing times and phase transitions in higher dimensions and for other statistical physics models.
He is also looking at what happens when a model is affected by an exterior environment. For instance, if the Ising model is surrounded by a fixed boundary of upward-pointing atoms, the physical phase
transition disappears. Does this mean that the computational phase transition also disappears? In recent work—with his former student Dror Weitz, now a postdoctoral researcher at Rutgers' DIMACS
Center, and mathematical physicist Fabio Martinelli of the University of Rome—Sinclair showed that for the Ising model and several other models, if the underlying graph is a tree, the answer is yes.
—Sara Robinson with Erica Karreich | {"url":"http://www.eecs.berkeley.edu/department/EECSbrochure/c5-s3.html","timestamp":"2014-04-20T14:04:27Z","content_type":null,"content_length":"11199","record_id":"<urn:uuid:7c2cf9d8-bff0-452d-aefe-37ffe8d12518>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: norm
Replies: 8 Last Post: Mar 10, 2013 1:45 PM
quasi Re: norm
Posted: Mar 9, 2013 4:39 AM
novis wrote:
>Suppose A is a p x q columnwise orthonormal matrix and suppose
>x is any vector in R^p. Then what is the relation between
>||x|| and ||Ax|| ?
A is a p x q matrix, so regarded as a function,
A maps R^q to R^p.
x is in R^q
not in R^p as you specified, and
Ax is in R^p
Also, since A is columnwise orthonormal, it follows that
p >= q.
As far as the norm comparison goes, since A is columnwise orthonormal,
|Ax| = |x|
where the norms are the usual Euclidean norms in R^p and R^q, | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2439857&messageID=8577271","timestamp":"2014-04-24T17:14:22Z","content_type":null,"content_length":"25616","record_id":"<urn:uuid:409aa375-3bcd-4a07-abef-56418a5244c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00376-ip-10-147-4-33.ec2.internal.warc.gz"} |
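quasi's conclusion |Ax| = |x| is easy to check numerically; here is a minimal sketch in plain Python (the particular 3 x 2 matrix and the test vector are my own illustrative choices):

```python
import math

# A 3 x 2 matrix (p = 3, q = 2) whose columns are orthonormal vectors in R^3.
c, s = math.cos(0.7), math.sin(0.7)
A = [[c, -s],
     [s,  c],
     [0.0, 0.0]]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

x = [3.0, 4.0]       # x lives in R^q = R^2, so |x| = 5
Ax = matvec(A, x)    # Ax lives in R^p = R^3
# Columnwise orthonormality gives |Ax| = |x|.
```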
[SciPy-dev] tanh(j*pi/2)
Robert Kern robert.kern@gmail....
Wed Jun 10 01:29:16 CDT 2009
On Wed, Jun 10, 2009 at 01:22, David Goldsmith<d_l_goldsmith@yahoo.com> wrote:
> --- On Tue, 6/9/09, Robert Kern <robert.kern@gmail.com> wrote:
>> > I should probably 'Note' this tanh's doc (and other
>> transcendentals that may exhibit this behavior).
>> <shrug> It's generic to all floating point
>> implementations of
>> transcendentals. I'd prefer not to clutter up each ufunc's
>> docs. A
>> section in the User's Guide about floating point arithmetic
>> in general
>> might be worthwhile, though.
> With a list of numpy-furnished functions to which the "problem" applies. (It's one thing to explain the phenomenon in general, and once this is done the "pedestrian" user may then be alert to the danger in source code, but still forget, or simply not realize, that it applies to library functions as well. Put another way, though one will certainly recognize that their own numerics will be subject to this, they might still expect that standard, furnished mathematical functions will have been written to be more "robust" w/ respect to their exact mathematical properties. I certainly wouldn't have _expected_ this problem w/ good old regular tan - my default assumption would be that N.tan(N.pi/2) would return N.inf or N.nan, not:
>>>> N.tan(N.pi/2)
> 16331778728383844.0) :-)
Umm, explaining the phenomenon necessarily entails showing examples of
the library functions it applies to. This particular phenomenon is
*about* how the standard, furnished mathematical functions behave,
nothing else. But I don't think a full, explicit list is necessary or
desirable. "Transcendental functions" plus a few notable examples
should cover it.
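For reference, the behavior being discussed is easy to reproduce in plain Python: math.pi/2 is only the closest double to pi/2, not pi/2 itself, so tan of it is a huge finite number rather than inf (a sketch; the exact value printed depends on the platform's math library):

```python
import math

x = math.pi / 2          # off from the true pi/2 by roughly 6.1e-17
t = math.tan(x)
print(t)                 # a huge finite number on the order of 1e16, not inf
print(math.isinf(t))     # False: pi/2 is not representable, so the pole is never hit
```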
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
More information about the Scipy-dev mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-dev/2009-June/012125.html","timestamp":"2014-04-19T09:39:14Z","content_type":null,"content_length":"4620","record_id":"<urn:uuid:672b849f-8182-4641-bf96-f3b6e23f01ba>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pure Type Systems formalized
Results 1 - 10 of 13
, 1998
"... The programming language Standard ML is an amalgam of two, largely orthogonal, languages. The Core language expresses details of algorithms and data structures. The Modules language expresses
the modular architecture of a software system. Both languages are statically typed, with their static and dy ..."
Cited by 69 (9 self)
The programming language Standard ML is an amalgam of two, largely orthogonal, languages. The Core language expresses details of algorithms and data structures. The Modules language expresses the
modular architecture of a software system. Both languages are statically typed, with their static and dynamic semantics specified by a formal definition.
, 1994
"... LEGO is a computer program for interactive typechecking in the Extended Calculus of Constructions and two of its subsystems. LEGO also supports the extension of these three systems with
inductive types. These type systems can be viewed as logics, and as meta languages for expressing logics, and LEGO ..."
Cited by 68 (10 self)
LEGO is a computer program for interactive typechecking in the Extended Calculus of Constructions and two of its subsystems. LEGO also supports the extension of these three systems with inductive
types. These type systems can be viewed as logics, and as meta languages for expressing logics, and LEGO is intended to be used for interactively constructing proofs in mathematical theories
presented in these logics. I have developed LEGO over six years, starting from an implementation of the Calculus of Constructions by G erard Huet. LEGO has been used for problems at the limits of our
abilities to do formal mathematics. In this thesis I explain some aspects of the meta-theory of LEGO's type systems leading to a machine-checked proof that typechecking is decidable for all three
type theories supported by LEGO, and to a verified algorithm for deciding their typing judgements, assuming only that they are normalizing. In order to do this, the theory of Pure Type Systems (PTS)
is extended and f...
- In The Informal Proceeding of the 1993 Workshop on Types for Proofs and Programs , 1993
"... this paper appears in Types for Proofs and Programs: International Workshop TYPES'93, Nijmegen, May 1993, Selected Papers, LNCS 806. abstraction, compute a type for its body in an extended
context; to compute a type for an application, compute types for its left and right components, and check that ..."
Cited by 24 (3 self)
this paper appears in Types for Proofs and Programs: International Workshop TYPES'93, Nijmegen, May 1993, Selected Papers, LNCS 806. abstraction, compute a type for its body in an extended context;
to compute a type for an application, compute types for its left and right components, and check that they match appropriately. Let's use the algorithm to compute a type for a = [x:ø ][x:oe]x.
FAILURE: no rule applies because x ∈ Dom (x:ø )
, 1999
"... Two of the distinguishing features of Standard ML Modules are its term dependent type syntax and the use of type generativity in its static semantics. From a type-theoretic perspective, the
former suggests that the language involves first-order dependent types, while the latter has been regarded as ..."
Cited by 17 (4 self)
Two of the distinguishing features of Standard ML Modules are its term dependent type syntax and the use of type generativity in its static semantics. From a type-theoretic perspective, the former
suggests that the language involves first-order dependent types, while the latter has been regarded as an extra-logical device that bears no direct relation to typetheoretic constructs. We
reformulate the existing semantics of Modules to reveal a purely second-order type theory. In particular, we show that generativity corresponds precisely to existential quantification over types and
that the remainder of the Modules type structure is based exclusively on the second-order notions of type parameterisation, universal type quantification and subtyping. Our account is more direct
than others and has been shown to scale naturally to both higher-order and first-class modules.
- PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON TYPED LAMBDA CALCULI AND APPLICATIONS, VOLUME 902 OF LECTURE NOTES IN COMPUTER SCIENCE , 1995
"... ..."
- in Dybjer, Nordstrom and Smith (eds), Types for Proofs and Programs: International Workshop TYPES'94, Bastad , 1995
"... This paper is about mechanical checking of formal mathematics. Given some formal system, we want to construct derivations in that system, or check the correctness of putative derivations; our
job is not to ascertain truth (that is the job of the designer of our formal system), but only proof. Howeve ..."
Cited by 6 (2 self)
This paper is about mechanical checking of formal mathematics. Given some formal system, we want to construct derivations in that system, or check the correctness of putative derivations; our job is
not to ascertain truth (that is the job of the designer of our formal system), but only proof. However, we are quite rigid about this: only a derivation in our given formal system will do; nothing
else counts as evidence! Thus it is not a collection of judgements (provability), or a consequence relation [Avr91] (derivability) we are interested in, but the derivations themselves; the formal
system used to present a logic is important. This viewpoint seems forced on us by our intention to actually do formal mathematics. There is still a question, however, revolving around whether we
insist on objects that are immediately recognisable as proofs (direct proofs), or will accept some meta-notations that only compute to proofs (indirect proofs). For example, we informally refer to
previously proved results, lemmas and theorems, without actually inserting the texts of their proofs in our argument. Such an argument could be made into a direct proof by replacing all references to
previous results by their direct proofs, so it might be accepted as a kind of indirect proof. In fact, even for very simple formal systems, such an indirect proof may compute to a very much bigger
direct proof, and if we will only accept a fully expanded direct proof (in a mechanical proof checker for example), we will not be able to do much mathematics. It is well known that this notion of
referring to previous results can be internalized in a logic as a cut rule, or Modus Ponens. In a logic containing a cut rule, proofs containing cuts are considered direct proofs, and can be directly
accepted by a proof ch...
, 1995
"... Introduction The Tait--Martin-Lof proof is the best known and simplest proof of confluence (the Church--Rosser theorem) for various lambda calculi. It is explained in detail, for example, in
[Bar84, HS86, Rev88]. The desire to clarify this proof has inspired work on concrete representation of bindi ..."
Cited by 6 (0 self)
Introduction. The Tait–Martin-Löf proof is the best known and simplest proof of confluence (the Church–Rosser theorem) for various lambda calculi. It is explained in detail, for example, in [Bar84,
HS86, Rev88]. The desire to clarify this proof has inspired work on concrete representation of binding [dB72, Coq91]. Perhaps the best modern version is given in [Tak95]. Formal proofs are reported
in [Hue94, MP93, Pfe92, Sha88]. In this note I outline the innovation given in [Tak95] (and formalized by McKinna [MP93]), and present a further improvement which I believe has not appeared in the
literature before. 1.1 Preliminary Definitions. Let Rel2 be the class of binary relations, and R, T ∈ Rel2; we write aRb for (a, b) ∈ R. For R ∈ Rel2, the transitive reflexive closure of R, wri
"... We experiment a method for representing a concurrent object calculus in the Calculus of Inductive Constructions. Terms are first defined in de Bruijn style, then names are re-introduced in
binders. The terms of the calculus are formalized in the mechanized logic by suitable subsets of the de Bruijn ..."
Cited by 3 (0 self)
We experiment with a method for representing a concurrent object calculus in the Calculus of Inductive Constructions. Terms are first defined in de Bruijn style, then names are re-introduced in binders.
The terms of the calculus are formalized in the mechanized logic by suitable subsets of the de Bruijn terms, namely those whose de Bruijn indices are kept behind the scenes. The α-equivalence
relation is Leibniz equality, and the substitution functions can be defined as sets of partial rewriting rules on these terms. We prove induction schemes for both the terms and some properties of
the calculus which internalize the renaming of bound variables. We show that, although the terms which formalize the calculus are not generated by a least fixed point relation, we can prove the
desired inversion lemmas. We formalize the computational part of the semantics and a simple type system of the calculus. At last, we prove a subject reduction theorem and see that the specifications and
proofs have the nice feature of not mixing de Bruijn technical manipulations with real proofs.
Algorithm/maths behind rendering cross sections?
What is the maths behind a program which renders 2D cross sections from a 3D model (or 3D cross sections from a 4D model)?
Or, how does a computer calculate the (n-1)D cross section obtained from an nD model?
When trying to look for the answer on Wikipedia and Google, there is a lot of material (some maths articles and some irrelevant stuff) and I don't know where to look first.
Re: Algorithm/maths behind rendering cross sections?
It depends on what form the input has, i.e., do you have a bunch of vertices, or a bunch of faces, or equations defining the input polyhedron/polytope, etc.. But the main underlying idea is the same:
intersect the object with a horizontal plane at some user-given height. How this is calculated depends on the input representation, of course. If you have a bunch of halfspace equations, it's as
simple as adding one more equation to it (the plane's equation). For a bunch of vertices, you need to apply some convex hull algorithm to find the faces and edges, and then calculate their
intersection with the plane. If the input already has edges (say it's in the form of a graph) then you can just use simple line equations to solve for the points of intersection. (But you'll still
have to solve for how those points are connected in the intersection -- probably some kind of convex hull algorithm. If the cross-section is 2D, then a simple "gift-wrapping" algorithm should do it.
If it's 3D, then a more involved convex hull algo is required. If you're brave and making 4D cross-sections of objects in 5D or beyond, then you need a full-blown, general convex hull algo to compute
the boundaries of the thing.)
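The edge-based case quasi describes -- solving simple line equations for the points where edges cross the plane -- can be sketched like this (the function name and the tetrahedron test data are my own illustrative choices):

```python
def slice_edges(vertices, edges, h):
    """Intersect each edge of a polyhedron with the plane z = h.

    vertices: list of (x, y, z) points; edges: list of (i, j) index pairs.
    Returns the 2D points where edges cross the plane. (Ordering them
    into a polygon would still take e.g. a gift-wrapping pass.)
    """
    points = []
    for i, j in edges:
        (x1, y1, z1), (x2, y2, z2) = vertices[i], vertices[j]
        if (z1 - h) * (z2 - h) < 0:           # edge strictly straddles the plane
            t = (h - z1) / (z2 - z1)          # parameter of the crossing point
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return points

# A tetrahedron sliced at z = 0.5: the three edges meeting the apex cross the plane.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
cross_section = slice_edges(verts, edges, 0.5)
```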
Re: Algorithm/maths behind rendering cross sections?
I'm also interested in this, too. I would like to find/develop a program that can do midsection cuts of toratopes. The shape can be programmed in by its parametric equation or surface hypervolume
equation. I would like the ability to move and rotate the cross section plane around, and fly through the cut array. I also have zero programming skill in this respect. But, I do have a good feel for
how the arrays should look by now. It looked like the one made by Mrrl could work, but I have no idea where to start.
Re: Algorithm/maths behind rendering cross sections?
ICN5D wrote:I'm also interested in this, too. I would like to find/develop a program that can do midsection cuts of toratopes. The shape can be programmed in by its parametric equation or surface
hypervolume equation. I would like the ability to move and rotate the cross section plane around, and fly through the cut array. I also have zero programming skill in this respect. But, I do have
a good feel for how the arrays should look by now. It looked like the one made by Mrrl could work, but I have no idea where to start.
If you can somehow derive the equation(s) for the toratope you're interested in, then the equation(s) for the cross-sections are trivially derived by setting one or more coordinates to a constant
value. For example, a sphere has equation x^2 + y^2 + z^2 = r^2. So to find its cross-section with the z=1 plane, you just substitute z=1 and obtain: x^2 + y^2 + 1^2 = r^2, which simplifies to: x^2 +
y^2 = r^2 - 1, the form of which indicates that the cross-section is a circle. Of course, this is a trivial example; interesting toratopes will have more complex equations and produce different
cross-section shapes.
P.S. Generally I'd expect toratope analysis to be primarily driven by equations, because most of the interesting toratopes have no vertices, so convex hull based approaches like I described in the
previous post don't work very well. You could use vertex-based approaches if you use polytopic approximations of the toratopes, but then you'll have to make sure the approximations used are
sufficiently fine-grained to yield correct results to the desired accuracy.
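The substitution quasi walks through for the sphere can be wrapped in a few lines (a sketch; the function name is my own):

```python
import math

def sphere_slice_radius(r, z0):
    """Radius of the circle x^2 + y^2 = r^2 - z0^2 obtained by cutting
    the sphere x^2 + y^2 + z^2 = r^2 with the plane z = z0."""
    h = r * r - z0 * z0
    if h < 0:
        return None        # the plane misses the sphere entirely
    return math.sqrt(h)

# Slicing a radius-2 sphere at z = 1 gives a circle of radius sqrt(3).
```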
Re: Algorithm/maths behind rendering cross sections?
Well, the beautiful thing about the toratope notation is that it reflects the equations! So, every one of the equations can be derived through a formulaic conversion. Also, the cut algorithm works by
removing a marker, or " I ", to make a cross section. This also happily reflects the actual method of doing so, as you describe. So, in theory, I ought to be able to set this up pretty simply. I am
interested in equation-only input, no approximation; it would be way easier. I have seen a few 9D shapes recently, and I thought it would be cool to see them in action!
Re: Algorithm/maths behind rendering cross sections?
So, this sounds pretty straightforward. Where do I find a program that can do this?
Re: Algorithm/maths behind rendering cross sections?
ICN5D wrote:So, this sounds pretty straightforward. Where do I find a program that can do this?
It's only straightforward with the simplest of equations, and the simplest of slices! Once you get into more complex (i.e., interesting) toratopes, you start getting things with high-degree
polynomials, and once you start doing oblique slices, the equations will turn hairy. Very hairy. For example, the equation for a plain ole innocent 3D torus requires a degree-4 polynomial. While
innocent enough when you're dealing with axis-aligned slices, it quickly turns very hairy with oblique slices... or, for that matter, when trying to solve for the intersection of a ray with a torus
(e.g., done in ray-tracing or when you want to calculate projections), since you have to solve a quartic polynomial. And consider yourself lucky that it's "only" degree-4, since if you go any higher
than that, then there is in general no solution to the equations in the form of +, -, *, /, and n-th root extraction. You'll need to start using heavy-duty weapons like hyperradical functions to deal
with those things.
Unless, of course, you're content with axis-aligned slicing and non-analytic (i.e. numerical) solutions, then it's just a matter of choosing some value for some of the coordinates, and substituting
it into the equations and seeing what comes out. You could use povray's isosurface feature to help with doing renderings of the resulting equations -- IIRC, it uses Newton's method to solve
high-order polynomials so that at least something will come out when you throw something like a 5th degree polynomial at it. Good luck if you're trying to do algebraic analysis on 5th degree (or
higher) polynomials, though... I don't recommend attempting that at home; your brain may catch fire.
Re: Alogorithm/maths behind rendering cross sections?
Well, it is straightforward, relatively speaking
EDIT: Okay, downloaded POVray. I have NO idea how to use this thing. It looks like it only accepts software code. Hmm, can it do plain old equations without all the other stuff? I certainly hope so!
Re: Alogorithm/maths behind rendering cross sections?
I'm quite certain you're not afraid of high-order polynomials, but the question is, should you be?
They're (relatively) tame up to degree 4, where if the polynomial has any solution at all, it can be expressed in terms of +, -, *, /, and the n-th root operation (square root, cube root, fourth
root, etc.). The trouble is, while it may seem at first glance that this simple pattern should hold for polynomials of any degree, it's a proven fact that from degree 5 onwards, only a subset of
polynomials will have a solution of this form; there will be some polynomials for which the solution cannot be expressed in terms of +, -, *, /, and the n-th root operation. This is bad news, because
it means that algebraic analysis will be hampered by the fact that there is no way to write down the value of the root, except implicitly as the solution to a degree ≥5 polynomial. Which sux if
that's the polynomial you're trying to solve in the first place! In order to get around this, you will have to resort to heavy weaponry like the hyperradical functions, which are not so nice because
they're now opaque functions that you can't manipulate as easily as simple algebraic expressions. Well, you still can, but things will no longer be so nice.
Furthermore, even if your initial polynomial is relatively tame (e.g., x^2 + y^2 = r^2, sweet and simple), as soon as you start doing oblique sections or ray-tracing, all sorts of complicated terms
will start crawling out of the woodwork, and your equation quickly grows into something like 5*x^2 + 1/2*y^2 - 14*x + 7*y - 2*x*y + r^2 + ... = 0. For this particular example, it's not so bad:
everything is quadratic, and when things get unruly, we just pull out the quadratic formula hammer and smash it open. The resulting root values may look ugly with ± and √ sprinkled everywhere, but it
works. It's a totally different story when you start dealing with degree-5 polynomials. For example, even the relatively tame-looking x^5 - x + 1 = 0 has a solution that cannot be written in terms of
+, -, *, /, and n-th root extraction. (You're welcome to try it if you don't believe me.) You'll need really big guns like the hyperradical functions (which, being big guns, are beasts to handle).
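For the curious, here is what the numerical escape hatch looks like for that very polynomial: a bare-bones Newton iteration (a sketch, not part of the original posts) homing in on the single real root of x^5 - x + 1.

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Plain Newton iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

f  = lambda x: x**5 - x + 1
df = lambda x: 5*x**4 - 1     # derivative, needed by Newton's method

root = newton(f, df, x0=-1.5)
print(root)            # the single real root, near -1.1673
print(abs(f(root)))    # residual, effectively zero
```

No radicals anywhere, which is precisely the point: the value comes out as a limit of iterations, not as a closed-form expression.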
That's only the beginning of it, though. You may think that as long as you stay with low-degree polynomials, you're safe, but that's not true if you have more than one simultaneous equation. What
starts out as a pair of innocent quadratic equations, say x^2 + y = 0, and x = z^2 - y^2, once you try solving them, you discover that suddenly you have a degree-4 polynomial on your hands. Add a few
more variables and (quadratic) equations to the mix, and you may find yourself grappling with a degree-5 (or higher) polynomial pretty quickly. Worse, it's been shown that any system of polynomials
(of arbitrary degree) can be reduced to a system of quadratics by adding more variables (of the form x = y^2), or, to put it another way, that a system of quadratic equations is potentially
equivalent to a system of degree-n polynomials where n can be arbitrarily large, like 100. And then you're back to choosing between Newton's method (numerical approximation) or hyperradical functions
(all-round nastiness when you're trying to manipulate them algebraically).
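The degree blow-up from combining quadratics can be seen directly with the pair mentioned above. Eliminating y (from x^2 + y = 0, so y = -x^2) and substituting into x = z^2 - y^2 leaves x^4 + x - z^2 = 0, a quartic in x. A quick check (my own illustration):

```python
import numpy as np

# Two innocent quadratics:  x^2 + y = 0   and   x = z^2 - y^2.
# Eliminating y (y = -x^2) leaves the quartic  x^4 + x - z^2 = 0.
z = 1.0
quartic = np.poly1d([1, 0, 0, 1, -z**2])    # x^4 + 0x^3 + 0x^2 + x - z^2
for x in quartic.roots:
    if abs(x.imag) > 1e-9:
        continue                  # skip the complex pair
    x = x.real
    y = -x**2                     # recover y from the first equation
    # both original quadratics are satisfied:
    assert abs(x**2 + y) < 1e-9
    assert abs(x - (z**2 - y**2)) < 1e-9
    print(x, y)
```

Two quadratics in, one quartic out; add more variables and equations and the degree climbs from there.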
So you could say, let's resort to numerical methods instead of dealing with these crazy hyperradicals, which is perfectly fine, except that numerical methods suffer from locality, that is, without
knowing beforehand the analytical properties of the polynomial, you may not know where to fix the starting point of the algorithm, and so it may get stuck in a local minimum/maximum and not be able
to find the root you're looking for. Newton's method suffers from this if you don't give it a "good" starting point.
All this is important because while povray does have the powerful isosurface primitive, which lets you use almost any arbitrary equation to define your shapes (which lends itself rather conveniently
to what you want, since the toratopic equations are easily derived from their symbolic representation), the way povray implements this in the back-end is via numerical methods like Newton's method.
So if you give it a particularly pathological equation, it may suffer from getting "stuck" in local minima, thus not finding the root(s) it's supposed to, which in turn causes the output to have
visual artifacts (like missing patches of surface where they're supposed to be there). You also run the risk of numerical overflow/underflow/catastrophic roundoff error, which can introduce
strange-looking geometric artifacts in your isosurface like holes or "spikes" that vanish/appear when you move the camera, because they aren't really there analytically, they're just artifacts of the
limitations of numerical methods.
But having said all that, generally numerical methods do work rather well, so unless you're dealing with particularly pathological shapes, you shouldn't run into any big problems. But you should be
aware that problems sometimes do crop up, so when it happens, just remember that you've been warned.
Re: Alogorithm/maths behind rendering cross sections?
Okay, no oblique cuts for now. I'm happy with midsection cuts. So, I guess what I'm looking for, to jump into playing around with it, is a premade fully functional template of code that sets up the
camera, lightsource, and math operator equation in it. Then, I can play around and modify it to see how it works. I'm afraid I still don't know how to use it at all, though. I can press the run
button, and I see it runs the code in the text field. One thing I've been thinking about is how to render the merge sequences. It seems like I can translate the midpoint of the shape with respect to
the cross-cut plane, but I'm not sure it can work that way.
Re: Alogorithm/maths behind rendering cross sections?
ICN5D wrote:Okay, no oblique cuts for now. I'm happy with midsection cuts. So, I guess what I'm looking for, to jump into playing around with it, is a premade fully functional template of code
that sets up the camera, lightsource, and math operator equation in it. Then, I can play around and modify it to see how it works. I'm afraid I still don't know how to use it at all, though. I
can press the run button, and I see it runs the code in the text field. One thing I've been thinking about is how to render the merge sequences. It seems like I can translate the midpoint of the
shape with respect to the cross-cut plane, but I'm not sure it can work that way.
Keep in mind that povray is a 3D rendering program, and although it has some hacks to deal with 4D and 5D vectors, those are not fully supported and only have very limited operations available. So if
you're going to be using it, you'll have to reduce your equations to 3 variables first, otherwise you'll have a hard time coaxing it to do what you want.
But I think we're getting way ahead of ourselves. First, you should learn how to use the thing and get some feel for how it works. Go through the povray tutorial, especially the "Getting Started" section. Also,
don't just read through it; actually try out the examples and fiddle with the scripts yourself so that you understand what's going on. Yes, I know most of this stuff is probably boring to you, since
it's mostly talking about setting up boring ole 3D objects, lighting effects, and stuff, but bear with it, you'll be glad later when you need to make use of some 3D tricks to make things show up
nicely in toratope renders.
Once you're finished with the "Getting Started" section and feel reasonably confident in your command of the scripting language, read the "isosurface object" and "poly object" subsections under
"Advanced Features". These are some of the most powerful of povray's features, and probably what you'll want to use to render toratopic sections.
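As a preview of where that reading leads, here is a sketch that writes out a minimal isosurface scene for the 3D torus from Python. The SDL inside the string follows POV-Ray 3.x isosurface syntax as I understand it (function, contained_by, max_gradient); treat it as a starting point rather than a tested scene.

```python
# Emit a minimal POV-Ray scene for the torus
#   (sqrt(x^2 + y^2) - R)^2 + z^2 = r^2
# as an isosurface: f(x,y,z) = 0 is the surface, so we write f directly.
SCENE = """\
camera {{ location <0, -8, 4> look_at <0, 0, 0> }}
light_source {{ <10, -10, 10> color rgb <1, 1, 1> }}
isosurface {{
  function {{ pow(sqrt(x*x + y*y) - {R}, 2) + z*z - {r}*{r} }}
  contained_by {{ box {{ <-4, -4, -2>, <4, 4, 2> }} }}
  max_gradient 10
  pigment {{ color rgb <0.9, 0.6, 0.2> }}
}}
"""

with open("torus.pov", "w") as fh:
    fh.write(SCENE.format(R=2.5, r=0.5))
# then render with something like:  povray +Itorus.pov +W640 +H480
```

The point is that once you can write the implicit equation, the scene file is mostly boilerplate around it.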
Re: Alogorithm/maths behind rendering cross sections?
P.S. Keep in mind that povray basically is using numerical methods to solve polynomial equations, so you can actually coax it to do oblique sectioning if you feel up to it. You just have to figure
out a way to express this as a polynomial equation, and then let povray do the rest of the job. The difficulties I talked about earlier mainly come up when you're trying to manually tackle these
things or you want to get algebraic solutions out of them; for the purposes of making images, you don't need that (numerical methods are good enough), so as long as you can write out the polynomial
equation, povray will take care of the rest.
Re: Alogorithm/maths behind rendering cross sections?
All right, finally found the part about the poly equations and functions. You weren't kidding about the complexity. There MUST be another way. I guess the CSG can be used to manually build the cuts,
but it would be cheating the system a little bit. Man, this will take some time and learning. I have no idea how to derive those polynomials for the shapes. I didn't see it anywhere, but can POVray
use parametric equations? Of course, they would be even more cryptic. Hmm, going to take some time.
Re: Alogorithm/maths behind rendering cross sections?
ICN5D wrote:All right, finally found the part about the poly equations and functions. You weren't kidding about the complexity. There MUST be another way. I guess the CSG can be used to manually
build the cuts, but it would be cheating the system a little bit. Man, this will take some time and learning. I have no idea how to derive those polynomials for the shapes. I didn't see it
anywhere, but can POVray use parametric equations? Of course, they would be even more cryptic. Hmm, going to take some time.
You can only use CSG if you're doing 3D toratopes. For anything higher, you'll have to, essentially, do CSG manually on your toratopic equations.
I thought you said you knew how to derive polynomials for the shapes? 'cos if not, you're up polynomial creek without a paddle... This page may help. Good luck.
Another approach might be to manually implement parametric equations using povray's scene language, by implementing an intersection algorithm that solves said parametric equations for ray-shape intersections. But I'm guessing this is going to be even harder than finding the polynomial form of the equations, because parametric equations don't lend themselves very well to this kind of intersection solving.
Or if all else fails, you might want to consider Matlab instead.
Re: Alogorithm/maths behind rendering cross sections?
I thought you said you knew how to derive polynomials for the shapes?
Nope, I said they didn't scare me. But, I probably should be
But, other than that, there was some reference to a program that could make the vector arrays. When I checked out the layout of the vector arrays, it wasn't that bad. Just a long way to do it. POVray
seems like a cool program, but it also seems kind of primitive when trying to do some real complex math. It's mainly the part with having to convert into a 3D equation. The cut algorithm is doing
this too, so I feel that there is some kind of way to directly translate it into an equation. But, maybe a more advanced program designed to handle heavier calculations is what I should find. I have
the perfect one in my head, but sadly a lack of programming knowledge. Boy would it be awesome, too. I know exactly how to represent the rest of the shape from the cut, I just need to find a way to
illustrate it. Heck, I could draw it way faster than on the computer!
So, before you mentioned something like deriving the equation for a cut, as simple as setting a variable to 0 or 1, like here:
For example, a sphere has equation x^2 + y^2 + z^2 = r^2. So to find its cross-section with the z=1 plane, you just substitute z=1 and obtain: x^2 + y^2 + 1^2 = r^2, which simplifies to: x^2 + y^
2 = r^2 - 1, the form of which indicates that the cross-section is a circle
This mimics the cut algorithm when we take a sphere (III) = (x^2 + y^2 + z^2) = r^2, and cut it into a circle (IIi) = (x^2 + y^2 + 1^2) = r^2 . Moving along the cut axis " i ", we see the circle
shrink, which would be made by increments of the value of 1 in the equation: (x^2 + y^2) = r^2 - 1 . The extra parentheses aren't really necessary, I'm throwing them in there for illustrative purposes.
And, for a torus ((II)I) = ((√(x^2+y^2) − R)^2 + z^2) = r^2 . There are two midsection axial cuts ((I)I) making two circles along a line, and ((II)) making two concentric circles.
((I)I) = ((√(x^2 + 1) − R)^2 + z^2) = r^2 , increase 1 to merge displaced circles into one
((II)) = ((√(x^2 + y^2) − R)^2 + 1) = r^2 , increase 1 to merge concentric circles into one
So, I feel that these equations for the cut arrays can be made through a direct translation into 3D, and what the merge sequence would be. But, wouldn't you want to set the cut axis value to
As in, (x^2 + y^2 + 0) = r^2 ? Then, of course keeping in mind that the value can be changed when moving out from center. I have yet to try my hand at converting the toratope notation into equations,
much less into the 9D ones I've been working with.
So far, I can put these together:
(II) = (x^2 + y^2) = r^2
(III) = (x^2 + y^2 + z^2) = r^2
((II)I) = ((√(x^2+y^2) − R)^2 + z^2) = r^2
(IIII) = (x^2 + y^2 + z^2 + w^2) = r^2
((III)I) = ((√(x^2 + y^2 + z^2) − R)^2 + w^2) = r^2
((II)II) = ((√(x^2 + y^2) − R)^2 + z^2 + w^2) = r^2
(((II)I)I) = ((√((√(x^2 + y^2) − ρ)^2 + z^2) − r)^2 + w^2) = R^2
((II)(II)) = ((√(x^2 + y^2) − a)^2 + (√(z^2 + w^2) − b)^2) = r^2
which definitely follows some very strict congruencies between both. By setting any one of the x, y, z, w axes to 0 or 1, the equation for the resulting array of shapes should come out. If a
rendering program can take these equations and make cool pictures, and then adjust the values, the complex merge sequence can be made. I'm not so sure about the oblique slices, though. They are
pretty cool, but the maths required sound heavy. It's a good challenge, and probably the next step.
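Those equations are straightforward to sanity-check by machine. Here is a sketch (my own, using the radii R=2.5 and r=0.5 as placeholder values) that writes a few of them as implicit functions f(point) = 0 and verifies known surface points:

```python
from math import sqrt

# Toratope equations from the list above, as implicit functions:
# f(...) == 0 exactly on the surface.  R is the major radius, r the minor.
def torus(x, y, z, R=2.5, r=0.5):               # ((II)I)
    return (sqrt(x*x + y*y) - R)**2 + z*z - r*r

def torisphere(x, y, z, w, R=2.5, r=0.5):       # ((III)I)
    return (sqrt(x*x + y*y + z*z) - R)**2 + w*w - r*r

def tiger(x, y, z, w, a=2.5, b=2.5, r=0.5):     # ((II)(II))
    return (sqrt(x*x + y*y) - a)**2 + (sqrt(z*z + w*w) - b)**2 - r*r

# sanity checks: these points lie exactly on each surface
assert abs(torus(3.0, 0, 0)) < 1e-12            # outer equator, x = R + r
assert abs(torisphere(0, 0, 2.5, 0.5)) < 1e-12  # top of the tube
assert abs(tiger(2.5, 0, 3.0, 0)) < 1e-12
# and the w = 0 cut of the torisphere contains a sphere of radius R + r:
assert abs(torisphere(0, 0, 3.0, 0)) < 1e-12
print("all surface points check out")
```

Setting one coordinate to a constant in these functions is exactly the cut operation being discussed: the remaining expression is the equation of the section.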
Re: Alogorithm/maths behind rendering cross sections?
ICN5D wrote:
I thought you said you knew how to derive polynomials for the shapes?
Nope, I said they didn't scare me. But, I probably should be
Perhaps they should.
But, other than that, there was some reference to a program that could make the vector arrays. When I checked out the layout of the vector arrays, it wasn't that bad. Just a long way to do it.
POVray seems like a cool program, but it also seems kind of primitive when trying to do some real complex math. It's mainly the part with having to convert into a 3D equation.
And the part about it being a 3D ray-tracing program, not Matlab...
The cut algorithm is doing this too, so I feel that there is some kind of way to directly translate it into an equation. But, maybe a more advanced program designed to handle heavier calculations
is what I should find. I have the perfect one in my head, but sadly a lack of programming knowledge. Boy would it be awesome, too. I know exactly how to represent the rest of the shape from the
cut, I just need to find a way to illustrate it. Heck, I could draw it way faster than on the computer!
Last I heard, Matlab had a built-in graphing function... but with the caveat, of course, that it has to be 3D or less, since most people don't do things like visualize 4D or higher in their free
time, so such a feature would be rather rare!
So, before you mentioned something like deriving the equation for a cut, as simple as setting a variable to 0 or 1, like here:
For example, a sphere has equation x^2 + y^2 + z^2 = r^2. So to find its cross-section with the z=1 plane, you just substitute z=1 and obtain: x^2 + y^2 + 1^2 = r^2, which simplifies to: x^2
+ y^2 = r^2 - 1, the form of which indicates that the cross-section is a circle
This mimics the cut algorithm when we take a sphere (III) = (x^2 + y^2 + z^2) = r^2, and cut it into a circle (IIi) = (x^2 + y^2 + 1^2) = r^2 . Moving along the cut axis " i ", we see the circle
shrink, which would be made by increments of the value of 1 in the equation: (x^2 + y^2) = r^2 - 1 . The extra parentheses aren't really necessary, I'm throwing them in there for illustrative purposes.
I didn't say 0 or 1, it can be any constant value. By varying the value, you get different cuts. Let's take something other than a sphere for example, since spheres are so boring. What about a
cylinder? We can describe its surface as x^2 + y^2 = r^2, with the additional constraint -1 < z < 1 for the end caps. So let's say we cut it at z = 0. What do we get? Well, we substitute z=0 into x^2
+ y^2 = r^2, but since z doesn't occur there, the result is still x^2 + y^2 = r^2, i.e., a circle of radius r. This holds for every z between -1 and 1. When z is outside this range, then the section
is empty, because our constraint -1 < z < 1 stipulates that the equation x^2 + y^2 = r^2 only holds for that particular range of z values. What about a vertical cut? Say x = 0. So we get:
0 + y^2 = r^2, or y^2 = r^2, which means y = ±r. This gives us a pair of lines. And of course, we always have to take care of the constraint that z must be between -1 and 1, so taken together, this gives us a
rectangular section. Suppose we move the cut to x = 1. Then we get 1^2 + y^2 = r^2, which rearranges to y^2 = r^2 - 1, meaning that y = ±√(r^2 - 1). Again taking the constraints on z into account,
this gives us a rectangle: but a narrower one now, since √(r^2 - 1) < r. Notice also, that if we move the cut to x = r+1, then we get (r^2 + 2r + 1) + y^2 = r^2, which rearranges to y^2 = r^2 - r^2 -
2r -1 = -2r - 1. Notice that y^2 is negative, which means y has no real solutions. This means the intersection is empty: the cutting plane falls past the cylinder and doesn't intersect it at all.
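The cylinder arithmetic above reduces to one little function. This sketch (an illustration with r=2 as a placeholder radius) computes the half-width of the vertical section x = c, covering all three cases worked out in the text:

```python
from math import sqrt

# Half-width of the vertical section x = c of the cylinder x^2 + y^2 = r^2,
# -1 < z < 1:  y = ±sqrt(r^2 - c^2), and the section is empty once |c| > r.
def section_half_width(c, r=2.0):
    if abs(c) > r:
        return None           # cutting plane misses the cylinder entirely
    return sqrt(r*r - c*c)

print(section_half_width(0.0))   # r itself: the widest rectangle
print(section_half_width(1.0))   # narrower: sqrt(r^2 - 1)
print(section_half_width(3.0))   # None: plane past the cylinder
```

Same story as the torisphere slider later in the thread: vary the constant, watch the section shrink and vanish.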
All of this is boring, of course. Let's do an oblique cut. We may specify an oblique cutting plane by choosing a normal vector N, say N = (1,2,3), and for simplicity let's say the plane passes
through the origin. This gives us the plane equation x + 2y + 3z = 0. To find the intersection, then, involves solving the following system of equations:
x^2 + y^2 = r^2
x + 2y + 3z = 0
-1 < z < 1
One way to solve this, is to rearrange the plane equation into x = -2y - 3z, and then substitute that into the first equation in order to eliminate x from it. So we get (-2y-3z)^2 + y^2 = r^2, which
expands to 4y^2 + 12yz + 9z^2 + y^2 = r^2, and rearranges to 5y^2 + 12yz + 9z^2 = r^2. This is the equation of an ellipse (since x doesn't occur in it anymore). There are some further algebraic tricks
you can use to rewrite this into canonical form Ap^2 + Bq^2 = R^2, for some A, B, and R, but we won't do that here (you could do that as a practice exercise).
So this should give you some taste of what's involved in doing oblique cuts.
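The elimination above is easy to verify numerically. A quick sketch (note the cross term: expanding (-2y - 3z)^2 gives 12yz, so the expanded form is 5y^2 + 12yz + 9z^2 = r^2):

```python
import random

# Check that substituting x = -2y - 3z into x^2 + y^2 agrees with the
# expanded quadratic form 5y^2 + 12yz + 9z^2 at every (y, z).
random.seed(0)
for _ in range(1000):
    y = random.uniform(-5, 5)
    z = random.uniform(-5, 5)
    lhs = (-2*y - 3*z)**2 + y**2
    rhs = 5*y**2 + 12*y*z + 9*z**2
    assert abs(lhs - rhs) < 1e-9
print("substitution checks out")
```

Checks like this are cheap insurance when doing oblique-cut algebra by hand, where dropped cross terms are the usual failure mode.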
Solving this sort of thing requires some pretty heavy-duty algebraic algorithms (probably some application of the calculus of variations), which require an internal algebraic representation of the
equations -- most software don't bother with this level of complexity! The only ones that do are the ones designed to do heavy-duty math -- Matlab, for example. But then Matlab only has limited
visualization capabilities (last I checked, but I could be wrong), so you still need to convert the final solution forms into some kind of 2D or 3D representation in order for it to be render-able to
the screen.
And, for a torus ((II)I) = ((√(x^2+y^2) − R)^2 + z^2) = r^2 . There are two midsection axial cuts ((I)I) making two circles along a line, and ((II)) making two concentric circles.
((I)I) = ((√(x^2 + 1) − R)^2 + z^2) = r^2 , increase 1 to merge displaced circles into one
((II)) = ((√(x^2 + y^2) − R)^2 + 1) = r^2 , increase 1 to merge concentric circles into one
So, I feel that these equations for the cut arrays can be made through a direct translation into 3D, and what the merge sequence would be. But, wouldn't you want to set the cut axis value to
zero? As in, (x^2 + y^2 + 0) = r^2 ? Then, of course keeping in mind that the value can be changed when moving out from center. I have yet to try my hand at converting the toratope notation into
equations, much less into the 9D ones I've been working with.
The variable along the cut axis can take on any value you want. The form of the resulting equations will tell you what shape the cut has, as I illustrated above. It's just that 0 is a convenient
value for getting results that have a simple-enough form for you to be able to tell what it represents. In 3D you're only juggling two or three variables. In 9D you have nine variables, and a complex
toratope may have an equation form with a degree 8 polynomial, say, so after sticking in your cutting plane's equations, you get this insane-looking 8th degree polynomial with 9 variables appearing
in all sorts of combinations, x^9, x^4y^2z, w^2x^4y^3, etc., just to name a few, in who knows how many terms. Where would one even begin to understand what sort of shape it's describing?
It's only that the axial cuts have (relatively) tame-looking forms that we have any hope of even understanding what it describes.
So far, I can put these together:
(II) = (x^2 + y^2) = r^2
(III) = (x^2 + y^2 + z^2) = r^2
((II)I) = ((√(x^2+y^2) − R)^2 + z^2) = r^2
(IIII) = (x^2 + y^2 + z^2 + w^2) = r^2
((III)I) = ((√(x^2 + y^2 + z^2) − R)^2 + w^2) = r^2
((II)II) = ((√(x^2 + y^2) − R)^2 + z^2 + w^2) = r^2
(((II)I)I) = ((√((√(x^2 + y^2) − ρ)^2 + z^2) − r)^2 + w^2) = R^2
((II)(II)) = ((√(x^2 + y^2) − a)^2 + (√(z^2 + w^2) − b)^2) = r^2
which definitely follows some very strict congruencies between both. By setting any one of the x, y, z, w axes to 0 or 1, the equation for the resulting array of shapes should come out. If a
rendering program can take these equations and make cool pictures, and then adjust the values, the complex merge sequence can be made. I'm not so sure about the oblique slices, though. They are
pretty cool, but the maths required sound heavy. It's a good challenge, and probably the next step.
Did you go through the povray tutorial? 'cos you really should. I mean it. Like, actually do each of the suggested exercises. It may sound boring, but it's worth the time to learn. Some of your
equations above are very easy to turn into povray animations, if you only knew how to make povray sing to your tune. Cool pictures take effort to produce; we don't just plug numbers into a magic box
and get nice pictures out of them.
Re: Alogorithm/maths behind rendering cross sections?
Actually, I'm going to blow your mind on that note. I did some internet snooping around, and found a cool 3D calculus rendering program. It has an implicit equation input field. It also has 4
adjustable and animated variables. So, I experimented with plugging in the torus equation, and modifying it in its cut form. The shapes I got were cassini oval deforming circles, in the form of two
hollow tubes. The circle-cuts of the tubes were the actual cross sections of a torus, and when animated, produced the real result. Then, I played with a 4D toratope, the torisphere. I modified it
into its 3D cut form by changing
((III)I) = (sqrt(x^2 + y^2 + z^2) − R)^2 + w^2 = r^2
into its 3D cut equation
(sqrt(x^2 + y^2 + A^2) - R)^2 + z^2 = r^2
where the variable " A " is the cut axis, and Z is in place of W.
The end result was a torus when a=0, but when animating " A " , I saw the cut sequence of a torisphere in motion for the first time ever! It was mind blowing at how easy it was. I can derive the 3D
equation for any cut, and apply adjustable parameters to the removed cut axes. The animation feature lets you fly through the shape, exploring other cuts and merge sequences. It can have up to 4
parameters, which allows for a full 7D shape to be animated. Higher than that will require manual adjustments, no big deal. There's a lot of potential in this little program, I can feel it. Not to
mention your little schooling on oblique cuts just got my attention. I was wondering how it was done with equations. That's very cool at how it works. I'm going to study up on that, because I know I
can make them and put it into this new program.
The 3D calc rendering program:
http://web.monroecc.edu/manila/webfiles ... Plot3D.htm
Go to :
- Graph
- Add Implicit Surface
Copy-paste into the equation field:
(sqrt(x^2 + y^2 + a^2) - 2.5)^2 + z^2 = -0.5^2
Go to:
- Parameters
- Adjust parameters
- Range tab
-- change " A " to -3 < A < 3
-In the parameters box, you have to check the box at the right of the slider for " A "
-Click Animate Parameter, and watch the cut evolution of moving up and down through a torisphere
I did take a look at the POVray tutorial, but I went searching for an easier way, as you can tell. I think this program is an awesome find, and I'm going to pursue it. I can catalog all of the
equations and parameter fields. Learning how to derive oblique equations will be the next decent challenge and expansion to my mind. Then, I can make oblique animations as well! For now, I will be
experimenting with deriving the 3D equations and parameters. It's going to be fun
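The substitution described above (swap one axis for a slider parameter) is simple enough to check by machine. Here is a sketch (my own, with R=2.5 and r=0.5 as the post's radii) confirming that at a = 0 the torisphere cut is exactly the torus, and that past |a| = R + r the section empties out:

```python
from math import sqrt

def torisphere_cut(x, y, z, a, R=2.5, r=0.5):
    """3D cut of the torisphere, with the removed axis kept as parameter a:
    (sqrt(x^2 + y^2 + a^2) - R)^2 + z^2 = r^2"""
    return (sqrt(x*x + y*y + a*a) - R)**2 + z*z - r*r

def torus(x, y, z, R=2.5, r=0.5):
    return (sqrt(x*x + y*y) - R)**2 + z*z - r*r

# at a = 0 the cut agrees with the torus everywhere we sample...
for p in [(3.0, 0, 0), (0, 2.0, 0), (2.5, 0, 0.5)]:
    assert abs(torisphere_cut(*p, a=0.0) - torus(*p)) < 1e-12
# ...and once |a| > R + r the section is empty: f > 0 even at its minimum,
# which sits at the origin.
assert torisphere_cut(0, 0, 0, a=3.1) > 0
print("cut behaves as expected")
```

Animating a is then just evaluating this function over a grid for each frame, which is essentially what the plotting program does.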
Re: Alogorithm/maths behind rendering cross sections?
Here you go. Check them out. I tested all of them, and they work!!!!!! It's the merges and tiger dances in motion rendered in real time.
Torisphere ((III)I) = (sqrt(x^2 + y^2 + z^2) - R)^2 + w^2 = r^2
• ((IIi)I) - torus
(sqrt(x^2 + y^2 + a^2) - 2.5)^2 + z^2 = -0.5^2
-3 < a < 3
• ((III)i) - concentric spheres
(sqrt(x^2 + y^2 + z^2) - 2.5)^2 + a^2 = -0.5^2
-0.7 < a < 0.7
Spheritorus ((II)II) = (sqrt(x^2 + y^2) - R)^2 + z^2 + w^2 = r^2
• ((Ii)II) - displaced spheres
(sqrt(x^2 + a^2) - 2.5)^2 + y^2 + z^2 = -0.5^2
-3 < a < 3
• ((II)Ii) - torus
(sqrt(x^2 + y^2) - 2.5)^2 + z^2 + a^2 = -0.5^2
-0.7 < a < 0.7
Ditorus (((II)I)I) = (sqrt((sqrt(x^2 + y^2) - ρ)^2 + z^2) - r)^2 + w^2 = R^2
• (((Ii)I)I) - displaced toruses
(sqrt((sqrt(x^2 + a^2) - 2.5)^2 + y^2) - 1)^2 + z^2 = -0.3^2
-4.2 < a < 4.2
• (((II)i)I) - concentric toruses
(sqrt((sqrt(x^2 + y^2) - 2.5)^2 + a^2) - 1)^2 + z^2 = -0.3^2
-1.4 < a < 1.4
• (((II)I)i) - cocircular toruses
(sqrt((sqrt(x^2 + y^2) - 2.5)^2 + z^2) - 1)^2 + a^2 = -0.3^2
Tiger ((II)(II)) = (sqrt(x^2 + y^2) - Ra)^2 + (sqrt(z^2 + w^2) - Rb)^2 = r^2
• ((II)(Ii)) - vertical stack of torii
(sqrt(x^2 + y^2) - 2.5)^2 + (sqrt(z^2 + a^2) - 2.5)^2 = -0.5^2
-2.5 < a < 2.5
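The tiger cut in that last entry can be checked the same way as the others. This sketch (mine, with both major radii 2.5 and minor radius 0.5, written with r^2 on the right-hand side rather than the plotter's literal input) confirms that at a = 0 the section is two tori stacked at z = ±Rb, and that once the slider passes Rb + r the section is gone:

```python
from math import sqrt

def tiger_cut(x, y, z, a, Ra=2.5, Rb=2.5, r=0.5):
    """((II)(Ii)): the tiger with w replaced by the slider a:
    (sqrt(x^2 + y^2) - Ra)^2 + (sqrt(z^2 + a^2) - Rb)^2 = r^2"""
    return (sqrt(x*x + y*y) - Ra)**2 + (sqrt(z*z + a*a) - Rb)**2 - r*r

# at a = 0: two tori, one centered at z = +Rb and one at z = -Rb
for z0 in (+2.5, -2.5):
    assert abs(tiger_cut(3.0, 0.0, z0, a=0.0)) < 1e-12   # tube surface point
# once |a| > Rb + r, f > 0 even at its minimum (on the circle
# sqrt(x^2+y^2) = Ra, z = 0), so the section is empty
assert tiger_cut(2.5, 0.0, 0.0, a=3.1) > 0
print("vertical stack of tori confirmed")
```

The "merge" in the animation is the moment the two stacked tori meet as |a| approaches Rb - r and beyond.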
Re: Alogorithm/maths behind rendering cross sections?
Thanks to Marek, I have a neat little shortcut way to derive oblique cuts. Using sine and cosine, any oblique angle can be described as fractions of pi. This helps with navigating the orientation of
the cut plane, since setting a parameter to 3.14 gives a full 360 degree rotation. Then of course, there's the consequence of having multiple 360 rotation directions, made by reducing the shape to
lower dimensions multiple times. Each cutting plane can be rotated separately, or simultaneously at different rates. That's what I did with a rotation animation of the (((II)I)(II)), I set both cut
axes to rotate at a 3:5 ratio, where the 3-cycle was offset by 180 degrees. Interesting emergent oblique cuts can be found using this method. Lying on these particular angles of the 3-plane, are a
host of beautiful structures, some a single object, some several, and some sharing the attributes of combined axial cuts. I'm literally having to explore these shapes with this program, and I
discover amazing things and effects, like multiple component symmetries. For once now, I'm exploring real places that aren't in some computer game ( hehe )
Using the equation for a torisphere:
(sqrt(x^2 + y^2 + z^2) - R)^2 + w^2 = r^2
cut down to:
(sqrt(x^2 + y^2 + a^2) - R)^2 + z^2 = r^2
a = slides through 4D when changed
we can then establish a special rotation equation:
(sqrt(x^2 + y^2 + (z*cos(a))^2) - R)^2 + (z*sin(a))^2 = r^2
that will allow you to flip between the torus cut or the two concentric spheres cut. Setting a=0 makes an axial 0 degree cut, a=0.392 makes a 45 degree oblique cut, and a=0.784 makes a 90 degree cut.
It's important to place the sine and cosine in different parts of the toratope, or you'll make nothing interesting. What's really cool is seeing what it looks like to rotate into an empty 3D cut.
This is a space where an infinitely sized cube could fit, and never touch the surface. Actually, there's two of these holes in duotorus tiger (((II)I)((II)I)), a shape I've been exploring a lot
lately. Made a bunch of cool pics on the tiger explained thread. I see some future potential with this program and my new mathematical tools. It's mindblowing to see a cross section of a hypertorus
in 3D.
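The sin/cos rotation trick is also easy to verify numerically. In plain radians (the post's angle values appear to follow the plotting program's own scaling), a = 0 puts the whole z term inside the square root, giving the concentric-spheres cut, while a = π/2 moves it outside, giving the torus cut. A sketch with the usual R=2.5, r=0.5:

```python
from math import sqrt, cos, sin, pi

def rotated_cut(x, y, z, a, R=2.5, r=0.5):
    """Oblique torisphere cut via the sin/cos trick:
    (sqrt(x^2 + y^2 + (z cos a)^2) - R)^2 + (z sin a)^2 = r^2"""
    return (sqrt(x*x + y*y + (z*cos(a))**2) - R)**2 + (z*sin(a))**2 - r*r

# a = 0: z sits entirely inside the sqrt -> two concentric spheres
assert abs(rotated_cut(0, 0, 3.0, a=0.0)) < 1e-12   # outer sphere, radius R + r
assert abs(rotated_cut(0, 0, 2.0, a=0.0)) < 1e-12   # inner sphere, radius R - r
# a = pi/2: z sits entirely outside -> the ordinary torus cut
assert abs(rotated_cut(3.0, 0, 0, a=pi/2)) < 1e-9
assert abs(rotated_cut(2.5, 0, 0.5, a=pi/2)) < 1e-9
print("rotation flips between the two axial cuts")
```

Intermediate values of a then sweep continuously between the two axial cuts, which is exactly the oblique-cut animation described above.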
Alexander Borisov's MATH 2370 Fall 2011 page
MATH 2370 Fall 2011
Contact Information
Instructor: Alexander Borisov
Office: 414 Thackeray Hall
Office hours: MW 12-1pm + by appointment
e-mail: borisov"at"pitt"dot"edu
Classes meet:
MWF 10:00-10:50 am in 704 Thackeray Hall.
Th 10:00-10:50 am in 704 Thackeray Hall. Recitations Instructor: Takuya Murata.
Final Examination date is set: Friday, December 9th. Format of the exam: one theorem, two problems. All topics are fair game, except 37 and 38. The problems may be harder than in the Fall 2010 final.
Homework due Monday, Nov. 21.
Midterm Examination date is set: Monday, October 24. Please see the Midterm Study Guide for more information.
A Day-by-day List of Topics covered so far may help you to review the material.
Course Description
Linear transformations of finite dimensional vector spaces are studied in a semi-abstract setting. The emphasis is on topics and techniques which can be applied to other areas, e.g. bases and
dimension, matrix representation, linear functionals, duality, canonical forms, vector space decomposition, inner products and spectral theory.
Textbook
Any linear algebra textbook that you can read, provided it is sufficiently advanced. It must include the notion of a quotient space, and the proof (not just the statement) of the Jordan Canonical Form
Theorem. I also recommend the free online book Linear Algebra Done Wrong by Sergei Treil . This book will be used for some homework assignments.
Course Objectives
1) To help the students prepare for the Linear Algebra Preliminary Examination.
2) To help the students formulate their goals to get the most from their graduate school training.
Grading Policy
There will be a midterm and a final examination, and the overall grade in the course will be primarily determined by the performance on these two tests. Additionally, there will be occasional quizzes
and graded homework that will have some effect on the overall grade. It is a student's responsibility to attend every lecture, take notes and go over the notes before the next lecture to fully
understand the material covered. If for any reason you cannot attend a lecture, please get the notes from another student in the class. You should also notify the instructor of your absence. If
needed, ask the instructor for help with the missed material. Some homework problems may be assigned, but not collected. It is your responsibility to do them and check with other students whether or
not you got them right. If in doubt, seek help from the instructor or the recitation instructor. Recitation may have separate homework.
Tentative Syllabus
Note: the topics and the order of coverage are subject to change. Some of the topics will take several classes to cover.
1) Definition of a linear vector space, subspace.
2) Linear maps between spaces; Ker and Im; Hom(V,W). Dual space, dual map.
3) Construction of the quotient space; first isomorphism and lattice isomorphism theorems.
4) Definition of the dimension (independence of basis);
5) Extension of a basis of a subspace to a basis of the space. Dimension of proper subspace.
6) Dimension of a sum of two subspaces. Complements.
7) Dimension and basis of Hom(V,W). Dual basis. Double dual theorem.
8) Annihilator and its properties.
9) Matrix of a linear map, change of basis formula;
10) Definition of the determinant; properties of the determinant.
11) Characterizations of invertible maps.
12) Invariant subspace. Map on the quotient. Matrix of an operator in a basis that extends a basis of an invariant subspace.
13) One-dimensional invariant subspaces, eigenvectors, eigenvalues. The eigenvalues are the roots of the characteristic polynomial.
14) Linear independence of eigenvectors with different eigenvalues.
15) Diagonalization. Characterizations of diagonalizable operators and matrices as having a basis of eigenvectors.
16) Minimal polynomial. Roots=eigenvalues theorem. Diagonalization criterion.
17) Projections and complements; reflections.
18) Commuting maps and simultaneous diagonalization.
19) Generalized eigenspaces. Nilpotent operators and matrices; theorem that L^(dim V)=0 for a nilpotent L on V.
20) Linearly independent subspaces. Linear independence of the generalized eigenspaces.
21) Partial Fractions Theorem.
22) Spectral decomposition (V is the direct sum of the generalized eigenspaces for L).
23) Algebraic multiplicity is the dimension of the generalized eigenspace.
24) Hamilton-Cayley Theorem.
25) The existence of the Jordan basis for a nilpotent map. Jordan Canonical Form Theorem.
26) Jordan canonical form and power series on matrices.
26) Every operator can be uniquely written as the sum of commuting semisimple and nilpotent.
27) Bilinear forms, matrix of a form.
28) Degenerate and non-degenerate forms, symmetric forms, positive-definite forms and matrices. Change of basis formula.
29) Quadratic functions are in 1-to-1 correspondence with the symmetric bilinear forms.
30) Positive-definite forms and distances; triangle inequality and Cauchy-Schwarz inequality; Euclidean vector spaces; identification of dual with the original space.
31) Orthonormal basis; Gram-Schmidt algorithm; every finite-dimensional Euclidean space is isomorphic to R^n with the dot product.
32) Adjoint of an operator from one Euclidean space to another.
33) Self-adjoint operators and (real) symmetric matrices; orthogonal matrices.
34) Orthonormal diagonalization of symmetric matrices.
35) Diagonalization of symmetric bilinear forms; definition of signature (theorem that it is well defined).
36) Sylvester's characterization of positive-definite matrices.
37) Tensor products of linear vector spaces.
38) Exterior powers of linear vector spaces.
Method for Detecting Anomalies in Multivariate Time Series Data
A method detects anomalies in time series data, wherein the time series data is multivariate, by partitioning time series training data into partitions. A representation for each partition in each
time window is determined to form a model of the time series training data, wherein the model includes representations of distributions of the time series training data. The representations obtained
from partitions of time series test data are compared to the model to obtain anomaly scores.
A method for detecting anomalies in time series data, wherein the time series data is multivariate, comprising the steps of: partitioning time series training data into partitions; determining a
representation for each partition in each time window to form a model of the time series training data, wherein the model includes representations of distributions of the time series training data;
comparing representations obtained from partitions of time series test data to the model to obtain anomaly scores, wherein the steps are performed in a processor.
The method of claim 1, wherein the time series data in the partitions have correlated dimensions.
The method of claim 1, wherein each distribution is joint over the time window of two variables from the partition.
The method of claim 1, wherein each distribution is a joint distribution over the time window at time t and time t+d of a single variable from the partition, wherein d is a delay.
The method of claim 1, wherein each distribution is a joint distribution over the time window at time t, t+d, and t+2d of a single variable from the partition, wherein d is a delay.
The method of claim 1, wherein each distribution is a joint distribution over the time window of z(t), z(t+d) and an angle between the vectors (z(t), z(t+d)) and (z(t+d), z(t+2d)) for a single
variable z(t) from the partition, where z is a variable of the partition and d is a small positive integer.
The method of claim 1, wherein the distribution is a distribution over the time window of a single variable from the partition at time t minus the same variable at time t+d.
The method of claim 1, wherein one or more of the distributions computed is a two-dimensional histogram representing a joint distribution within the time window of two variables from the partition.
The method of claim 1, wherein one or more of the distributions computed is a two-dimensional histogram representing a joint distribution within the time window of a variable from the partition at
time t and at time t+d.
The method of claim 1, wherein one or more of the distributions computed is a three-dimensional histogram representing a joint distribution within a time window of a variable from the partition at
time t, at time t+d and at time t+2d.
The method of claim 1, wherein one or more of the distributions computed is a three-dimensional histogram representing the joint distribution over the time window of z(t), z(t+d) and an angle between
the vectors (z(t), z(t+d)) and (z(t+d), z(t+2d)) for a single variable z(t) from the partition, and d is a small positive integer.
The method of claim 1, wherein one or more of the distributions computed is a one-dimensional histogram representing the distribution over the time window of a single variable from the partition at
time t minus the same variable at time t+d.
The method of claim 1, wherein the comparing is done by computing a chi-square distance between a histogram computed over a current time window and a histogram computed from training data.
FIELD OF THE INVENTION [0001]
This invention relates generally to processing time series data, and more particularly to determine anomalies during operation of equipment from time series data acquired by sensors.
BACKGROUND OF THE INVENTION [0002]
Equipment monitoring can avoid costly repairs. This can be done by analyzing time series data acquired by sensors. One method treats each multivariate data point at time t independently. That method
does not use sliding windows over time. Because that method does not analyze data in time windows, the method cannot detect "collective anomalies," which are anomalies in the dynamics of a variable,
i.e., changes over time. That method does not compute any feature vectors, or representation of the data. The method simply compares raw time series test data with raw training data.
Another method assumes multivariate time series can be modeled locally as a vector autoregressive (AR) model. This is a fairly restrictive assumption. That method first learns a distribution of AR
model parameters for each time window of the training data. During testing, for each time window, the AR model parameters are estimated and the probability of these parameters is computed from the
previously learned probability distribution. The distribution learned by that method uses a restrictive autoregressive assumption.
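As a concrete illustration of why the autoregressive assumption is restrictive, here is a minimal sketch (not taken from either patent) of the kind of local model such an approach fits per window: a least-squares AR(1) coefficient. A single coefficient summarizes a window well only when the signal really is near-linear in its own past; the example signals below are purely illustrative.

```python
import math
import random

def ar1_coefficient(x):
    """Least-squares fit of the local model x[t] ~ phi * x[t-1]:
    phi = sum(x[t] * x[t-1]) / sum(x[t-1]^2). This is the restrictive
    per-window model that an AR-based method assumes."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(v * v for v in x[:-1])
    return num / den

random.seed(1)
smooth = [math.cos(0.05 * t) for t in range(200)]      # slowly varying signal
noise = [random.gauss(0.0, 1.0) for _ in range(200)]   # white noise

print(round(ar1_coefficient(smooth), 2))  # close to 1: well described by AR(1)
print(round(ar1_coefficient(noise), 2))   # close to 0: no linear self-dependence
```

Signals that are not well captured by such a linear recurrence (e.g., the white noise above, or strongly nonlinear dynamics) make the fitted parameters a poor window summary.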
SUMMARY OF THE INVENTION [0004]
A method detects anomalies in time series data, wherein the time series data is multivariate, by partitioning time series training data into partitions.
A representation for each partition in each time window is determined to form a model of the time series training data, wherein the model includes representations of distributions of the time series
training data.
The representations obtained from partitions of time series test data are compared to the model to obtain anomaly scores.
BRIEF DESCRIPTION OF THE DRAWINGS [0007]
FIG. 1 is a flow diagram of a method for detecting anomalies in time series data according to embodiments of the invention;
FIG. 2 is a flow diagram of a training phase of the method in FIG. 1; and
FIG. 3 is a flow diagram of a testing phase of the method of FIG. 1.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0010]
The embodiments of our invention provide a method for detecting anomalies in multivariate time series data. Specifically, time series data acquired by sensors of equipment. Multivariate time series
arise in many different applications. We are particularly interested in monitoring equipment condition.
Equipment condition monitoring is the process of analyzing signals from various sensors attached to equipment, such as pumps, condensers, fans, etc., to determine if the equipment is operating
normally, or not. The sensors, such as vibration sensors, pressure sensors, temperature sensors, etc., output a series of sensor data, which is called a time series. When data from multiple sensors
are available, the time series is a multivariate time series. Each dimension of the multivariate time series contains data from one sensor.
The goal of equipment monitoring is to analyze the time series data from equipment sensors to automatically detect if the equipment has failed, or, perhaps, will soon fail.
The basic strategy for accomplishing this is to first construct a model of the time series data during normal operation. Then, detecting equipment failures, or impending failures, is performed by
detecting anomalies in the time series test data, which are differences from the time series training data acquired during normal operation.
Method for Detecting Anomalies in Multivariate Time Series
As shown in FIG. 1, our method 100 of detecting anomalies 103 in multivariate time series data has two main phases: model 111 construction 200; and testing 300. For model construction, we use
multivariate time series training data 101 acquired during normal operation. The model characterizes important aspects of the operation.
During testing 300, we acquire, e.g., real-time multivariate time series test data 102, and use the model to determine if the test data are anomalous by determining how well time windows in the test
data are described by the model learned during the construction phase.
The steps of the method can be performed in a processor connected to memory and input/output interfaces as known in the art.
Model Construction Phase
As shown in FIG. 2, during model construction 200, pairs of variables from the multivariate time series training data are analyzed to determine which pairs, if any, are correlated. A variable
represents a single dimension of the multivariate time series (i.e., the readings from a single sensor). The mutual information between variables i and j is determined as a measure of correlation
between the variables.
Because there can be a time delay between two variables, an optimal time shift is sought so that the mutual information between variable i and the shifted version of variable j is maximized.
A similarity matrix 211 is formed 210, where entry i,j in the matrix stores the mutual information between variable i and a possible shifted version of variable j.
A reverse Cuthill-McKee procedure is then used to form 220 a block diagonal matrix 221 from the similarity matrix. The reverse Cuthill-McKee procedure permutes a sparse matrix that has a symmetric
sparsity pattern into a band matrix form with a small bandwidth.
The blocks of the matrix are segmented 230 and the set of variables in each block define a partition 231. Every variable is in exactly one partition. A partition represents a set of variables
(possibly of size 1) that have high mutual information.
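The partitioning step hinges on estimating mutual information between variables. A minimal, self-contained sketch is a plug-in estimate from a quantized joint histogram; the binning scheme and bin count below are illustrative choices, not specified in the patent.

```python
import math
import random
from collections import Counter

def mutual_information(x, y, bins=8):
    """Plug-in estimate of I(X;Y) in nats: quantize each sequence into
    `bins` equal-width bins and compare the empirical joint histogram
    with the product of its marginals."""
    def quantize(z):
        lo, hi = min(z), max(z)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in z]
    qx, qy, n = quantize(x), quantize(y), len(x)
    pxy = Counter(zip(qx, qy))
    px, py = Counter(qx), Counter(qy)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

random.seed(0)
z1 = [math.sin(0.0628 * t) + 0.05 * random.gauss(0, 1) for t in range(1000)]
z2 = [math.sin(0.0628 * t + 0.3) + 0.05 * random.gauss(0, 1) for t in range(1000)]  # shifted copy of z1's signal
z3 = [random.gauss(0, 1) for _ in range(1000)]                                      # unrelated sensor

print(mutual_information(z1, z2) > mutual_information(z1, z3))  # correlated pair scores higher
```

Variables whose pairwise mutual information is high would end up in the same block of the similarity matrix, and hence in the same partition.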
After partitioning the variables of the multivariate time series data into correlated sets, each partition is treated as a separate and independent multivariate time series.
Each multivariate time series forming a partition is processed 240 using a sliding time window 239 of a predetermined fixed length to construct the model 111. The time series data in the partitions
have correlated dimensions. For each time window, various representations of the time series in the window can be computed.
These representations are all statistical distributions computed over one or more variables in the partition. There are five types of distributions that we can use. The distributions can be viewed as
feature vectors of the time series data.
(1) The first type of representation is the joint distribution of the time series values of variable z_i and variable z_j, where both z_i and z_j are members of the partition. This joint distribution can be represented as a two-dimensional histogram. This representation is a "joint distribution of variables z_i and z_j."
(2) The second type of representation is the joint distribution of z(t) and z(t+d) where z is a variable of the partition and d is a small positive integer. This joint distribution can be represented
as a two-dimensional histogram. This representation is a "2D phase space distribution of variable z."
(3) The third type of representation is the joint distribution of z(t), z(t+d), and z(t+2d) where z is a variable of the partition and d is a small positive integer. This joint distribution can be
represented as a three-dimensional histogram. This representation is a "3D phase space distribution of variable z."
(4) The fourth type of representation is the joint distribution of z(t), z(t+d) and the angles formed between (z(t), z(t+d)) and (z(t+d), z(t+2d)) where z is a variable of the partition and d is a
small positive integer. This joint distribution can be represented as a three-dimensional histogram. This representation is a "phase space angle distribution of variable z."
(5) The fifth type of representation is the distribution of differences between a variable z at time t and the same variable z at time t+d. This distribution can be represented as a one dimensional
histogram. This representation is a "difference distribution of variable z."
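As a sketch of the second representation type, a window can be turned into a normalized 2D phase-space histogram of (z(t), z(t+d)) using only the standard library; the bin count and the delay d below are illustrative choices.

```python
import math
from collections import Counter

def phase_space_hist2d(z, d=1, bins=8):
    """Type-(2) representation: empirical joint histogram of (z(t), z(t+d))
    over one time window, normalized to sum to 1. Equal-width bins span
    the window's own value range."""
    lo, hi = min(z), max(z)
    w = (hi - lo) / bins or 1.0
    q = [min(int((v - lo) / w), bins - 1) for v in z]
    pairs = list(zip(q, q[d:]))          # (z(t), z(t+d)) cell pairs
    counts = Counter(pairs)
    n = len(pairs)
    return {cell: c / n for cell, c in counts.items()}

window = [math.sin(0.1 * t) for t in range(200)]   # one sliding window's worth of data
h = phase_space_hist2d(window, d=3)
print(abs(sum(h.values()) - 1.0) < 1e-9)           # a valid probability histogram
```

The 3D phase-space and difference-distribution representations are built the same way, with triples (z(t), z(t+d), z(t+2d)) or differences z(t+d) - z(t) replacing the pairs.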
During the model construction 200, the set of distributions of the types described above are computed for each time window. Because time windows substantially overlap, and because the training time
series is very similar at different times, many of the representations computed for different time windows are similar. The overlap can be achieved by stepping the time forward for each data sample
received. For this reason, a merging process is used to merge the similar sets of representations.
The final result is a compact set of representations, the model 111, that characterizes the important variability that is present in the time series training data.
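The merging process is not spelled out in detail in the text, so the following is only one plausible sketch: greedily keep a window's histogram as a new representative unless an existing representative is already close, in which case average the two. The chi-square closeness test and the threshold value are illustrative choices.

```python
def chi2(h1, h2, eps=1e-12):
    """Chi-square distance over the union of occupied bins (sparse dicts)."""
    bins = set(h1) | set(h2)
    return sum((h1.get(i, 0.0) - h2.get(i, 0.0)) ** 2 / (h2.get(i, 0.0) + eps)
               for i in bins)

def merge_models(histograms, threshold=0.5):
    """Greedy merging sketch: a window's histogram is absorbed into the
    nearest kept representative when the distance is below `threshold`;
    otherwise it becomes a new representative. Both the threshold and
    the averaging rule are hypothetical, not taken from the patent."""
    model = []
    for h in histograms:
        dists = [(chi2(h, m), k) for k, m in enumerate(model)]
        if dists and min(dists)[0] < threshold:
            _, k = min(dists)
            m = model[k]
            model[k] = {b: 0.5 * (h.get(b, 0.0) + m.get(b, 0.0))
                        for b in set(h) | set(m)}
        else:
            model.append(dict(h))
    return model

windows = [{0: 0.5, 1: 0.5}, {0: 0.52, 1: 0.48}, {0: 0.1, 1: 0.9}]
print(len(merge_models(windows)))  # the two similar windows merge: prints 2
```

The surviving representatives form the compact model against which test windows are later compared.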
For an accurate representation of a time window of multivariate time series data, the representation should ignore aspects of the data that are unimportant in comparing similar windows, but retain
aspects of the data that are important in distinguishing significantly different windows.
A collection of statistical distributions over time windows has this property for the types of multivariate time series encountered in practical equipment monitoring applications.
Testing Phase
As shown in FIG. 3 for the testing phase 300, after constructing the model as a set of representations in the form of histograms, the multivariate time series test data can be compared to the model
to detect the anomalies.
To do this, the same partitions 311 of the dimensions of the multivariate time series that were used for the training data are used 310 again for testing for the anomalies 103. Each partition is again
treated independently. For each partition, the same sized sliding time window is passed over the test data.
The same set of representations used in the training phase are computed on each time window. The set of representations, typically stored as histograms, are compared 320 against the representations
learned during the model construction phase 200. For the comparison, we can use, e.g., a chi-square distance.
The chi-square distance between two histograms is
chi_sq(H_1, H_2) = sum_i (H_1(i) - H_2(i))^2 / H_2(i),    (1)
where H_1 and H_2 are the histograms being compared, and i is an index over all of the bins in the histograms.
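Equation (1) is straightforward to implement over sparse histograms; the epsilon guard for bins that are empty in the reference histogram is an implementation choice, not part of the equation.

```python
def chi_square(h1, h2, eps=1e-12):
    """chi_sq(H1, H2) = sum_i (H1(i) - H2(i))^2 / H2(i), taken over the
    union of occupied bins; `eps` guards bins empty in the reference H2."""
    bins = set(h1) | set(h2)
    return sum((h1.get(i, 0.0) - h2.get(i, 0.0)) ** 2 / (h2.get(i, 0.0) + eps)
               for i in bins)

train = {0: 0.5, 1: 0.3, 2: 0.2}   # histogram learned from training windows
same  = {0: 0.5, 1: 0.3, 2: 0.2}   # a test window that matches training
drift = {0: 0.1, 1: 0.2, 2: 0.7}   # a test window whose distribution shifted

print(chi_square(same, train))                              # 0.0: no anomaly
print(chi_square(drift, train) > chi_square(same, train))   # drifted window scores higher
```

The resulting distance serves directly as the anomaly score for the window under that representation.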
Various other methods of comparing distributions can also be used in place of chi-square, such as Student's t test, the Kolmogorov-Smirnov test, histogram intersection, KL divergence and Renyi divergence.
Each representation in the set yields an anomaly score for the time window. The set of anomaly scores can be combined to get a single anomaly score, or the scores can be kept separate to give more
information about what variables triggered an anomaly detection. The anomaly scores can be thresholded 330 to detect abnormal operation 309 of equipment from which the time series data were acquired,
e.g., an impending failure.
EFFECT OF THE INVENTION [0043]
The invention provides a method for detecting anomalies in multivariate time series data acquired by sensors of equipment to monitor equipment condition.
The goal is to monitor the equipment to automatically detect if the equipment has failed, or, perhaps, will soon fail.
Specifically, a method detects anomalies in time series data, wherein the time series data is multivariate, by partitioning time series training data into partitions.
A representation for each partition in each time window is determined to form a model of the time series training data, wherein the model includes representations of distributions of the time series
training data. The representations obtained from partitions of time series test data are compared to the model to obtain anomaly scores.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope
of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
Patent applications by Daniel Nikolaev Nikovski, Brookline, MA US
Patent applications by Michael Jeffrey Jones, Belmont, MA US
Laurent Series Help
Find Laurent expansions for:
valid in the annuli:
(a) 0 ≤ |z| < 1,
(b) 1 < |z| < 3,
(c) 3 < |z|.
I've found a Laurent expansion but I'm not sure what to do about the different annulus ranges.
which forms the geometric series
which I think simplifies to
but I don't know how to address the problem of the annulus ranges given for (a), (b) and (c). I think mine is valid for the range in (a), but I don't know what to do about the others.
[FOM] BG and the semantics of set theory
Ralf Schindler rds at logic.univie.ac.at
Tue Oct 15 06:16:34 EDT 2002
On Mon, 14 Oct 2002, Jeffrey Ketland wrote:
> And, since T(v) is itself \Sigma^1_1, do we have:
> Corollary: BG + \Sigma^1_1 induction proves Global Reflection Principle for
> ZF?
> I.e., BG + \Sigma^1_1 induction proves the statement "forall x \in
> LST(Bew_ZF(x) -> T(x))"?
Yes. Work in BG + \Sigma^1_1 ind. First prove that every axiom of ZF
is true. To do this for the separation and replacement schema one uses
induction on the complexity of the relevant formula. (To make this work,
use an appropriate formulation of replacement.) Then prove that every
proof in ZF only yields truths. This is done by an induction on the length
of proofs. Both steps use the Tarski rules.
Thank you for pointing out that this should hold true! --Ralf
Ralf Schindler Phone: +43-1-4277-50511
Institut fuer Formale Logik Fax: +43-1-4277-50599
Universitaet Wien E-mail: rds at logic.univie.ac.at
1090 Wien, Austria URL: http://www.logic.univie.ac.at/~rds/
exponential object
Definition from Wiktionary, the free dictionary
exponential object (plural exponential objects)
1. (category theory) A categorical object which generalizes its interpretation in category Set; namely, as a function set^[1]. An exponential may be introduced through the "currying" inference rule^
[2] ${f : C \times A \rightarrow B \over \lambda_f : C \rightarrow B^A}$ and eliminated through the "function application" ("eval") rule^[2] $\epsilon_{{}_{A, B}} : B^A \times A \rightarrow B$.
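Interpreting the two rules in the category Set, with plain Python functions standing in for morphisms (an informal illustration, not part of the definition):

```python
def curry(f):
    """Currying: send f : C x A -> B to lambda_f : C -> B^A."""
    return lambda c: lambda a: f(c, a)

def ev(pair):
    """Evaluation ("eval"): eps : B^A x A -> B applies a function to an argument."""
    g, a = pair
    return g(a)

def f(c, a):            # an arbitrary sample morphism C x A -> B
    return c + 2 * a

lam = curry(f)
print(ev((lam(3), 4)) == f(3, 4))   # eval after currying recovers f: prints True
```

The equation checked in the last line is exactly the compatibility between the introduction and elimination rules: evaluating the curried map on an argument gives back the original two-argument map.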
1. ncatlab.org's article on function set (Revision 237)
2. Jeltsch, Wolfgang (2012). An Introduction to Category Theory and Categorical Logic, slide 20.
On 2D Inverse Problems/Special matrices
An important object containing information about a weighted graph G(V,E,w) is its Laplacian matrix. Given a numbering of the vertices of the graph, it's an n by n square matrix L[G], where n is the
number of vertices in the graph, with the entries:
$l_{kl}:= \begin{cases} \sum_{v_k \rightarrow v_m} w_{km} \mbox{ if}\ k = l, \\ -w_{kl} \mbox{ if}\ v_k \rightarrow v_l, \\ 0, \mbox{ otherwise,} \end{cases}$
where v[k] → v[l] means that there is a directed edge from vertex v[k] to the vertex v[l], and where w is the weight function.
Exercise (*). Given a directed graph G without cycles, prove that one can number its vertices so that the corresponding Laplacian matrix L[G] is triangular.
Given a weighted graph with boundary it is often convenient to number its boundary vertices first and to write its Laplacian matrix in the block form.
$L_G = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$
Exercise (*). Prove that a function/vector u is harmonic at the interior nodes of a graph G if
$u|_{int G} = -D^{-1}Cu|_{\partial G}.$
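The formula above gives the harmonic extension directly. A minimal sketch on a unit-weight path graph 0 - 1 - 2 - 3 with boundary {0, 3} and interior {1, 2}; the tiny Gaussian-elimination solver stands in for applying D^{-1}.

```python
def solve(A, b):
    """Tiny dense linear solver (Gauss-Jordan with partial pivoting)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Laplacian blocks for the path, with boundary nodes numbered first:
# D = interior-interior block, C = interior-boundary block.
D = [[2.0, -1.0], [-1.0, 2.0]]
C = [[-1.0, 0.0], [0.0, -1.0]]
u_bd = [0.0, 3.0]                                   # boundary values at nodes 0 and 3
rhs = [-sum(C[i][j] * u_bd[j] for j in range(2)) for i in range(2)]
u_int = solve(D, rhs)                               # u|int = -D^{-1} C u|bd
print(u_int)                                        # ~[1.0, 2.0]: linear interpolation
```

On a path graph the harmonic extension is linear interpolation between the boundary values, which the computation recovers.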
$M=\begin{pmatrix} A & B \\ C & D \end{pmatrix}$
be a block matrix with an invertible square block D.
Then the Schur complement of the matrix M w/respect of the block D is the matrix
$M/D = A-BD^{-1}C.$
The Leibniz definition of determinant (a multilinear function in the rows and columns) of a square n by n matrix M is:
$\det M = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{k=1}^n M_{k,\sigma_k}.$
Exercise (*). Prove the following determinant identity for a square matrix M:
$\det M = \det(M/D) \det D.$
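The determinant identity in this exercise can be checked exactly on a small example with rational arithmetic; the particular 4 by 4 matrix below is arbitrary (any M with invertible block D works).

```python
from fractions import Fraction as F

def det(M):
    """Determinant by Laplace expansion along the first row (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M = [[F(4), F(1), F(2), F(0)],
     [F(1), F(3), F(0), F(1)],
     [F(2), F(0), F(3), F(1)],
     [F(0), F(1), F(1), F(2)]]
A = [row[:2] for row in M[:2]]
B = [row[2:] for row in M[:2]]
C = [row[:2] for row in M[2:]]
D = [row[2:] for row in M[2:]]

dD = D[0][0] * D[1][1] - D[0][1] * D[1][0]                 # det D (2x2 block)
Dinv = [[D[1][1] / dD, -D[0][1] / dD],
        [-D[1][0] / dD, D[0][0] / dD]]
BDC = mul(mul(B, Dinv), C)
schur = [[A[i][j] - BDC[i][j] for j in range(2)] for i in range(2)]   # M/D
dS = schur[0][0] * schur[1][1] - schur[0][1] * schur[1][0]

print(det(M) == dS * dD)   # True: det M = det(M/D) det D, exactly
```

Fractions keep the arithmetic exact, so the equality test verifies the identity without any floating-point tolerance.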
The following matrix W(G), consisting of random walk exit probabilities (sums over weighted paths in a graph), plays an important role as a boundary measurement for inverse problems. Suppose a
weighted graph G has N boundary nodes; then the kl'th entry of the N by N matrix equals the probability that the first boundary vertex occupied by a particle starting its random walk at the boundary
vertex v[k] is the boundary vertex v[l]. For a finite connected graph the rows of the matrix W(G) add up to 1.
Exercise (**). Derive an explicit formula for the matrix W(G) in terms of the blocks of Laplace matrix L(G) of the graph G.
$W_G = I - D_A^{-1}(A - BD^{-1}C).$
Exercise (***). Prove the following expansion formulas for entries and blocks of the matrix W(G),
• for two boundary vertices p[k] and p[l] of a graph G
$w_{kl} = \sum_{v_k\xrightarrow[]{path} v_l}\prod_{e=(p,q)\in path}w(e)/l_{pp},$
• for two distinct boundary vertices v[k] and v[l] of a graph G
$w_{kl}\det D = \frac{1}{l_{kk}}\sum_{v_k\xrightarrow{path}v_l}\prod_{e\in path}l(e)\det D(\tilde{path},\tilde{path}),$
$\tilde{path} = V-(\partial G\cup path).$
• for two disjoint subsets of boundary vertices P and Q of size n of a graph G, see [6],[7] and [14]
$\det W(P,Q) \det D = \pm\frac{\sum_{\sigma\in S_{n}}(-1)^{\sgn(\sigma)}\sum_{p_k\xrightarrow{paths}q_{\sigma_k}}\prod_{e\in paths}l(e)\det D(\tilde{paths},\tilde{paths})}{\prod_{p\in P}l_{pp}},$
$\tilde{paths} = V-(\partial G\cup paths).$
The exercises above provide a bridge b/w connectivity property of graph G and ranks of submatrices of its Laplacian matrix L(G) and the matrix of hitting probabilities W(G).
Exercise (*). Let G be a planar graph w/natural boundary, numbered circularly. Let P and Q be two non-interlacing subsets of boundary nodes of size n. Prove that
$(-1)^{\frac{n(n+1)}{2}}\det\Lambda_G(P,Q) \ge 0,$
w/the strict inequality iff there is a disjoint set of paths from P to Q.
Exercise (*). Show that the numbers of paths in the following graph are equal to the binomial coefficients.
Gluing the graphs w/out loops corresponds to multiplication of the weighted paths matrices.
Exercise (**). Use the result of the previous exercise to prove the following Pascal triangle identity, see [13],
$\begin{pmatrix} 1 & 1 & 1 & 1 & \ldots \\ 1 & 2 & 3 & 4 & \ldots \\ 1 & 3 & 6 & \ldots & \ddots \\ 1 & 4 & 10 & \ldots & \ddots \\ 1 & \ldots & \ldots & \ddots & \ddots \\ \end{pmatrix} = \begin
{pmatrix} 1 & 0 & 0 & 0 & \ldots \\ 1 & 1 & 0 & 0 & \ldots \\ 1 & 2 & 1 & 0 & \ddots \\ 1 & 3 & 3 & \ldots & \ddots \\ 1 & \ldots & \ldots & \ddots & \ddots \\ \end{pmatrix} \begin{pmatrix} 1 & 1
& 1 & 1 & \ldots \\ 0 & 1 & 2 & 3 & \ldots \\ 0 & 0 & 1 & 3 & \ddots \\ 0 & 0 & 0 & \ldots & \ddots \\ 0 & \ldots & \ldots & \ddots & \ddots \\ \end{pmatrix}.$
Exercise (***). Give a proof of a Menger's theorem based on the results of the exercise above: Let G be a finite graph and p and q two vertices that are not neighbors. Then the size of the minimum
vertex cut for p and q (the minimum number of vertices whose removal disconnects p and q) is equal to the maximum number of pairwise vertex-independent paths from p to q.
Last modified on 23 February 2013, at 00:21
Math Help
Hi! I've tried to solve this question, but I came to the conclusion that I couldn't so I gave up! I hope you can help me with this one
The equation x^2 - 2kx + 1 = 0 has two distinct real roots. Find the set of all possible values of k.
I would appreciate your help
How Many Feet Tall Can a Maple Tree Grow? | Home Guides | SF ...
These include vine maple (Acer circinatum), a 15- to 20-foot tree that grows in U.S. . Many of the maple species that are native to the United States are .
How many square feet makes a acer
So they can be different. How many square feet are in a 5 acer? 1 acre = 43,560 square feet 5 acres = 5 x 43,560 = 217,800 square feet. How many square feet .
Acres to Square Feet - How many sq feet in an acre ?
Acre to square foot (sq ft) conversion table. How many square feet in an acre ?
How many square feet equal an acer
There are 43,560 square feet in an acre, assuming there are 3 feet in a yard. Also assuming the question is about an acre (a measure of area) and not acre (a .
Silver Maple Plant Fact Sheet - USDA Plants Database - US ...
Acer saccharinum L., silver maple is one of the fastest growing . It can grow 3-7 feet per year. Silver maple shares many of its sites with red maple, but the two .
How many foot of fencing will fence in 1 acer? - Yahoo! Answers
Depends on the shape of the lot; if it is square then 835 feet will be enough. If it is a long and skinny lot then 4000 feet may NOT be enough.
Oh, right…
Number of Steps (nsteps)
Next: write_switch (lwrite) Up: Hidden Parameters Previous: Hidden Parameters Contents
This parameter controls the maximum number of spatial zones used in a calculation, only in the case where the Courant condition step is larger than the size of the slab. That is, the step size is
calculated as:
where emult is defined below, and
Tim Kallman 2014-04-04
Math A - Semester 3
The following is a tentative list of topics which will be covered in M$3. The order in which they will be covered may change and some related topics may be added while others may be removed.
Operations with polynomials
1. Adding monomials and polynomials
2. Subtracting monomials and polynomials
3. Multiplying monomials
4. Dividing monomials
5. Meaning of a negative exponent and a zero exponent
6. Writing numbers in scientific notation
7. Multiplying polynomials (Distributive Property.)
8. Dividing polynomials
1. Graphing inequalities
2. Solving systems of linear inequalities
3. Solving inequalities in one variable.
4. Solving verbal problems through inequalities.
Quadratic Functions and Factoring
1. Graphs of non-linear equations
2. Quadratic equations in two variables; Parabolas; x-intercepts
3. Factoring and its relationship to area
4. Factoring quadratic trinomials
5. Factoring the difference of two squares
6. Complete factorization of a polynomial: common factors
7. Solving quadratic equations through factoring
8. Solving verbal problems with quadratic equations
9. Solving number and consecutive integer problems using quadratic equations
10. Solving area problems using quadratic equations
11. Solving fractional equations with integer and monomial denominators.
Right Triangles and Trigonometric Functions
1. Properties of a right triangle
2. Relations of sides and angles in a 30-60-90 Right Triangle
3. Relations of sides and angles in a 45-45-90 Right Triangle
4. The sine ratio and its applications
5. The relationship of the sine and cosine ratios
6. Applications with the sine, cosine and tangent ratios
7. Using trigonometric ratios to solve problems involving the angle of elevation and depression
Irrational numbers and Pythagoras
1. Irrational numbers
2. The use of the radical sign
3. Simplifying radicals
4. Developing the Pythagorean Theorem
5. Applications of the Pythagorean Theorem | {"url":"http://schools.nyc.gov/SchoolPortals/03/M541/Academics/Mathematics/Math+A+-+Semester+3.htm","timestamp":"2014-04-18T08:26:31Z","content_type":null,"content_length":"23673","record_id":"<urn:uuid:e9b13667-2341-4183-be53-ec4662a68e9f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00185-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Parallel Solution of Systems of Linear Equations using Iterative Methods on Transputer Networks, 1– 13, Transputing for
Results 1 - 10 of 11
, 1992
Cited by 12 (3 self)
. We present an implementation of a finite-difference approximation for the solution of partial differential equations on transputer networks. The grid structure associated with the finite-difference
approximation is exploited by using geometric partitioning of the data among the processors. This provides a very low degree of communication between the processors. The resultant system of linear
equations is then solved by a variety of Conjugate Gradient methods. Care has been taken to ensure that the basic linear algebra operations are implemented as efficiently as possible for the
particular geometric partitioning used. 1 Introduction We consider the solution of the non-singular system of N linear equations, Ax = b (1) derived from a finite-difference approximation to a
partial differential equation (PDE), using parallel implementations of various Conjugate Gradient iterative methods. Systems like (1) are characterized by being large, structured and sparse. Because
of the very low d...
, 1993
Cited by 5 (4 self)
. We show how highly efficient parallel implementations of basic linear algebra routines may be used as building blocks to implement efficient higher level algorithms. We discuss the solution of
systems of linear equations using a preconditioned Conjugate-Gradients iterative method on a network of transputers. Results are presented for the solution of both dense and sparse systems; the
latter being derived from the finite-difference approximation of partial differential equations. 1 Introduction The numerical solution of non-singular systems of N linear equations, Ax = b (1) is
required in a wide range of practical applications. When N is large, the solution of (1) becomes time-consuming and techniques must be sought to accelerate the solution process. This may be
accomplished by 1. using an iterative method with an improved rate of convergence, possibly with the aid of preconditioning, 2. devising a parallel implementation of the method that allows the
efficient use of a parallel m...
Abstract. The purpose of this paper is to show a contributor the required style for a paper for ECAI-2006 and PAIS-2006. The specifications for layout are described so that non-L ATEX users can
create their own style sheet to achieve the same layout. The source for the sample file is available for L ATEX users. The PostScript and the PDF file is available for all. The layout is identical to
ECAI’02 and ECAI’04 papers. The publisher (IOS Press) will insert a footer for each page. 1 PAGE LIMIT The page limit for ECAI-2006 and PAIS-2006 full papers is 5 pages in the required format. The
page limit for poster submissions is 2 pages. This is a strict limit. Overlength papers will not be reviewed. 2 GENERAL SPECIFICATIONS The following details should allow contributors to set up the
general page description for their paper: 1. The paper is set in two columns each 20.5 picas (86 mm) wide with a column separator of 1.5 picas (6 mm). 2. The typeface is Times Modern Roman. 3. The
body text size is 9 point (3.15 mm) on a body of 11 point (3.85 mm) (i.e. 61 lines of text). 4. The effective text height for each page is 56 picas (237 mm). The first page has less text height. It
requires an additional footer space of 3.5 picas (14.8 mm) for the copyright inserted by the publisher and 1.5 picas (6 mm) of space before the title. The effective text height of the first page is
51 picas (216 mm). 5. There are no running feet for the final camera-ready version of the paper. The submission paper should have page numbers in the running feet.
. The purpose of this paper is to show a contributor the required style for a paper for IDAMAP-98. The specifications for layout are described so that non-L AT E X users can create their own style
sheet to achieve the same layout. The source for the sample file is available for L AT E X users. The PostScript file is available for all. 1 GENERAL SPECIFICATIONS The following details should allow
contributors to set up the general page description for their paper: 1. The paper is set in two columns each 20.5 picas (86 mm) wide with a column separator of 1.5 picas (6 mm). 2. The typeface is
Times Modern Roman. 3. The body text size is 9 point (3.15 mm) on a body of 11 point (3.85 mm) (i.e. 61 lines of text). 4. The running feet are set 1.5 picas (6 mm) below the last line of text. 2
TITLE, AUTHOR AND AFFILIATION 2.1 Title The title is set in 20 point (7 mm) bold with leading of 22 point (7.7 mm), centred over the full text measure, with 1.5 picas (6 mm) of space before and
after. 2.2 Aut...
. The purpose of this paper is to show a contributor the required style for a paper for ECAI 96. The specifications for layout are described so that non-L A T E X users can create their own style
sheet to achieve the same layout. The source for the sample file is available for L A T E X users. The PostScript file is available for all. 1 GENERAL SPECIFICATIONS The following details should
allow contributors to set up the general page description for their paper: 1. The paper is set in two columns each 20.5 picas (86 mm) wide with a column separator of 1.5 picas (6 mm). 2. The typeface
is Times Modern Roman. 3. The body text size is 9 point (3.15 mm) on a body of 11 point (3.85 mm) (i.e. 61 lines of text). 4. The running feet are set 1.5 picas (6 mm) below the last line of text. 2
TITLE, AUTHOR, AFFILIATION, COPYRIGHT AND RUNNING FEET 2.1 Title The title is set in 20 point (7 mm) bold with leading of 22 point (7.7 mm), centred over the full text measure, with 1.5 picas (6 mm)
of space ...
. The purpose of this paper is to show a contributor the required style for a paper for ECAI 94. The specifications for layout are described so that non-L A T E X users can create their own style
sheet to achieve the same layout. The source for the sample file is available for L A T E X users. The PostScript file is available for all. 1 GENERAL SPECIFICATIONS The following details should
allow contributors to set up the general page description for their paper: 1. The paper is set in two columns each 20.5 picas (86 mm) wide with a column separator of 1.5 picas (6 mm). 2. The typeface
is Times Modern Roman. 3. The body text size is 9 point (3.15 mm) on a body of 11 point (3.85 mm) (i.e. 61 lines of text). 4. The running feet are set 1.5 picas (6 mm) below the last line of text. 2
TITLE, AUTHOR, AFFILIATION, COPYRIGHT AND RUNNING FEET 2.1 Title The title is set in 20 point (7 mm) bold with leading of 22 point (7.7 mm), centred over the full text measure, with 1.5 picas (6 mm)
of space...
. The purpose of this paper is to show a contributor the required style for a paper for ECCOMAS96. The source for the sample file is available for L A T E X users on the Internet in ftp://
ftp.wiley.co.uk/pub/eccomas-styles. Non-L A T E X users should create their own style sheet, using whichever software they wish, to achieve the same layout. 1 GENERAL SPECIFICATIONS The following
details should allow contributors to set up the general page description for their paper: 1. The paper is set in two columns each 18 picas (76 mm) wide with a column separator of 1.5 picas (6 mm). 2.
The typeface is Times Modern Roman. 3. The body text size is 9 point (3.15 mm) on a body of 11 point (3.85 mm) (i.e. 56 lines of text). 4. The running feet are set 1.5 picas (6 mm) below the last
line of text. 2 TITLE, AUTHOR, AFFILIATION, COPYRIGHT AND RUNNING FEET 2.1 Title The title is set in 20 point (7 mm) bold with leading of 22 point (7.7 mm), centred over the full text measure, with
1.5 pica...
, 2000
. The purpose of this paper is to show a contributor the required style for a paper for ECAI-2000 and PAIS-2000. The specifications for layout are described so that non-L A T E X users can create
their own style sheet to achieve the same layout. The source for the sample file is available for L A T E X users. The PostScript and the PDF file is available for all. The layout is identical to
ECAI-98 papers except for the footers. The publisher (IOS Press) will insert a footer for each page. 1 GENERAL SPECIFICATIONS The following details should allow contributors to set up the general
page description for their paper: 1. The paper is set in two columns each 20.5 picas (86 mm) wide with a column separator of 1.5 picas (6 mm). 2. The typeface is Times Modern Roman. 3. The body text
size is 9 point (3.15 mm) on a body of 11 point (3.85 mm) (i.e. 61 lines of text). 4. The effective text height for each page is 56 picas (237 mm). The first page has less text height. It requires an
, 1992
We present an implementation of a finite-difference approximation for the solution of partial differential equations on transputer networks. The grid structure associated with the finite-difference
approximation is exploited by using geometric partitioning of the data among the processors. This provides a very low degree of communication between the processors.
, 1993
We show how highly efficient parallel implementations of basic linear algebra routines may be used as building blocks to implement efficient higher level algorithms. We discuss the solution of
systems of linear equations using a preconditioned Conjugate-Gradients iterative method on a network of transputers. Results are presented for the solution of both dense and sparse systems; the
latter being derived from the finite-difference approximation of partial differential equations. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2134960","timestamp":"2014-04-23T19:00:26Z","content_type":null,"content_length":"37806","record_id":"<urn:uuid:53b77d0f-ecc1-428d-8c08-f8ac56980cb0>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
El Cajon Science Tutor
Find an El Cajon Science Tutor
...For the past 6 years I also tutored as often as my schedule allowed for mostly chemistry subjects. I have worked with both high school and college students in the tutor setting. As an
undergraduate student, I also tutored entry-level and high school chemistry and math for 3 years.
9 Subjects: including organic chemistry, chemistry, reading, English
Hello! My name is Eric, and I hold a Bachelor's degree in Mathematics and Cognitive Science from the University of California - San Diego. I began tutoring math in high school, volunteering to
assist an Algebra 1 class for 4 hours per week.
14 Subjects: including physical science, psychology, physics, calculus
...I was responsible for designing the curriculum for a variety of classes including Anatomy and Physiology, Nutrition, Biomechanics and Proprioception, and many more. The enhanced curriculum paid
tremendous dividends for the students and their overall grades. My belief throughout the years in the...
14 Subjects: including biology, reading, writing, anatomy
...While attending West Virginia University, I took many classes that dealt with politics, ranging from Ancient Greek to modern day law and politics, and also history, ranging from the start of
Western Civilization to modern day events. I am a researcher, and if there is anything that you need help...
7 Subjects: including biology, world history, social studies, American history
...I am a San Diego native so I know my way around the area and I also play guitar and volleyball. I'm a very friendly, approachable person and I can't wait to hear from you!I worked at a
non-profit organization for a year doing math and science tutoring and homework help with under-served students...
41 Subjects: including organic chemistry, English, Spanish, chemistry | {"url":"http://www.purplemath.com/el_cajon_ca_science_tutors.php","timestamp":"2014-04-18T06:03:03Z","content_type":null,"content_length":"23751","record_id":"<urn:uuid:9b76a48a-6e43-4c63-92a0-10514146f2fa>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
AW Procedure
Adjusted Winner
AW starts with the designation of goods or issues in a dispute. The parties then indicate how much they value obtaining the different goods, or "getting their way" on the different issues, by
distributing 100 points across them. This information, which may or may not be made public, becomes the basis for fairly dividing the goods and issues later. Once the points have been assigned by
both parties (in secret), a mediator (or a computer) can use AW to allocate the goods to each party, and to determine which good (there will be at most one) that may need to be divided.
Let's illustrate the procedure with an example. Suppose Bob and Carol are getting a divorce and must divide up some of their assets. We assume that they distribute 100 points among the five items as follows:
Item Carol Bob
Retirement Account 50 40
Home 20 30
Summer Cottage 15 10
Investments 10 10
Other 5 10
Total 100 100
AW works by assigning, initially, each item to the person who puts more points on it (the larger value in each row of the table above). Thus, Bob gets the home, because he placed 30 points on it compared to
Carol's 20. Likewise, Bob also gets the items in the "other" category, whereas Carol gets the retirement account and the summer cottage. Leaving aside the tied item (investments), Carol has a total
of 65 (50 + 15) of her points, and Bob a total of 40 (30 + 10) of his points. This completes the "winner" phase of adjusted winner.
Because Bob trails Carol in points (40 compared to 65) in this phase, initially we award the investments on which they tie to Bob, which brings him up to 50 points (30 + 10 + 10). Now we will start
the "adjusted" phase of AW. The goal of this phase is to achieve an equitable allocation by transferring items, or fractions thereof, from Carol to Bob until their points are equal.
What is important here is the order in which items are transferred. This order is determined by looking at certain fractions, corresponding to the items that Carol, the initial winner, has and may
have to give up. In particular, for each item Carol won initially, we look at the fraction giving the ratio of Carol's points to Bob's for that item:
(Number of points Carol assigned to the item)/(Number of points Bob assigned to the item)
In our example, Carol won two items, the retirement account and the summer cottage. For the retirement account, the fraction is 50/40 = 1.25, and for the summer cottage the fraction is 15/10 = 1.50.
We start by transferring items from Carol to Bob, beginning with the item with the smallest fraction. This is the retirement account, with a fraction equal to 1.25. We continue transferring goods
until the point totals are equal.
Notice that if we transferred the entire retirement account from Carol to Bob, Bob would wind up with 90 (50 + 40) of his points, whereas Carol would plunge to 15 (65 - 50) of her points. We
conclude, therefore, that the parties will have to share or split the item. So our task is to find exactly what fraction of this item each party will get so that their point totals come out to be equal.
We can use algebra to find the solution. Let p be the fraction (or percentage) of the retirement account that we need to transfer from Carol to Bob in order to equalize totals; in other words, p is
the fraction of the retirement account that Bob will get, and (1-p) is the fraction that Carol will get. After the transfer, Bob's point total will be 50 + 40p, and Carol's point total will be 15 +
50(1-p). Since we want the point totals to be equal, we want to choose p so that it satisfies
50 + 40p = 15 + 50(1-p)
Solving for p we get
90p = 15
p = 15/90 = 1/6
Thus, Bob should get 1/6 of the retirement account and Carol should get the remaining 5/6.
Recall that initially Bob is receiving: (1) the home (30 points), (2) the "other" items (10 points), and (3) the investments (10 points). Together with 1/6 of the retirement account, Bob's point
total is now
30 + 10 + 10 + 40(1/6) = 50 + 40(1/6)
Recall that initially Carol is receiving: (1) the summer cottage (15 points). Together with 5/6 of the retirement account, Carol's point total is now
15 + 50(5/6)
Thus, each person receives exactly the same number of points, as he or she values their allocations.
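The algebra above is easy to mechanize. A minimal sketch in Python, with the point values from the example hard-coded (variable names are my own), solves 50 + 40p = 15 + 50(1 - p) for the transfer fraction p:

```python
bob_base = 50     # Bob's points before the transfer: home 30 + other 10 + investments 10
carol_base = 15   # Carol's points before the transfer: summer cottage
item_bob = 40     # Bob's valuation of the retirement account
item_carol = 50   # Carol's valuation of the retirement account

# Solve bob_base + item_bob*p = carol_base + item_carol*(1 - p) for p
p = (carol_base + item_carol - bob_base) / (item_bob + item_carol)
print(p)                          # 1/6, Bob's share of the retirement account
print(bob_base + item_bob * p)    # both totals come out equal, about 56.67 points
```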
Adjusted Winner (General Description)
Suppose that Bob and Carol want to fairly divide k goods, where k ≥ 2, and suppose Bob assigns x[i] points and Carol assigns y[i] points to good G[i].
AW allocates goods as follows:
1. Each player is given 100 points to assign to each good as he/she sees fit.
2. Suppose X is the sum of the points of all goods that Bob reports he values more than Carol does. Let Y be the sum of the values of the goods that Carol reports she values more than Bob does. Assume X ≥ Y. Assign the goods so that Bob initially gets all the goods where x[i] > y[i], and Carol gets the others.
3. List the goods in an order G[1],G[2],..., so that the following holds:
□ Bob, based on his reported values, values goods G[1],...,G[r] at least as much as Carol does (i.e. x[i] ≥ y[i] for 1 ≤ i ≤ r)
□ Carol, based on her reported values, values goods G[r+1],...,G[k] at least as much as Bob does (i.e. y[i] ≥ x[i] for r+1 ≤ i ≤ k)
□ x[1]/y[1] ≤ x[2]/y[2] ≤ . . . ≤ x[r]/y[r]
4. Transfer as much of G[1] from Bob to Carol as needed to achieve equitability -- that is, until the point totals of the two players are equal. If the scores are not equal after transferring all of G[1] to Carol, transfer as much of G[2], G[3], etc., as needed.
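The four steps can be sketched as a small routine. This is an illustrative Python implementation, not code from Brams and Taylor; names are my own, and tie handling differs slightly from the article (ties here go to the first player, but the transfer phase still equalizes the totals):

```python
def adjusted_winner(x, y):
    """x, y: lists of point values (each summing to 100) for players A and B.
    Returns (share_a, share_b): each player's fractional share of every good."""
    k = len(x)
    # Step 2: initial winner-take-all assignment (ties go to A here)
    a_gets = [i for i in range(k) if x[i] >= y[i]]
    total_a = sum(x[i] for i in a_gets)
    total_b = sum(y[i] for i in range(k) if x[i] < y[i])
    # Ensure A is the initial leader; otherwise swap roles and recurse once
    if total_a < total_b:
        share_b, share_a = adjusted_winner(y, x)
        return share_a, share_b
    # Step 3: order A's goods by increasing ratio x[i]/y[i]
    a_gets.sort(key=lambda i: x[i] / y[i] if y[i] else float("inf"))
    share_a = [1.0 if i in a_gets else 0.0 for i in range(k)]
    # Step 4: transfer goods (or fractions) from A to B until totals are equal
    for i in a_gets:
        if total_a <= total_b:
            break
        if x[i] + y[i] == 0:
            continue  # worthless good; transferring it changes nothing
        # Fraction p of good i to move: total_a - p*x[i] = total_b + p*y[i]
        p = min(1.0, (total_a - total_b) / (x[i] + y[i]))
        share_a[i] -= p
        total_a -= p * x[i]
        total_b += p * y[i]
    share_b = [1.0 - s for s in share_a]
    return share_a, share_b
```

Running it on the second example below (x = [6, 67, 27], y = [5, 34, 61]) leaves Bob with 100/101 of good B and gives Carol everything else, with both totals at about 66.3 points.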
The AW procedure satisfies the following properties:
• Envy-free: Bob and Carol both believe that their portion is at least tied for largest (based on their announced valuations)
• Equitability: Bob and Carol both believe that their portion is valued the same as the other player's (based on their announced valuations)
• Efficient: There is no allocation that is strictly better for Bob (or Carol) and as good for Carol (or Bob)
Suppose Bob and Carol are dividing three goods: A, B, and C.
1. They report the following point assignments:
Item Bob's reported values Carol's reported value
A 6 5
B 67 34
C 27 61
Total 100 100
2. X = 6+67 = 73 and Y = 61 so, initially Bob is assigned A and B, giving him 73 points, and Carol is assigned item C, giving her 61 points.
3. Since 6/5 = 1.2 and 67/34 = 1.97, order the goods: (1) A, (2) B, and (3) C.
4. We first transfer item A from Bob to Carol, so Bob has 67 points and Carol has 66 points (61 + 5). This allocation is not equitable, so we need to transfer part of item B from Bob to Carol. Let p
denote the proportion of B that Bob will keep, and (1-p) the proportion that Carol will keep. Then Bob's point total will be 67p, and Carol's point total will be 61 + 5 + 34(1-p) = 100 - 34p. We
will choose p so that
67p = 100 - 34p
yielding p = 100/101, which is approximately equal to 0.99.
Thus, Bob ends up with 99% of item B for a total of 66.3 of his points (67 x 0.99), whereas Carol ends up with all of item C, all of item A, and 1% of item B for a total of 66.3 of her points [61
+ 5 + ( 34 x 0.01)].
Check the properties
• Envy-free: Bob would not trade his allocation for Carol's allocation, since that would give him only 33 points. Similarly, Carol would not trade her allocation for Bob's allocation, since that
would give her only 34 points.
• Equitable: This property is obviously satisfied since both parties receive 66.3 points.
• Efficient: This is more difficult to show, because it requires showing that there can be no better allocation for both players. The formal proof can be found in the book Fair Division by Brams
and Taylor. It should be noted that the initial allocation is efficient, since each player receives all the goods he or she most values, and the equitability adjustment step does not affect efficiency. | {"url":"http://www.nyu.edu/projects/adjustedwinner/awProcedure.htm","timestamp":"2014-04-20T05:15:30Z","content_type":null,"content_length":"24750","record_id":"<urn:uuid:f24a65a5-307c-497a-93df-64660fd9ff4a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'Parakeet Basketball' Brain Teaser
Parakeet Basketball
Math brain teasers require computations to solve.
Puzzle ID: #49969
Category: Math
Submitted By: eighsse
Your favorite college basketball team, the Fighting Parakeets, is playing against a rival, but you haven't been able to see any of the game yet because you have been busy. However, your brother was
in a nearby room watching it. Your brother sees a statistics display during the halftime break and decides to give you a trick math test. You come into the room as the second half is beginning and
see that your team is winning 32-29. Your brother says, "The Fighting Parakeets have scored on 18 shots. They have made at least twice as many 2-point shots as free throws. How many free throws,
2-pointers, and 3-pointers have they made?" He thinks you will become confused in your figuring because he has not given you quite enough information to find the solution. However, you could hear
bits and pieces of the game from the next room while you were busy, and you happened to hear that your favorite Fighting Parakeets player nailed a long 3-point shot. What is the answer to your
brother's question?
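The puzzle reduces to a small integer system: with f free throws, t two-pointers, and h three-pointers, f + t + h = 18 and f + 2t + 3h = 32, subject to t ≥ 2f and, from the overheard shot, h ≥ 1. A brute-force check (my own illustration, not part of the puzzle page; note it reveals the answer):

```python
solutions = [
    (f, t, h)
    for f in range(19)
    for t in range(19 - f)
    for h in [18 - f - t]
    if f + 2 * t + 3 * h == 32  # halftime score
    and t >= 2 * f              # at least twice as many 2-pointers as free throws
    and h >= 1                  # you heard at least one 3-pointer go in
]
print(solutions)  # [(5, 12, 1)]: 5 free throws, 12 two-pointers, 1 three-pointer
```

Without the overheard 3-pointer, (4, 14, 0) would also satisfy the constraints, which is exactly the ambiguity the brother was counting on.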
| {"url":"http://www.braingle.com/brainteasers/49969/parakeet-basketball.html","timestamp":"2014-04-19T07:15:32Z","content_type":null,"content_length":"25177","record_id":"<urn:uuid:5b2338a7-0e07-4b6f-8d7b-bc4b04346483>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
Marc's MetaPost Pages (Examples: Mathematical Section)
Please find below links to a selection of MetaPost sources. You can download the whole bunch as a gzipped tar-file. The archive is a lot smaller than it used to be (it's about 110KB). It includes
tools to rebuild all the graphics and an INSTALL script. Just run the INSTALL script in the mpost directory and Bob's your uncle.
For most (if not all) of the examples it is assumed that you have set your TEX environment variable to latex. It is also assumed that you have ghostscript, latex, and metapost working.
Click on the links to see the pictures. There is also a page with links. Time permitting, I will add some more examples as well as some useful MetaPost tips.
Marc van Dongen | {"url":"http://www.cs.ucc.ie/~dongen/mpost/MathMenu.html","timestamp":"2014-04-21T07:04:27Z","content_type":null,"content_length":"3062","record_id":"<urn:uuid:cc6b965b-6775-4776-b614-02eb5f004d0a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Assembly Language Routines
20th May 2012, 21:51
Assembly Language Routines
I am looking for PIC asm routines for
Long Integer to String (Long Integer - 4bytes)
Double to String (Double - 4bytes)
Floating Point Math
Square root
I want to calculate numbers like 24.2874353903764903627393 * 5.527932455274853690390.
output string to lcd.
20th May 2012, 23:47
Re: Assembly Language Routines
21st May 2012, 04:43
Re: Assembly Language Routines
Can I use this for getting 4 byte variable routines?
22nd May 2012, 08:00
Re: Assembly Language Routines
You have to try to see the results.
22nd May 2012, 09:20
Re: Assembly Language Routines
In assembly, there is no concept of conversion between strings, integers, longs, and shorts. High-level languages create that concept by restricting the mathematical manipulation of variables.
In assembly there are only memory registers in which you store data, and you are free to manipulate them as you wish (using assembly instructions). You may store 0x48, 0x45, 0x4c, 0x4c, 0x4f in 5 registers and you can add them together, or consider them as ASCII values and get the word "HELLO" on an LCD.
However, you may write functions in assembly for data manipulation (e.g. convert ASCII values of lowercase letters to uppercase, add multi-register values, get the decimal representation of a register value...), but conversion between variable types is not built in.
Use a good book on assembly; I hope you get the idea.
22nd May 2012, 10:30
Re: Assembly Language Routines
The problem is, I am reading through the ADC. The byte values I get in ADRESH and ADRESL (12-bit) have to be converted to a decimal value and then converted to ASCII for display on the LCD. Give some names of assembly books.
22nd May 2012, 10:54
Re: Assembly Language Routines
On the Piclist site referenced above, there is a whole section on radix conversion: http://www.piclist.com/techref/micro...adix/index.htm
There are routines to go from binary 12 or 16 bit to BCD as well as from binary directly to ASCII decimal.
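The standard trick those routines use is repeated division by 10, collecting remainders as ASCII digits. Illustrated here in Python rather than PIC assembly (on a PIC the divide itself would be done with subtract-and-shift loops):

```python
def u16_to_ascii_decimal(value):
    """Convert an unsigned 16-bit value to its ASCII decimal digits."""
    if not 0 <= value <= 0xFFFF:
        raise ValueError("expected an unsigned 16-bit value")
    digits = b""
    while True:
        value, remainder = divmod(value, 10)        # peel off the lowest digit
        digits = bytes([ord("0") + remainder]) + digits
        if value == 0:
            return digits

print(u16_to_ascii_decimal(0x0FFF))  # b'4095'
```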
22nd May 2012, 11:22
Re: Assembly Language Routines
I need to do floating point multiplication and division on adc value. The resulting floating point value has to be converted to ascii for lcd display.
22nd May 2012, 11:45
Re: Assembly Language Routines
That is not quite what you described needing in post#6.
It appears now that you want to do math and multiply a 24-digit decimal by 22-digit decimal in floating point based somehow on results from 12-bit ADC. The max resolution that can produce is 1
part in 4096. The rest of the digits in your 24-digit number are meaningless.
At this point, I think the best way for you to get meaningful help is to describe what it is you are trying to do in more detail. Are you sure you need floating point? Why? For example, if you
need the remainder from a division, say to 4-decimal precision, you may be able to avoid floating point by simply left-shifting the dividend by two or more bytes. Then strip (truncate) the insignificant bits from the result later on.
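That shift-before-divide idea can be sketched in a few lines (shown in Python for clarity; the same scaled-integer arithmetic maps directly onto multi-byte PIC routines, and the example reading is my own):

```python
# Goal: adc * 5 / 4095 with four decimal places, using integers only.
adc = 0x0800            # example 12-bit ADC reading (2048)

SCALE = 10_000          # 10^4 keeps four fractional digits
scaled = adc * 5 * SCALE // 4095    # all-integer math; no floating point

# Split back into whole and fractional parts for display
whole, frac = divmod(scaled, SCALE)
print(f"{whole}.{frac:04d} V")      # 2.5006 V
```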
22nd May 2012, 12:21
Re: Assembly Language Routines
12-bit gives 4095 digital count. I've to multiply 4095 by some no. like 0.5346. Then I get some result. Then there are some similar divisions and multiplications. The final result will be some
hex value in the register. I need to convert that into decimal and display it on lcd.
22nd May 2012, 12:41
Re: Assembly Language Routines
The precision of your result is still limited by the precision of your ADC. Try to visualize your result, if the ADC only gave 1-bit precision (i.e., 0 or 1), or to avoid zero, try 2-bit
It is still not clear why you need floating point, as it appears the manipulations are known beforehand and the only data entry is the ADC value. Have you considered using tables? If you are
certain you need floating point point, have you seen this?
22nd May 2012, 12:44
Re: Assembly Language Routines
There is a reason most people use C instead of assembler. Imagine having to write a whole load of floating point functions, and then someone changes
their mind and wants to use AVR instead of PIC. And then changes their mind again to ARM.
Better to use C if you want to do a lot of floating point, unless you can use lookup tables or scale up and do non-floating point as mentioned above.
22nd May 2012, 12:57
Re: Assembly Language Routines
OK. Tell me how to do 32h * 5h / FFFh?
22nd May 2012, 13:55
Re: Assembly Language Routines
You do know that equals zero, right? or 0.xxx... if you're not constrained to integers.
If that's a calculation you need to do often, then you can pre-calculate and store in a look-up table
to whatever accuracy you desire.
For example, if you need to multiply numbers in the range of 0 to 100 by 5 and then divide by FFF,
then do that for all 100 numbers, and store the results. Then you look them up.
If you don't have space for a look-up table for all your calculations, then you could use external
flash, you can get it in small packages with SPI interface for example.
Or, use C and use a math library.
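A sketch of that table idea in Python (the scale factor and table size here are my own illustrative choices):

```python
# Precompute n * 5 / 0xFFF for every 12-bit ADC code, scaled by 1000
# so the table holds integers (3 implied decimal places).
SCALE = 1000

table = [(n * 5 * SCALE) // 0xFFF for n in range(0x1000)]

# At run time you look up instead of computing:
adc = 0x800                # example reading (2048 decimal)
scaled = table[adc]        # 2048 * 5 / 4095 = 2.500..., stored as 2500
print(scaled / SCALE)      # scale back down only when displaying: 2.5
```

On a real PIC the table would sit in program memory or external SPI flash, as suggested above, and the division by 1000 would just be a matter of where you print the decimal point.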
22nd May 2012, 14:05
Re: Assembly Language Routines
I get my ADC values from 0000h to 0FFFh. I have to multiply by 0005h and divide by 0FFFh (4095 decimal). I get some value, which is the resistance of a device. Depending upon the resistance, I have to calculate its temperature using 2 polynomial equations. I've done that in mikroC, but I am implementing it in PIC asm.
22nd May 2012, 14:15
Re: Assembly Language Routines
Right.. you're just scaling by a factor (i.e. 5/FFFh), nothing non-linear, so this is easier.
Technically you don't even need a lookup table. But here's the concept if you used one.
Perform the calculation for values from 0 to FFF and store the results (0.0012, 0.0024, etc.), but scaled up — e.g., as 1, 2, ... if you want 3 decimal places; here I've multiplied by 1000.
Make sure the table results are scaled up so that you're not dealing with decimal points.
If you want to continue doing further calculations, then you can use the scaled values.
If you want to display the results, then you know to scale back down by moving the decimal point.
The benefit of a look-up table is that it will save you compute time if you've got nonlinear
values. For the linear case, note that you could just multiply the single scaled value.
And a multiply is just 'add' instructions in a loop, if you're using an assembler.
22nd May 2012, 15:09
Re: Assembly Language Routines
But when I use the polynomial with the resistance, I will be taking squares and square roots of some constant coefficients.
22nd May 2012, 16:21
Re: Assembly Language Routines
So when you're using square roots, then this is absolutely a good use-case for a look-up table. Squares can be composed of multiple additions,
so you could use a loop and 'ADD' instruction in assembler. It will perform the square. Or to save time, use a look-up table for that too.
If you don't want to use a look-up table for square roots, then you need to find an algorithm (math book) and implement in assembler.
At some stage, you'll possibly decide that a math library is easier, and just move to C, or you'll dedicate memory for a lookup table.
Since your original problem is to use a temp probe and convert the value to a temperature, you'd save an awful lot of time just getting the
table of resistance values from the probe vendor and sticking it (or a conversion) in a lookup table. But it's your choice.
This thread is much like your other thread on this probe issue.
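If you do go the algorithm route for square roots, one textbook method that maps well onto assembler (only shifts, adds, and compares) is the bit-by-bit integer square root. A Python sketch of the idea:

```python
def isqrt(n):
    """Bit-by-bit integer square root: returns floor(sqrt(n)).

    Uses only shifts, additions, and comparisons, so it translates
    almost line-for-line into PIC assembler.
    """
    bit = 1 << 30            # largest power of 4 in a 32-bit range
    while bit > n:           # shrink to the largest power of 4 <= n
        bit >>= 2
    root = 0
    while bit:
        if n >= root + bit:
            n -= root + bit
            root = (root >> 1) + bit
        else:
            root >>= 1
        bit >>= 2
    return root

print(isqrt(4095))   # 63, since 64*64 = 4096 > 4095
```

Combined with the scaled-integer tricks above, that may be enough to avoid a floating-point library entirely, though the vendor's resistance table is still the least work.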
22nd May 2012, 18:05
Re: Assembly Language Routines
My equation is (-a + sqrt(a^2 - 4b(1 - x/xo))) / (2b), and const1*x + const2*x^2 - const3*x^3 - const4*x^4 + const5*x^5. I have to implement it in asm. In mikroC the double data type is 4 bytes, but I don't know how to perform math operations in asm. Should I use 4-byte variables in MPASM and perform the math operations on those? I know that operations will be performed in hex. I have to convert the resulting hex value to decimal. But how do I get values like 8.4168? How do I get the decimal part from the hex value? I'm using a PIC18; the registers are 1 byte. If I have 3 operands of 4 bytes each, how do I move a 4-byte value through the 1-byte W register when performing operations?
22nd May 2012, 19:45
Re: Assembly Language Routines
Honestly, I think it won't be possible, unless you can find by luck some floating point math libraries for your device.
It is a lot of work to implement it from scratch, especially if you're not familiar with this.
I say give up, and store a lookup table of the ADC value and the corresponding temperature.
If you can't do that due to lack of space, then change to a different PIC with more space, or external memory, or
switch to C and use a math library. Simple!
Hey guys just looking for help with this question dealing with population variance.
March 15th 2013, 07:09 PM #1
I don't quite know how to write this problem down here, so I'll just do it in paint and post up a link.
imgur: the simple image sharer
Thanks, it would be helpful if you could explain it to me through paint also, as I don't really understand math displayed on a computer. I just don't know how to proceed on the question, and i
don't even know what exactly it is asking me to do.
Last edited by Joyeux; March 15th 2013 at 08:09 PM.
Re: Hey guys just looking for help with this question dealing with population variance
I don't quite know how to write this problem down here, so I'll just do it in paint and post up a link.
imgur: the simple image sharer
Thanks, it would be helpful if you could explain it to me through paint also, as I don't really understand math displayed on a computer. I just don't know how to proceed on the question, and i
don't even know what exactly it is asking me to do.
Your equality is not true. The mean $\mu$ is missing an exponent of 2. See below (by the way, this is "math displayed on a computer", and I'm not quite sure how this is worse than numbers drawn in Paint):
$\sigma^2=\sum_{i=1}^n \frac{(x_i-\mu)^2}{n} = \sum_{i=1}^n \frac{x_i^2}{n} - 2\sum_{i=1}^n \frac{x_i \mu}{n} + \sum_{i=1}^n \frac{\mu^2}{n}$
$= \sum_{i=1}^n \frac{x_i^2}{n} - 2\mu\left(\sum_{i=1}^n \frac{x_i}{n}\right) + n\frac{\mu^2}{n}$
$= \sum_{i=1}^n \frac{x_i^2}{n} - 2\mu^2 + \mu^2$
$= \sum_{i=1}^n \frac{x_i^2}{n} - \mu^2$
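You can also convince yourself the identity holds by checking it numerically on a small data set (toy numbers of my own choosing):

```python
# Check sigma^2 = sum(x_i^2)/n - mu^2 against the defining formula.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(xs)
mu = sum(xs) / n                                     # mean = 5.0

var_def = sum((x - mu) ** 2 for x in xs) / n         # sum (x_i - mu)^2 / n
var_shortcut = sum(x ** 2 for x in xs) / n - mu ** 2 # sum x_i^2 / n - mu^2

print(var_def, var_shortcut)   # both 4.0
```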
Re: Hey guys just looking for help with this question dealing with population variance
Oops sorry I edited it, I still don't know how to solve it though. I don't think it's asking to change from one form to another. Your solution didn't have the substituted version of mu. Still
really confused.
Re: Hey guys just looking for help with this question dealing with population variance
I think this is the question it's asking, imgur: the simple image sharer. Still not 100% sure though.
6.2.2 Properties & Equivalent Expressions
Benchmark: 6.2.2.1 Properties & Equivalent Expressions
Big Ideas and Essential Understandings
Standard 6.2.2 Essential Understandings
Students at this level begin to develop the ability to generalize numerical relationships and express mathematical ideas concisely using expressions and equations (e.g., three more than a number as x
+ 3, doubling as 2n, commutativity as a + b = b + a). Concrete models and pictorial representations of algebraic expressions are used to develop understanding that the commutative, associative, and
distributive properties and order of operations apply in the same way that they did for numeric expressions. Students use these properties and the order of operations to generate equivalent
expressions and evaluate expressions that involve positive rational numbers.
All Standard Benchmarks
6.2.2.1 Apply the associative, commutative and distributive properties and order of operations to generate equivalent expressions and to solve problems involving positive rational numbers.
Benchmark Cluster
Benchmark Group A
6.2.2.1 Apply the associative, commutative and distributive properties and order of operations to generate equivalent expressions and to solve problems involving positive rational numbers.
What students should know and be able to do [at a mastery level] related to this benchmark:
• Understand that algebraic expressions behave in the same way as numerical expressions;
• Apply the order of operations to generate equivalent numeric expressions involving rational numbers;
Examples: [math]12.6-(5.1+4.2)=12.6-9.3=3.3[/math];
• Apply commutative, associative, and distributive properties to generate equivalent expressions;
Examples: [math]9 \times 52 = 9 \times (50 + 2) = (9 \times 50) + (9 \times 2) = 450 + 18 = 468[/math];
[math]12x + 2x = 2x + 12x = 14x[/math];
[math]5x \cdot 3 = 3(5x) = 15x[/math];
[math](x + 2) \cdot 5 = 5(x + 2 ) = 5x + 10[/math];
• Identify commutative, associative, and distributive properties used to generate equivalent numeric and algebraic expressions;
Examples: [math]3 \cdot (x+5) = (x+5) \cdot 3[/math]; Commutative Property of Multiplication;
• Evaluate algebraic expressions when given positive rational numbers as values for variables.
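A quick way to spot-check equivalences like those above is to evaluate both sides for several values of the variable. An illustrative Python sketch (the sample values, including the positive rationals, are arbitrary choices):

```python
# Spot-check that expressions related by the properties agree for
# sample values of x, including positive rationals per the benchmark.
from fractions import Fraction

def check(lhs, rhs, samples=(Fraction(1, 2), Fraction(3, 4), 2, 5)):
    """Return True when both expressions agree on every sample value."""
    return all(lhs(x) == rhs(x) for x in samples)

assert check(lambda x: 12 * x + 2 * x, lambda x: 14 * x)      # combining like terms
assert check(lambda x: 5 * x * 3,      lambda x: 15 * x)      # commutative/associative
assert check(lambda x: (x + 2) * 5,    lambda x: 5 * x + 10)  # distributive
```

Agreement on sample values is evidence, not proof, of equivalence; the properties themselves supply the proof.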
Work from previous grades that supports this new learning includes:
Apply the commutative, associative and distributive properties and order of operations to generate equivalent numerical expressions and to solve problems involving whole numbers;
• Determine whether an equation or inequality is true or false for a given value of the variable;
• Represent real-world situations using equations and inequalities involving variables. Create real-world situations corresponding to equations and inequalities;
• Evaluate expressions and solve equations involving variables when values are given for the variables.
NCTM Standards
Understand meanings of operations and how they relate to one another
● Understand the meaning and effects of arithmetic operations with fractions, decimals, and percents.
● Use the associative and commutative properties of addition and multiplication and the distributive property of multiplication over addition to simplify computations with integers, fractions, and
Common Core State Standards (CCSS)
6 EE (Expressions and Equations) Apply and extend previous understandings of arithmetic to algebraic expressions.
● 6.EE.3 Apply the properties of operations to generate equivalent expressions. For example, apply the distributive property to the expression 3(2 + x) to produce the equivalent expression 6 + 3x;
apply the distributive property to the expression 24x + 18y to produce the equivalent expression 6(4x + 3y); apply properties of operations to y + y + y to produce the equivalent expression 3y.
Student Misconceptions
Student Misconceptions and Common Errors
• Students incorrectly apply the order of operations;
• Students may think that 3 x 5 is equivalent to 3 x 3 + 2, not recognizing the need for parentheses;
• Students misinterpret exponents (e.g., 4^2 as 4 x 2);
• Students may be confused by the differences between the commutative and associative properties and will incorrectly identify them;
• Since [math]2+\frac{1}{2}=2\frac{1}{2}[/math], students misinterpret 2x as 2 + x;
• Students do not recognize that x + x can be simplified to 2x;
• Students may misinterpret x + x + x as x^3, rather than 3x;
• Students do not recognize that [math]x \cdot 5[/math] and [math]5x[/math] are equivalent expressions, resulting in the inability to generate the equivalent expression [math]8x[/math] for [math]x
\cdot 5 + 3x[/math];
• When given x = 3, students incorrectly interpret 5x as 53.
In the Classroom
In this vignette, students use associative, commutative, and distributive properties to generate equivalent algebraic expressions for the area of a rectangle.
Teacher: Today's task is to generate as many expressions as you can to represent the area of this rectangle.
Teacher: To help us think about this, let's start by using algebra tiles to represent this rectangle.
Student1: I already drew a picture to show what that might look like.
Teacher: What algebraic expression would you use to represent the area?
Student1: The picture shows x + x + x + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1+ 1 + 1 + 1.
Teacher: It certainly does. Is there another way to write that expression that may be a little simpler?
Student1: Sure. You could add all the (x)s together and add all the (1)s together. Then you'd get 3x + 15.
Teacher: So what you did was grouped the "like" terms. You put together all the (x)s in one group and all the (1)s in another group. We often use parentheses in mathematics to show groups, so you
could write your work like this: (x + x + x) + (1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1+ 1 + 1 + 1) = 3x + 15.
Student2: I know an easier way to do the problem.
Teacher: What would that be?
Student2: Well, I remember that the fastest way to find the area of a rectangle is to multiply the length by the width. That's why I wrote the area as x + 5 x 3.
Student3: I agree that the area of a rectangle can be found by multiplying the length by the width, but I think there's something wrong with your expression.
Teacher: Tell me more.
Student3: The algebra tiles show that there are 3 (x)s and 15 units. When you use the order of operations to simplify x + 5 x 3, you get x + 15. Somehow you lost two (x)s, but I don't know how.
Teacher: According to the order of operations, multiplication does come before addition so your observation is correct. How did the two (x)s get lost?
Student3: It's because you only multiplied the 5 by 3, and didn't multiply the x by 3.
Teacher: Oh, you're saying that 3 only got distributed over the 5 but not the x. How can we rewrite the expression to make it accurate and show that the 3 needs to get distributed, or multiplied by
both 5 and the x?
Student3: You need to put parentheses around the x + 5.
Teacher: Show me what you mean.
Student3: Like this. (x + 5) x 3. That means that both the x and the 5 get multiplied by 3.
Teacher: Time out. I'm having a little trouble making sense of what you wrote, because you've used an x to represent the variable and an x to indicate multiplication. Is there another way to write
the expression that may be a little less confusing.
Student3: I see what you mean. I suppose I could write the expression as (x + 5)٠3, since sometimes a dot is used to represent multiplication.
Student1: Why do you have to write any symbol at all?
Teacher: What are you suggesting?
Student1: That we write it like this: (x + 5)3.
Teacher: It is true that in algebra, when we write quantities right next to each other without any symbol in between, multiplication is implied. For example, 6y means 6 times whatever value the
variable y has.
Student3: I understand that you're supposed to multiply when there's no symbol, but that expression looks confusing to me.
Teacher: What part is confusing you?
Student3: Writing the 3 at the end of the expression instead of the beginning.
Teacher: How do you suggest we write the expression?
Student3: I'd write 3(x + 5).
Teacher: Why is that less confusing to you?
Student3: Because we usually write the numbers first.
Teacher: Give me an example.
Student3: Like we usually write 3x, not x3.
Teacher: You do have a point. It is our practice in algebra to write the number that's being multiplied before the variable. But are those two expressions equivalent? Is 3x equivalent to x3?
Student3: I think so. If you substitute a number in for x, like 2, you get 6 for both expressions. In the first expression you multiply 3 times 2, and in the second expression you multiply 2 times
three. You get the same thing. It's just that you multiply in a different order.
Teacher: Exactly. In mathematics we have a property that says that when you multiply, the order of the factors does not matter. You'll get the same result. It's the commutative property. Let's go
back to our rectangle problem now. How are the expressions (x + 5)3 and 3(x + 5) different?
Student3: The only difference is the order that the factors are written down.
Teacher: Are the two expressions equivalent?
Student3: They must be.
Teacher: And how do you know?
Student3: Because of the commutative property.
Teacher: Yes, but remember - the commutative property only works for multiplication and addition. Subtraction and division are not commutative. If you change the order of the numbers when performing
those operations you'll get different answers. Now I'm curious. Earlier we wrote the area of the rectangle as 3x + 15. How do we know that 3x + 15 and 3(x + 5) are equivalent?
Student1: Oh, that's another property. I think it's called the distributive property. You use that property to get rid of the parenthesis and simplify 3(x + 5).
Teacher: How does the distributive property work?
Student1: First you multiply 3 by x, which is 3x. Then you add 3 times 5, or 15. The answer is 3x + 15.
Teacher: The distributive property gets its name because it "distributes" the factor outside the parentheses over the terms within the parentheses. I know that properties are somewhat like laws; they
always work. But why does the distributive property work?
Student1: I think it's sort of like using partial products.
Teacher: That's an interesting thought. The distributive property lets you "distribute," or multiply a number over each addend of a sum and then add the products. Let me give you an example of the
distributive property using numbers: 5 x 12 = 5 x (10 + 2) = (5 x 10) + (5 x 2) = 50 + 10 = 60.
In this example the 5 is "distributed" to both the 10 and the 2. That means that 5 is multiplied by both 10 and 2, resulting in the partial products 50 (5 x 10) and 10 (5 x 2). When I add 50 and 10
together I get 60, which is the same result I get for 5 x 12 using any strategy. In our rectangle problem, our partial products are 3x and 15. However, we're not able to combine them like we did with
50 and 10 because they're not "like" terms. But back to our original task of generating equivalent expressions for the area of the rectangle. So far we have:
x + x + x + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1+ 1 + 1 + 1
(x + x + x) + (1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1+ 1 + 1 + 1)
3x + 15
(x + 5)3
3(x + 5)
We've also talked about how the commutative and distributive properties can be used to generate equivalent expressions. But I haven't heard anyone talk about the associative property. What is that
property and how might it be used to generate an equivalent expression for the area of our rectangle?
Student1: The associative property is about grouping. It says that we can change the grouping of what's being added or multiplied without changing the results.
Teacher: Yes, and we often use parentheses to show those groups. Remember, this property does not apply to subtraction or division, just like the commutative property. So how can we use the
associative property to find more equivalent expressions for the area of our rectangle?
Student1: I have an idea. I'm going to use the associative property to change the grouping of what's being added. Here's my thinking:
(x + x + x) + (1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1+ 1 + 1 + 1) =
([x + x] + x) + ([1 + 1 + 1 + 1] + [1 + 1 + 1 + 1 + 1 + 1 + 1 + 1+ 1 + 1 + 1]) =
2x + x + 4 + 11
That means that 2x + x + 4 + 11 is another expression for the area of the rectangle.
Teacher: Let's go back to our picture to verify that. I see 2 (x)s, another x, 4 ones, and 11 ones. The picture supports your idea. Now I'd like you to work with your partners to generate 3 more
equivalent expressions for the area of the rectangle. Please identify which properties justify your thinking.
Instructional Notes
Teacher Notes
• The mnemonic Please Excuse My Dear Aunt Sally is helpful for students who cannot remember the order of operations (Parentheses, Exponents, Multiply, Divide, Add, Subtract). Remind students that
calculations are always done left to right, and that groups must follow the rules working from innermost to outer.
• Students will benefit from a discussion of what the word equivalent means. For example, a dozen and 12 items, or $1.00 and 4 quarters are equivalent because they have the same value. In
mathematics, we have equivalent fractions ([math]\frac{1}{2}[/math] and [math]\frac{2}{4}[/math]) and equivalent measures (1 foot and 12 inches). Equivalent expressions, (12 + 7 and 7 + 12 or x +
x + 2 and 2x + 2) also have the same value.
• Students who believe that 3 x 5 is equivalent to 3 x 3 + 2 need additional opportunities to see how parentheses are necessary to represent 5 as 3 + 2 and change the typical order of moving left
to right when performing computations to maintain equivalence. They will benefit from activities where they are asked to insert parentheses into expressions to generate a given number. For
example, how can parentheses be inserted into the expression 5 + 3 x 4 - 2 to make it equivalent to 30? Equivalent to 11? Equivalent to 15?
• Students' previous experiences with exponents has most likely been connected to place value using base 10; e.g. 10^2 = 10 x 10 = 100; 10^3 = 10 x 10 x 10 = 1000; 10^4 = 10 x 10 x 10 x 10 =10,000.
Use this connection to help students see the base as a repeated factor, with the exponent telling how many times the base is repeated.
• The key words to remember are order for the Commutative Property and grouping for the Associative Property. Remind students that the Associative Property moves the parentheses but does not change
the position of the numbers. The Commutative Property changes the positions of the numbers;
• Provide graphic organizers, such as those shown below, to help students understand the properties and how they are used to simplify numeric and algebraic expressions;
• Be sure to give students plenty of opportunity to use concrete materials and pictorial representations of algebraic expressions before moving to the symbolic. This will help students see that 2 +
x is not equivalent to 2x , although [math]2+\frac{1}{2}=2\frac{1}{2}[/math].
• Without extensive concrete and semi-concrete experiences, it is also difficult for students to understand that x + x = 2x. Because the coefficient 1 is implied, students will benefit from the
teacher writing the coefficient 1 whenever it is implied to make it explicit. Writing 1x + 1x + 1x may help students visualize three distinct quantities that can be added together and represented
as 3x.
• The use of concrete models and pictorial representations will also help students who have the misconception that x + x + x can be expressed by the expression x^3 rather than 3x. Remind students
that in exponential notation, the base is a repeated factor, and does not indicate repeated addition.
• Students struggle to understand that x5 and 5x are equivalent expressions, because it conflicts with their previous experiences with numbers (53 and 35 are not equivalent). This provides an
opportunity to show that the properties actually do act the same way with numbers and algebraic expressions. For example, 50 + 3 = 3 + 50 and 50 [math]\cdot[/math] 3 = 3 [math]\cdot[/math] 50
demonstrate the commutative property of addition and multiplication. In the algebraic expression x5, the multiplication is implied. Therefore, x5 means x [math]\cdot[/math] 5. According to the
commutative property, x [math]\cdot[/math] 5 is equivalent to 5 [math]\cdot[/math] x, which is usually written as 5x. Therefore, x5 + 3x can be rewritten as 5x + 3x = 8x. In the number 53, the
addition 50 + 3 is implied. According to the commutative property 53 = 50 + 3 = 3 + 50, not 35.
• Students that interpret 5x as 53 for x = 3 need to be reminded that in algebraic expressions when two quantities are written next to each other without any symbol, multiplication is implied.
These students may benefit from discussion about how x is not usually used to represent multiplication in algebraic expressions to avoid confusion with the variable x, and that a dot is often
used when a multiplication symbol is needed to prevent confusion.
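The parentheses-insertion exercise suggested in the notes above (placing parentheses in 5 + 3 x 4 - 2 to produce 30, 11, or 15) can be checked by enumerating a few placements; a small sketch using Python's `eval` on fixed strings:

```python
# Brute-force check of the teacher-note exercise: where can parentheses
# go in 5 + 3 x 4 - 2 to produce 30, 11, or 15?
candidates = [
    "5 + 3 * 4 - 2",        # no parentheses      -> 15
    "(5 + 3) * 4 - 2",      #                     -> 30
    "(5 + 3) * (4 - 2)",    #                     -> 16
    "5 + 3 * (4 - 2)",      #                     -> 11
    "5 + (3 * 4) - 2",      # redundant grouping  -> 15
]

for expr in candidates:
    print(expr, "=", eval(expr))
```

So 30 comes from (5 + 3) x 4 - 2, 11 from 5 + 3 x (4 - 2), and 15 needs no parentheses at all — a nice discussion point about when grouping symbols actually change the result.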
Instructional Resources
Instructional Resources
Distributing and Factoring Using Area (NCTM Illuminations)
Additional Instructional Resources
This interactive website includes print activities with 5 different levels of difficulty.
This website is a game for two users where each player tries to connect four game pieces in a row before his or her opponent. Players can also choose game difficulty.
New Vocabulary
New Vocabulary
base (of an exponent): the number used as the factor in exponential notation base^exponent
Example: In 6^4 = 6 x 6 x 6 x 6, 6 is the base used as a factor 4 times.
exponent: in exponential notation (base^exponent ), the exponent is the number that tells how many times the base is used as a factor
Example: In 8 x 8 x 8 = 8^3, the exponent is 3 with base 8.
evaluate: to find the value. To evaluate algebraic expressions, particular numbers are substituted for variables before calculating.
Example: To evaluate 7x for x = 5, x is replaced with 5, resulting in 35
order of operations: the rules describing what sequence to use in evaluating expressions. All calculations are done left to right, in the following order:
1. Parentheses and other grouping symbols; work from the innermost set using rules 2-4;
2. Exponents
3. Multiply or divide in the order the operations occur.
4. Add or subtract in the order the operations occur.
Example: [math]5^2+(3\times 4-2)\div 5 = 5^2+(12-2)\div 5[/math]
[math]= 5^2 + 10 \div 5[/math]
[math]= 25 + 10 \div 5[/math]
[math]= 25 + 2[/math]
[math]= 27[/math]
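The worked example above, checked step by step in code (a minimal sketch):

```python
# The worked order-of-operations example, evaluated one rule at a time.
step1 = 3 * 4 - 2        # innermost parentheses: 12 - 2 = 10
step2 = step1 / 5        # division before addition: 10 / 5 = 2
result = 5 ** 2 + step2  # exponent first, then addition: 25 + 2 = 27
assert result == 27
```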
rational number: any number that can be expressed in the form [math]\frac{a}{b}[/math], where a and b are integers and b ≠0. A rational number can always be represented by either a terminating or a
repeating decimal. Examples: [math]\frac{2}{3}[/math]; 4 (which can be expressed as [math]\frac{4}{1}[/math]); 2.25 (which can be expressed as [math]\frac{225}{100}[/math].)
simplify (an expression): to rewrite by removing parentheses and by combining like terms. Examples: 3x + 4 + 2 + 2x can be simplified as 5x + 6 and 3(2y + 4) - y can be simplified as 5y + 12
simplify (a fraction): to express in simplest form, or lowest terms. The numerator and denominator of proper fractions in simplest form have no common factor other than 1. Improper fractions and
mixed numbers are in simplest form when the fraction part is proper and in simplest form. Examples: The numerator and denominator of [math]\frac{4}{8}[/math] share the common factor 4, so must be
rewritten as [math]\frac{1}{2}[/math] to be in simplest form; [math]\frac{19}{3}[/math] written in simplest form is [math]6\frac{1}{3}[/math].
variable: a quantity that changes or that can have different values; a letter is often used to represent a variable quantity. Example: In the expression 5n, n is a variable because it can have
different values.
Professional Learning Communities
Reflection - Critical questions regarding the teaching and learning of this benchmark:
How can concrete models and pictorial representations be used to move students to abstract representations?
• What activities will help students gain the understanding that algebraic representations work in the same way as numerical expressions?
• What evidence shows that students can apply and identify properties used to generate equivalent expressions?
• What evidence shows that students can apply the order of operations to generate equivalent expressions?
• What student misconceptions need to be addressed?
Materials - suggested articles and books
● Welder, R.M., Improving Algebra Preparation: Implications from Research on Student Misconceptions and Difficulties. Web. 01 May 2011.
● This article from NCTM's Principles and Standards for School Mathematics discusses the importance of using informal explorations with physical models, data, graphs, and other mathematical
representations rather than facility with formal algebraic manipulation at the middle school level.
Minnesota's K-12 Mathematics Frameworks. (1998). St. Paul, MN: SciMathMN.
Focus in Grade 6 Teaching with Curriculum Focal Points. (2010). Reston, VA: National Council of Teachers of Mathematics, Inc.
Developing Essential Understanding of Ratios, Proportions & Proportional Reasoning Grades 6-8. (2010). Reston, VA: National Council of Teachers of Mathematics, Inc.
Principles and Standards for School Mathematics. (2000). Reston, VA: National Council of Teachers of Mathematics, Inc.
(DOK Level 1)
1. Evaluate [math]\frac{2}{3}x+\frac{1}{2}[/math] for [math]x=\frac{3}{4}[/math].
Answer: 1
(DOK Level 1)
● Tell which property is illustrated by each statement.
a) [math]2.1 \cdot (m + 3) = (m + 3) \cdot 2.1[/math]
b) [math]2.1 \cdot (m \cdot 3) = (2.1 \cdot m ) \cdot 3[/math]
c) [math]2.1 \cdot (m + 3) = 2.1 \cdot m + 2.1 \cdot 3[/math]
a) commutative property of multiplication
b) associative property of multiplication
c) distributive property
(DOK Level 2)
● Use the indicated property to complete each statement.
(DOK Level 2)
● Simplify: 5m + 12 + 3(m + 2)
Answer: 5m + 12 + 3(m + 2) = 5m + 12 + 3m + 6 = 8m + 18
(DOK Level 3)
● Draw models for 2y + 4 and 2(y + 4). Explain how they are different.
Sample Answer:
The model for 2y + 4 shows 2 (y)s and 4 ones. The model for 2(y + 4) shows two groups of (y + 4), which is the same as 2 (y)s and 8 ones.
(DOK Level 4)
● Sam says that 5m + 12 + 3(m + 2) is equivalent to 8m + 18. Prove Sam's statement by using the commutative, associative, and distributive properties. Identify each property used.
Sample answer:
5m + 12 + 3(m + 2) = 5m + 12 + 3m + 6 (Distributive Property)
= 5m + 3m + 12 + 6 (Commutative Property of Addition)
= (5m + 3m) + (12 + 6) (Associative Property of Addition)
= 8m + 18
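Sam's claim can also be verified numerically for sample values of m (a quick sketch; the sample values are arbitrary and include positive rationals, in keeping with the benchmark):

```python
# Numeric check of the assessment item: 5m + 12 + 3(m + 2) == 8m + 18.
from fractions import Fraction

for m in (0, 1, Fraction(1, 2), Fraction(7, 3), 10):
    assert 5 * m + 12 + 3 * (m + 2) == 8 * m + 18
```

As with the earlier sketch, sample agreement illustrates the equivalence; the property-by-property proof above is what establishes it for all values of m.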
Struggling Learners
Struggling Students
• Always provide algebra tiles or other manipulatives for students to make concrete representations of algebraic expressions.
• Provide graphic organizers, such as those shown below, to help students understand the properties and how they are used to simplify numeric and algebraic expressions.
This website is an interactive game where students practice order of operations and fluency with whole number facts.
This website provides students with practice identifying four properties that involve addition: commutative, associative, additive identity, and distributive.
This website provides students with practice identifying four properties that involve multiplication: commutative, associative, multiplicative identity, and distributive.
English Language Learners
• Numbers always behave in the same predictable way. Help students understand that the sets of rules that describe how numbers behave are referred to as Properties in mathematics. How the numbers
are behaving can help identify which property is being used. Make students aware that the word property has multiple meanings, such as land or possessions.
• The root word of Commutative Property is commute, which means to move around. Students may have heard others talk about how long it takes them to commute from home to work, or move around from
home to work. Help students connect the Commutative Property to the idea that they're moving around, or changing the order of the numbers being added or multiplied.
• The root word of Associative Property is associate, which means to put things together. For example, students may associate dogs with barking. In mathematics we often use parentheses to put
things together. Help students connect the Associative Property to the idea that they're using parentheses to put together groups of numbers that are being added or multiplied;
• The root word of Distributive Property is distribute, which means to spread about among several. Help students connect the Distributive Property of Multiplication over Addition or Subtraction to
the idea that you're spreading out the multiplying over the quantities being added or subtracted;
• Use graphic organizers such as the Frayer model shown below, for vocabulary development.
Extending the Learning
In this unit, students create a shape sorter and consider all possibilities that will return its shape to its original position. To learn about the commutative and associative properties, they
investigate the results when two of these moves are performed consecutively.
This website uses a game format for students to practice fluency and estimation skills for mental calculations involving whole numbers.
This website uses a game format for students to practice fluency for mental calculations involving negative integers.
Classroom Observation
Administrative/Peer Classroom Observation
│ Students are: (descriptive list) │ Teachers are: (descriptive list) │
│using the commutative, associative, and distributive properties for mental math │asking students to explain their thinking when calculating mentally with whole numbers, and helping them │
│calculations with whole numbers │see how the properties are being used │
│using concrete and pictorial representations to model algebraic expressions │scaffolding learning to move students from concrete and semi-concrete to abstract representations of │
│ │algebraic expressions │
│applying the order of operations and properties to generate equivalent numeric and │connecting students' experiences with order of operations and properties involving numeric expressions to │
│algebraic expressions │experiences with algebraic expressions │
│evaluating algebraic expressions by replacing variables with positive rational numbers │using algebraic expressions from real-life, such as [math]\pi {r}^{2}[/math] and lwh, as a source of rich │
│ │problems for students to evaluate using positive rational numbers │
│using mathematical language in verbal and written communication │requiring students to explain their reasoning and justify their solutions │
│solving problems involving positive rational numbers by applying the order of operations │posing questions that require students to apply the order of operations and various properties in order to│
│and commutative, associative, and distributive properties │solve │
Parent Resources
● Commutative, Associative, and Distributive Laws
This website provides an explanation of the properties, and includes practice questions and an activity.
This website is an interactive game where students practice order of operations and fluency with whole number facts.
This website provides students with practice identifying four properties that involve addition: commutative, associative, additive identity, and distributive.
● Properties of Multiplication
This website provides students with practice identifying four properties that involve multiplication: commutative, associative, multiplicative identity, and distributive.
● Order of Operations I
This website uses a game format for students to practice fluency and estimation skills for mental calculations involving whole numbers.
Framework Feedback
Love this Framework? Have thoughts on how to improve it? We want to hear your Feedback.
Give us your feedback
What grade level do you work with? | {"url":"http://scimathmn.org/stemtc/frameworks/622-properties-equivalent-expressions","timestamp":"2014-04-20T13:25:52Z","content_type":null,"content_length":"75864","record_id":"<urn:uuid:84d55921-5526-4cb5-b02c-c530adceb8d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
C++ code for Kruskal Algorithm
Newbie Poster
4 posts since Aug 2006
I wanted to know the implementation code for Kruskal's algorithm in C++.
Can anyone help me with the code?
Junior Poster
147 posts since Jul 2006
Posting Whiz in Training
273 posts since Jun 2005
I don't think that someone will provide you the source code. Try Google; I bet that you will find something.
Failure as a human
10,399 posts since Jun 2006
Post your attempt and we will definitely help you out on topics where you got stuck. Just giving out the source code would do you more harm than good.
Newbie Poster
1 post since Oct 2007
C++ code for kruskal algorithm...
will you provide me..
Nearly a Posting Maven
2,463 posts since Jun 2007
smart_pallav... welcome aboard. I recommend taking a look at the Rules & FAQ for the DaniWeb forum. Please do not re-open old threads. You must start your own thread and post what you've come up
with for us to help; we don't do assignments for students!
Newbie Poster
1 post since Mar 2008
Please mail me the Java code for Prim's, Kruskal's and single-source shortest path algorithms.
My id is << email id snipped >>
Newbie Poster
2 posts since Jan 2010
Hi, below is the Kruskal algorithm code.
It uses the UNION-FIND data structure.
Code starts:
/* Kruskal's Algorithm
   Input defined as follows:
   first give the number of nodes n, then for each edge give
   the starting end of the edge, the ending end of the edge, and the cost of the edge;
   a starting end of 0 ends the input.
   The example graph has edges 1-2, 2-5, 3-4, 4-2, 2-3.
*/
#define N 100   // N is max nodes
#define M 10000 // M is max edges
#include <cstdlib>
#include <iostream>
using namespace std;

// UNION FIND DATA STRUCTURE STARTS
struct data {
    int name;
    int size;
    struct data *home; // parent pointer; a root points to itself
};
typedef struct data mydata;

class makeunionfind {
public:
    mydata S[N];
    makeunionfind(int n) {
        for (int i = 0; i < n; i++) {
            S[i].name = i + 1;
            S[i].size = 1;
            S[i].home = &S[i];
        }
    }
    int myfind(int a) {
        mydata *temp, *temp2, *stoppoint;
        int result;
        temp = &S[a - 1];
        while (temp->home != temp) // walk up to the root
            temp = temp->home;
        stoppoint = temp;
        result = stoppoint->name;
        // path compression: point every node on the path at the root
        temp = &S[a - 1];
        while (temp != stoppoint) {
            temp2 = temp->home;
            temp->home = stoppoint;
            temp = temp2;
        }
        return result;
    }
    void myunion(int a, int b) {
        int sizea, sizeb;
        mydata *roota = &S[myfind(a) - 1];
        mydata *rootb = &S[myfind(b) - 1];
        if (roota == rootb) return;
        sizea = roota->size;
        sizeb = rootb->size;
        // union by size: hang the smaller tree under the larger root
        if (sizea < sizeb) { mydata *t = roota; roota = rootb; rootb = t; }
        rootb->home = roota;
        roota->size = sizea + sizeb;
    }
};
// UNION FIND DATA STRUCTURE ENDS

// Kruskal's algorithm starts
struct node {
    int name;
};
typedef struct node mynode;

class edge {
public:
    mynode *start, *end;
    double cost;
};

int compare(const void *a, const void *b) {
    const edge *a1 = (const edge *)a;
    const edge *b1 = (const edge *)b;
    if (a1->cost < b1->cost)
        return -1;
    else if (a1->cost > b1->cost)
        return 1;
    return 0;
}

edge *kruskal(edge *e, int n, int m, int *size, edge *ans) {
    makeunionfind list(n);
    qsort(e, m, sizeof(edge), compare); // sort edges by nondecreasing cost
    int k = 0;
    for (int i = 0; i < m; i++) {
        int s = list.myfind(e[i].start->name);
        int f = list.myfind(e[i].end->name);
        if (s != f) { // edge connects two different components: keep it
            ans[k++] = e[i];
            list.myunion(s, f);
        }
    }
    *size = k;
    return ans;
}

int main() {
    static mynode nodes[N];
    static edge e[M];
    static edge ans[M];
    int n, m; // n is the number of nodes, m is the number of edges
    cin >> n;
    for (int i = 0; i < n; i++)
        nodes[i].name = i + 1;
    // temp1 is the starting node, temp2 is the ending node, temp3 is the cost
    int temp1, i;
    cin >> temp1;
    for (i = 0; temp1 != 0; i++) {
        int temp2;
        double temp3;
        cin >> temp2 >> temp3;
        e[i].start = &nodes[temp1 - 1];
        e[i].end = &nodes[temp2 - 1];
        e[i].cost = temp3;
        cin >> temp1;
    }
    m = i;
    int size;
    kruskal(e, n, m, &size, ans);
    for (int p = 0; p < size; p++)
        cout << p + 1 << ") start " << (ans[p].start)->name
             << " end " << (ans[p].end)->name << endl;
    return 0;
}
The above works fine :) and is a good implementation.
If you want to know about the union-find data structure, please search Google.
Newbie Poster
3 posts since Sep 2008
I know of two links you may be interested in.
1. An ordinary C++ implementation of Kruskal's algorithm
2. A C++/MFC tool with graphical user interface to add network nodes and links etc
and calculate the Kruskal minimal spanning tree via the Boost libraries
Newbie Poster
3 posts since Oct 2013
Hi all... you asked this question 7 years ago. Back then I did not even know what programming was; today I do. Today I am going to give you the exact answer you were expecting at the time of posting
this question... simply visit the following link to get the code for Kruskal's algorithm in C++.
Summary: SIMPLICITY OF NONCOMMUTATIVE DEDEKIND DOMAINS
K. R. Goodearl and J. T. Stafford
Abstract. The following dichotomy is established: A finitely generated, complex
Dedekind domain that is not commutative is a simple ring. Weaker versions of this
dichotomy are proved for Dedekind prime rings and hereditary noetherian prime
rings.
When the classical concept of a Dedekind domain was extended to noncommutative
rings, the natural examples that arose were either classical orders (and hence
finitely generated modules over their centres) or simple rings such as the Weyl
algebra A_1(C). Indeed, among finitely generated Dedekind domains over
algebraically closed fields, classical orders and simple rings are the only known
examples. This dichotomy in the examples suggests that an actual dichotomy might
exist among general Dedekind domains, although we are not aware that any such
conjecture has been formulated in the literature. The main goal in this paper is
to establish just such a result as well as give similar dichotomies for Dedekind
prime rings and HNP rings.
Before stating the first theorem we need some definitions. An HNP ring is simply
a (nonartinian) hereditary noetherian prime ring, while a Dedekind prime ring is
an HNP ring for which each nonzero ideal I is invertible in the sense that there | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/662/3828926.html","timestamp":"2014-04-19T02:13:02Z","content_type":null,"content_length":"8426","record_id":"<urn:uuid:c5d9035f-bae9-47b5-aef8-47ba1553bf4e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the number of blocks required to access the coalition structure core
Béal, Sylvain and Rémila, Eric and Solal, Philippe (2011): On the number of blocks required to access the coalition structure core.
This article shows that, for any transferable utility game in coalitional form with nonempty coalition structure core, the number of steps required to switch from a payoff configuration out of the
coalition structure core to a payoff configuration in the coalition structure core is less than or equal to (n^2 + 4n)/4, where n is the cardinality of the player set. This number considerably improves
the upper bound found so far by Kóczy and Lauwers (2004).
Item Type: MPRA Paper
Original Title: On the number of blocks required to access the coalition structure core
Language: English
Keywords: coalition structure core; excess function; payoff configuration; outsider independent domination.
Subjects: C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C71 - Cooperative Games
Item ID: 29755
Depositing User: Sylvain Béal
Date Deposited: 04. Apr 2011 20:50
Last Modified: 15. Feb 2013 06:24
[1] R. J. Aumann, "Some non-superadditive games, and their Shapley value, in the Talmud", International Journal of Game Theory 39 (2010), pp. 3–10.
[2] S. Béal, E. Rémila and P. Solal, "On the number of blocks required to access the core", MPRA Paper No. 26578, 2010.
[3] D. B. Gillies, "Some theorems on n-person games", Ph.D. dissertation, Princeton University, Department of Mathematics, 1953.
[4] J. Greenberg, "Coalition structures", ch. 37 in Handbook of Game Theory with Economic Applications, vol II, (R.J. Aumann and S. Hart eds.), pp. 1305–1307, Elsevier, Amsterdam,
[5] L. Kóczy, "The core can be accessed with a bounded number of blocks", Journal of Mathematical Economics 43 (2006), pp. 56–64.
[6] L. Kóczy and L. Lauwers, "The coalition structure core is accessible", Games and Economic Behavior 48 (2004), pp. 86–93.
[7] A. Sengupta and K. Sengupta, "Viable proposals", International Economic Review 35 (1994), pp. 347–359.
[8] A. Sengupta and K. Sengupta, "A property of the core", Games and Economic Behavior 12 (1996), pp. 266–273.
[9] L. S. Shapley, "Cores of convex games", International Journal of Game Theory (1971), 1, pp. 11–26.
[10] P. P. Shenoy, "On coalition formation: a game-theoretical approach", International Journal of Game Theory 8 (1979), pp. 133–164.
[11] Y.-Y. Yang, "On the accessibility of the core", Games and Economic Behavior 69 (2010), pp. 194–199.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/29755 | {"url":"http://mpra.ub.uni-muenchen.de/29755/","timestamp":"2014-04-16T22:35:22Z","content_type":null,"content_length":"19745","record_id":"<urn:uuid:77f2451d-d742-427b-881f-5f522bc729ad>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: CS 6890 Homework 2 (20 points)
Written homework provides an excellent framework for achieving the goals of this course. Because
assignments are done as a group and any questions are discussed in class or during office hours,
written solutions to the homework will not be provided. These are typed (or printed clearly) exercises,
but you are certainly encouraged to actually program some of them. Be sure to show your work for all
the problems.
Note, these exercises may be done in groups of one or two (or with instructor approval, three). If
more than one person is involved, list all the names on ONE set of answers. Groups may change
throughout the semester. Answers should not be compared with others not in your group.
1. In the following strategic-form game, what strategies survive iterated elimination of strictly-
dominated strategies? What are the pure-strategy Nash equilibria?
L C R
T 2,0 1,1 4,2
M 3,4 1,2 2,3
B 1,3 0,2 3,0
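A brute-force enumeration is a handy way to check answers to questions like Problem 1. A sketch for this payoff table (each tuple is (row player, column player) payoffs):

```python
from itertools import product

rows, cols = ["T", "M", "B"], ["L", "C", "R"]
u = [[(2, 0), (1, 1), (4, 2)],
     [(3, 4), (1, 2), (2, 3)],
     [(1, 3), (0, 2), (3, 0)]]

def is_pure_nash(r, c):
    # neither player can gain by a unilateral deviation
    no_row_dev = all(u[r2][c][0] <= u[r][c][0] for r2 in range(3))
    no_col_dev = all(u[r][c2][1] <= u[r][c][1] for c2 in range(3))
    return no_row_dev and no_col_dev

equilibria = [(rows[r], cols[c]) for r, c in product(range(3), range(3))
              if is_pure_nash(r, c)]
print(equilibria)  # [('T', 'R'), ('M', 'L')]
```

The same loop structure, applied repeatedly to delete strictly dominated rows and columns, checks an iterated-elimination answer as well.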
2. Agents 1 and 2 are bargaining over how to split a dollar. Each agent simultaneously names the share
they would like to have (s1 and s2), where 0 <= s1 <= 1 and 0 <= s2 <= 1. If s1 + s2 <= 1, then both agents
receive the shares they named; if s1 + s2 > 1, then both agents receive zero. What are the pure-strategy
equilibria of this game?
3. Prove that if strategies (s1,s2,s3...sn) are a Nash equilibrium in a strategic form game, then they | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/451/1560030.html","timestamp":"2014-04-19T09:44:58Z","content_type":null,"content_length":"8528","record_id":"<urn:uuid:e3fc2270-3f72-4926-8fa3-8ef83e41013b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
| {"url":"http://openstudy.com/users/joemath314159/asked","timestamp":"2014-04-18T13:59:50Z","content_type":null,"content_length":"107631","record_id":"<urn:uuid:89a7bd13-be2f-4f7f-b61b-d637d345de83>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hidden Markov Models – Utility functions
class sage.stats.hmm.util.HMM_Util
Bases: object
A class used in order to share cdef’s methods between different files.
initial_probs_to_TimeSeries(pi, normalize)
This function is used internally by the __init__ methods of various Hidden Markov Models.
○ pi – vector, list, or TimeSeries
○ normalize – if True, replace negative entries by 0 and rescale to ensure that the sum of the entries in each row is equal to 1. If the sum of the entries in a row is 0, replace them
all by 1/N.
sage: import sage.stats.hmm.util
sage: u = sage.stats.hmm.util.HMM_Util()
sage: u.initial_probs_to_TimeSeries([0.1,0.2,0.9], True)
[0.0833, 0.1667, 0.7500]
sage: u.initial_probs_to_TimeSeries([0.1,0.2,0.9], False)
[0.1000, 0.2000, 0.9000]
normalize_probability_TimeSeries(T, i, j)
This function is used internally by the Hidden Markov Models code.
Replace entries of T[i:j] in place so that they are all nonnegative and sum to 1. Negative entries are replaced by 0 and T[i:j] is then rescaled to ensure that the sum of the entries in each
row is equal to 1. If all entries are 0, replace them by 1/(j-i).
○ T – a TimeSeries
○ i – nonnegative integer
○ j – nonnegative integer
sage: import sage.stats.hmm.util
sage: T = stats.TimeSeries([.1, .3, .7, .5])
sage: u = sage.stats.hmm.util.HMM_Util()
sage: u.normalize_probability_TimeSeries(T,0,3)
sage: T
[0.0909, 0.2727, 0.6364, 0.5000]
sage: u.normalize_probability_TimeSeries(T,0,4)
sage: T
[0.0606, 0.1818, 0.4242, 0.3333]
sage: abs(T.sum()-1) < 1e-8 # might not exactly equal 1 due to rounding
True
state_matrix_to_TimeSeries(A, N, normalize)
This function is used internally by the __init__ methods of Hidden Markov Models to make a transition matrix from A.
○ A – matrix, list, list of lists, or TimeSeries
○ N – number of states
○ normalize – if True, replace negative entries by 0 and rescale to ensure that the sum of the entries in each row is equal to 1. If the sum of the entries in a row is 0, replace them
all by 1/N.
sage: import sage.stats.hmm.util
sage: u = sage.stats.hmm.util.HMM_Util()
sage: u.state_matrix_to_TimeSeries([[.1,.7],[3/7,4/7]], 2, True)
[0.1250, 0.8750, 0.4286, 0.5714]
sage: u.state_matrix_to_TimeSeries([[.1,.7],[3/7,4/7]], 2, False)
[0.1000, 0.7000, 0.4286, 0.5714] | {"url":"http://sagemath.org/doc/reference/stats/sage/stats/hmm/util.html","timestamp":"2014-04-21T07:03:50Z","content_type":null,"content_length":"18067","record_id":"<urn:uuid:7c9793a2-54a3-4b31-b768-124fb5c45184>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
Evaluate. 3(5-7)^2-6(3)
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/512565c1e4b01e1862060e2e","timestamp":"2014-04-20T16:22:34Z","content_type":null,"content_length":"49823","record_id":"<urn:uuid:4ba371a4-587e-4abb-a282-526e981a2a78>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving Games: Nash Equilibrium
... Perfect ... strategy profile is a subgame-perfect equilibrium of a game G if ... Games with Perfect Information. Games with Uncertain Outcomes involve a new ... – PowerPoint PPT presentation
Number of Views:179
Avg rating:3.0/5.0
Slides: 41
Added by: Anonymous
Transcript and Presenter's Notes | {"url":"http://www.powershow.com/view/2c6ad-NzQ0N/Solving_Games_Nash_Equilibrium_powerpoint_ppt_presentation","timestamp":"2014-04-18T16:42:21Z","content_type":null,"content_length":"115219","record_id":"<urn:uuid:4ffa9601-44a5-4bcc-a120-a11992458b89>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with full rank factorization
I've been tasked with proving the existence of a full rank factorization for an arbitrary m x n matrix, namely:
Let [tex]\textit{A}[/tex] [tex]\in[/tex] [tex]\textbf{R}^{m x n}[/tex] with [tex]\textit{rank(A) = r}[/tex] then there exist matrices [tex]\textit{B}[/tex] [tex]\in[/tex] [tex]\textbf{R}^{m x r}[/
tex] and [tex]\textit{C}[/tex] [tex]\in[/tex] [tex]\textbf{R}^{r x n}[/tex] such that [tex]\textit{A = BC}[/tex]. Furthermore [tex]\textit{rank(A) = rank(B) = r}[/tex].
I think I can prove the second property if I assume the first using [tex]\it{rank(AB)}[/tex] [tex]\leq[/tex] [tex]\it{rank(A)}[/tex] and [tex]\it{rank(AB)}[/tex] [tex]\leq[/tex] [tex]\it{rank(B)}[/
I'd appreciate a push in the right direction. Thanks.
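One standard constructive route (shown here only as a numerical illustration; the SVD construction is one of several, and the function name is invented for this sketch) builds B from the scaled left singular vectors and C from the right singular vectors:

```python
import numpy as np

def full_rank_factorization(A, tol=1e-10):
    # A = U diag(s) Vt; with r nonzero singular values,
    # B = U[:, :r] * s[:r] is m x r and C = Vt[:r, :] is r x n, so A = B C
    U, s, Vt = np.linalg.svd(A)
    r = int((s > tol).sum())  # numerical rank
    B = U[:, :r] * s[:r]
    C = Vt[:r, :]
    return B, C, r

# a rank-2 example: sum of two outer products
rng = np.random.default_rng(1)
A = np.outer(rng.random(4), rng.random(5)) + np.outer(rng.random(4), rng.random(5))
B, C, r = full_rank_factorization(A)
assert r == 2 and np.allclose(A, B @ C)
```

Because the singular vectors are orthonormal, B and C both have rank r here, which is the heart of the existence argument.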
EDIT: I just realized I posted this in the wrong forum. Could a mod move this? My apologies. | {"url":"http://www.physicsforums.com/showthread.php?t=246540","timestamp":"2014-04-19T02:13:16Z","content_type":null,"content_length":"19818","record_id":"<urn:uuid:6cac9c01-45c7-4b4e-9b98-61621009b48a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
Although no formal quantitative comparison is conducted, the velocities obtained for the mass movements using the LSM are in agreement with those obtained in other studies using similar and other
methods. For example, similar velocities are registered for the same section of the Nigardsbreen glacier during the same period [32]. The average and maximum velocities of the Muragl rockglacier are
similar to what is reported in [30] which is validated using different approaches, including ground measurements. The surface velocity data in [30] agrees well with that of borehole data [40]. The
limited surface change during the 13 years period contributes to the success of optical image matching on this rockglacier, and rockglaciers in general. The spatial pattern of the La Clapière
landslide velocity variation is in agreement with that computed by other studies, particularly [3]. The velocity magnitudes show a slight slowdown from earlier records (e.g., [33,41]). The decreased
magnitudes are in agreement with the general observation of the slowdown of the landslide since the year 2000 (http://gravitaire.oca.eu/spip.php?rubrique15). Errors in the orthorectification and the
presence of vegetation on the surface, which lead to radiometric noise, imply the presence of blunders in the image matching.
Realistic values of longitudinal, transverse and shear strain rates together with rotation rate are also obtained. The technique computes strain rates at higher resolution than the conventional
technique of computing them from velocity gradients after the matching. When computing strain rates of a template from the velocity gradients, two neighboring templates are used for each orthogonal
dimension. Therefore, the computed negative total sum of strain rate is in a way averaged over neighboring templates. Additionally, such strain rates are simply measures of velocity changes between
the central pixels of the neighboring templates especially when the NCC is used for the matching. Thus it can appear smoothed even before filtering. Changes in the size and shape of the masses are
not directly computed as is done when using the LSM algorithm as presented here. As a result micro-scale deformations are detected with LSM.
The spatial patterns of the strain rates and elevation changes of the Muragl rockglacier, previously computed from velocity gradients by [8], agree with those of the present study. The negative total
of the horizontal strain rates of the present study (Figure 7) agrees with Figure 3 of [8]. Notice that the symbols in the present study and [8] are inverses of each other and scaled differently.
Areas of vertical compression (horizontal extension) in our study correspond with negative elevation changes presented in Figure 2 of [8], indicating dynamic loss of mass from the vertically
compressed areas. For the Nigardsbreen glacier, the negative sum of the horizontal strain rates shows that the glacier is dominantly extended vertically, which is in agreement with the thickening of
the glacier as reported in [32], and with the general ice compression in glacier ablation areas. High shear strain rate is registered at the margins of flow, especially where the moving glacier
borders with stable ground. The deformation maps of the La Clapière landslide require cautious interpretation as more sources of error can influence the reliability compared to the other two mass
movement types as discussed in the following section. Specifically, propagated orthorectification error and surface changes such as vegetation cover contribute to major blunders in the strain rate
data. Computation of strain rates on its stable ground shows that the computed strain rates are not outside the error margin, i.e., not statistically significant.
The results of the study show that the LSM computes horizontal displacements in Earth surface mass movements with significantly higher precision (level of detail of measurement) and accuracy
(truthiness of the estimated values) compared to the NCC. The mean precision of the LSM algorithm in locating the matching position is found to be between 0.06 and 0.15 pixel; whereas, the matching
precision of NCC without sub-pixel extensions is generally ±0.5 pixel. In addition to the precision of matching, the accuracy of the computed displacements is also higher when computed using LSM than
using NCC as evaluated on test images and stable grounds of the bi-temporal mass movement images. The better performance of the LSM is in agreement with theoretical claims and earlier findings in
photogrammetry on image pairs of shorter temporal baselines [20,21,42].
When computing deformation and displacement of mass movements from repeat images using a precise algorithm such as LSM, the sources of error are basically related to either the image (noise,
orthorectification and co-registration) or the ground itself (deformation and temporal surface changes). Both major error sources can technically be grouped into three, i.e., geometric errors,
radiometric errors and propagated sensor or processing errors. The possible sources that cause geometric error are: (1) the formation of crevasses or micro-topography, (2) the boundary effect where
the velocity gradient between the moving body and the stable ground is so large that it creates outlying deformation parameter values, (3) vegetation cover can also create geometric change that is
not actually ground deformation in addition to its contribution to intensity noise. Signal- (in fact the SNR-) related causes include the presence of shadows, surface changes, illumination
differences, presence of dirt, vegetation cover, etc., that can lead to false and outlying convergence parameter values in the least squares iteration. Propagated errors can be attributed to the
sensor or image preprocessing, such as orthorectification and co-registration. If the images are not perfectly orthorectified and co-registered, the geometric adjustment includes the mass
deformation, sensor projective distortion and change of geometry between the two images [17,43].
Due to the high resolution of strain computation, the maps of the strain rates look noisy when visually observed, suggesting the application of noise filters. However, in the case of the strain rate
maps, it might well be that high-resolution deformations actually are somewhat noisy due to real local deformation of the masses, such as from crevasse formation, or due to the error sources
mentioned above. Filtering would lead to smoothing of the map. In so doing it affects both the realistic values and the blunders. The use of larger template sizes also leads to a more smoothed strain
rate map. Recall that the criteria for the right template size in the NCC is the presence of adequate SNR and constant displacement within the template. In the LSM, constant displacement is not
anymore a criteria but rather a constant displacement gradient, at least for the affine model. Thus, for very large templates, as the parameters of the transformation model are forced to be constant
within the template, the computed strain rates visually look like as if they are filtered. Such smoothed or filtered strain rate map may be sufficient or even wanted for some geoscientific
applications such as numerical models. However, the detailed variability may be needed for other current and future applications, and provide new insights into the mechanics of mass movements.
A better approach towards removing noises than filtering or the use of much larger template sizes would therefore be further restricting the least squares iteration process. Pixel-based constraining
such as data snooping or template-based constraining such as raising convergence precision can be used [18,21]. This would affect only the highly noisy (and maybe the highly deformed) templates,
leaving the well-converging templates unaffected. An initial test shows that increasing the demanded precision of the parameter adjustment during the LSM iteration discards more templates, resulting
in more data voids. However, the strong spatial variability is not substantially smoothed, implying that it stems from real strain rate variations. | {"url":"http://www.mdpi.com/2072-4292/4/1/43/xml","timestamp":"2014-04-19T04:45:37Z","content_type":null,"content_length":"125778","record_id":"<urn:uuid:bfee1e2f-0162-467e-affb-6b623cceb1f3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving Problems Involving Inequalities
September 22nd 2008, 07:39 PM
Solving Problems Involving Inequalities
(Headbang) I don't know what it is about this prob. but every time I read it I get all crossed and confused and can't fig. it out :(
"FOR EACH OF THE FOLLOWING:
a. CHOOSE A VARIABLE TO REPRESENT THE NUMBER INDICATED IN PARENTHESES.
b. USE THE VARIABLE TO WRITE AN INEQUALITY BASED ON THE GIVEN INFORMATION
(DO NOT SOLVE.)"
A coin bank containing only nickels, dimes, and quarters has twice as many nickels as dimes and one third as many quarters as nickels. The total value of the coins does not exceed $2.80. (the
number of dimes)
September 22nd 2008, 08:19 PM
mr fantastic
Let $x$ be the number of dimes (so there are $2x$ nickels and $\frac{2}{3}x$ quarters). Then, in cents: $(10)(x) + (5)(2x) + (25) \left(\frac{2}{3} x \right) \leq 280$. | {"url":"http://mathhelpforum.com/algebra/50202-solving-problems-involving-inequalities-print.html","timestamp":"2014-04-20T14:42:08Z","content_type":null,"content_length":"5356","record_id":"<urn:uuid:8941100b-e4be-44ba-9b2e-767a994570b3>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
Milmont Park, PA Math Tutor
Find a Milmont Park, PA Math Tutor
...Each teacher received special training on how to aid students with a variety of differences, including ADD and ADHD. Both there and since, I have worked with several students with ADD and ADHD, both in their math content areas and with executive skills to help them succeed in all areas of their life. I have tutored test taking for many tests, including the Praxis many times.
58 Subjects: including calculus, differential equations, biology, algebra 2
...I love kids and can handle very well the formative years. I am available on-line or off-line. I graduated in health sciences from Stanley Medical College in 1985 with kudos.
8 Subjects: including prealgebra, algebra 1, reading, elementary math
...Using a Canon DSLR, the main parameters to control for a shot are the aperture, shutter speed and ISO. I am a former certified personal trainer, with thorough knowledge in anatomy and massage
therapy. I have experience teaching group exercise, yoga and qi gong principles.
35 Subjects: including prealgebra, biology, calculus, elementary (k-6th)
I am a certified elementary teacher who has also been teaching Chinese for more than five years. I am currently teaching in a public high school and I also teach heritage language programs on weekends. My students come from different backgrounds, abilities and ages, and I can tailor the lesson to meet your language learning goals.
7 Subjects: including trigonometry, ESL/ESOL, algebra 1, algebra 2
...I received my BA from Gettysburg College in Health Sciences and I will be receiving my Masters of Science from Drexel University College of Medicine in May 2012. I have worked with Big
Brothers/Big Sisters as a volunteer who worked with elementary school children to assist them with homework and provide socialization activities. I have also worked as a tutor for high school
13 Subjects: including algebra 1, anatomy, prealgebra, biology
| {"url":"http://www.purplemath.com/Milmont_Park_PA_Math_tutors.php","timestamp":"2014-04-17T07:27:29Z","content_type":null,"content_length":"24175","record_id":"<urn:uuid:ac688efd-a3ca-4ebe-b8f9-ee018fe0a790>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: July 2005 [00347]
Diagonalizing a non-Hermitian Matrix
• To: mathgroup at smc.vnet.net
• Subject: [mg58741] Diagonalizing a non-Hermitian Matrix
• From: "Ituran" <isturan at gmail.com>
• Date: Sun, 17 Jul 2005 03:03:56 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
I have the following problem. There is a given non-Hermitian matrix M.
Let's take an example in 2x2 dimension M={{1,2+4*I},{2-5*I,1}}.
To diagonalize it we need 2x2 unitary matrices U and V such that
U.M.Vdag = Md =Diag(3.8304,6.0273).
Here Vdag = Transpose[Conjugate[V]]. To find U and V,
we can work with the absolute square of the above eq, i.e,
V.(Mdag.M).Vdag = U.(M.Mdag).Udag = M_D^2 = Diag(14.672,36.328).
Again Mdag = Transpose[Conjugate[M]] and similarly for Udag.
To find V, I can write the equation
for the ith row of V;
V_ij(Mdag.M)_jk = (M_D^2)_ii V_ik (no sum over i), k=1,2 in this case.
(Mdag.M)_11 V_i1 + (Mdag.M)_21 V_i2 = (M_D^2)_ii V_i1, k=1
(Mdag.M)_12 V_i1 + (Mdag.M)_22 V_i2 = (M_D^2)_ii V_i2, k=2
Thus, to find the elements of V in the ith row(for both i=1 and 2)
{valV,vecV} = Eigensystem[Transpose[Mdag.M]]
where V = vecV (No transpose!).
similarly for U
Of course, both vecV[[]] and vecU[[]] need to be normalized. I found
V = {{-0.21954 + 0.49397*I, 0.84130}, {0.34169 - 0.76879*I, 0.54056}}
U = {{-0.34169 + 0.76879*I, 0.54056}, {0.21954 - 0.49397*I, 0.84130}}
and they perfectly satisfy
V.(Mdag.M).Vdag = U.(M.Mdag).Udag = Diag(14.672,36.328).
However, when I check U.M.Vdag, I am getting
U.M.Vdag = Diag(-3.8250 + 0.2031*I,6.0240 - 0.2031*I)
where the absolute values of the entries are equal to the corresponding eigenvalues. The minus sign in front of 3.8250 is not a problem since U and V are not unique (they contain some arbitrary phases which can be used to get Md with nonnegative entries).
So, the problem here is the existence of imaginary parts and
I have no idea why I am getting such an answer. For example, for a
Hermitian M (take 2-4*I instead of 2-5*I in (2,1) element of M),
there is no such problem. What am I missing? Any idea?
I am using Mathematica 4.0 in Windows 2K.
Thanks a lot in advance,
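The imaginary parts appear because the eigenvectors of Mdag.M and M.Mdag are each only determined up to an arbitrary phase, so building U and V independently leaves an uncontrolled relative phase in U.M.Vdag. A singular value decomposition fixes the phases consistently and returns a real, nonnegative diagonal. As a quick check of the singular values quoted in the post, here is a standard-library Python sketch (Python and the helper names are illustrative, not from the original post):

```python
# Singular values of the 2x2 matrix from the post, computed from the
# eigenvalues of Mdag.M (standard library only; numpy.linalg.svd would
# handle all of this, including the U and V phases, in one call).
M = [[1 + 0j, 2 + 4j],
     [2 - 5j, 1 + 0j]]

def dagger(A):
    # conjugate transpose of a 2x2 matrix
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = matmul(dagger(M), M)          # Hermitian, positive semi-definite
tr = (H[0][0] + H[1][1]).real     # trace is real for a Hermitian matrix: 51
det = (H[0][0] * H[1][1] - H[0][1] * H[1][0]).real  # 533
disc = (tr * tr - 4 * det) ** 0.5
eigs = [(tr - disc) / 2, (tr + disc) / 2]       # ~[14.672, 36.328]
singular_values = [e ** 0.5 for e in eigs]      # ~[3.8304, 6.0273]
```

With numpy, `np.linalg.svd(M)` returns U, s, Vh such that M = U @ diag(s) @ Vh; then U.conj().T @ M @ Vh.conj().T is diagonal with exactly the real, nonnegative entries above, with no stray phases.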
| {"url":"http://forums.wolfram.com/mathgroup/archive/2005/Jul/msg00347.html","timestamp":"2014-04-20T11:16:02Z","content_type":null,"content_length":"36253","record_id":"<urn:uuid:8d7e9d9c-d2c7-44a0-bdd7-9440bbdb3a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
the loan amount $750.00 @ 6% rate for 135 days
• 1. The loan amount is $750.00 at a 6% rate for 135 days. What would the interest be?
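Under the ordinary interest convention these questions use (a 360-day year), the interest is principal x rate x days/360. A quick illustrative sketch of that formula (my own example, not from the original page):

```python
def ordinary_interest(principal, annual_rate, days):
    # Simple interest using the ordinary (banker's) 360-day year
    return principal * annual_rate * days / 360

interest = ordinary_interest(750.00, 0.06, 135)
# 750 x 0.06 x 135/360 = 16.875, i.e. $16.88 to the nearest cent
```

The same formula cross-checks the $6,000 loan questions: at 8.42% for 150 days it returns roughly the quoted $210.50.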
• 2. The total amount of interest on a loan of $6,000 for 150 days is $210.50. Using the ordinary interest method, what is the rate on this loan, rounded to the nearest hundredth?
• 3. The total amount of interest on a loan of $6,000 for 150 days is $210.50. Using the ordinary interest method, what is the rate of interest on this loan? Round to the nearest hundredth.
• 4. The total amount of interest on a loan of $6,000 for 150 days is $210.50. Using the ordinary interest (360-day) method, what is the rate of interest on the loan? Round off the answer to the nearest hundredth; I think it is 8.42%. Darrell owns a consulting business and has an estimated annual income of $63,000. His Social Security tax is 12.4%, Medicare is 2.9%, and his estimated federal income tax rate is 22%. How much quarterly estimated tax must Darrell send to the IRS for the first quarter? I have $5,874.75. Could someone check these to see if I am right?
• 5. The total amount of interest on this loan of $6,000 for 150 days is $210.50. What is the rate of interest on this loan? If not compounded (i.e. simple interest), then Interest = Principal x (rate of interest) x time. Here time is 5/12 of a year; you are given the interest and principal, so figure out the rate of interest.
• 6. Loan #1: Year 1, $3,796 owed; Year 2, $3,942; Year 3, $4,088. Loan #2: Year 1, $977.53 owed; Year 2, $1,036.18; Year 3, $1,098.35. Loan #1 is simple interest; Loan #2 is compound interest. How much was each loan originally? Determine the future value of each loan after 10 years.
• 7. Jill Ley took out a loan to pay for her child's education for $60,000. The loan would be repaid at the end of 8 years in one payment with an interest of 6 percent. What is the total amount Jill has to pay back at the end of the loan?
• 8. A $9,000 loan is to be repaid in three equal payments occurring 60, 180, and 300 days, respectively, after the date of the loan. Calculate the size of these payments if the interest rate on the loan is 7 1/4%. Use the loan date as the focal date.
• 9. Calculate the amount of interest on a loan of $3,200 at 6% interest for 60 days using the ordinary interest method. A. $3.16 B. $31.56 C. $32.00 D. $384.00. 3,200 x 6% x (60/360) = 32.00. Is this correct?
• 10. When a bank borrows $550,000 from the Fed and receives a short-term adjustment credit for 3 days, with the loan promised to be paid on the 6th day, how much would total reserves rise with this loan?
• 11. You'd also need to know the interest rate, the amount of the monthly payments, and how long you'll have to pay off the loan. The shorter the period of the loan, the less you'll pay in interest, but the monthly payments will be higher. What factors are the most important when considering a loan for purchasing a new car? For starters, I know that it's the individual's credit: if one has good credit they don't have to pay as high interest rates; if one has bad credit the interest rates are very high. I don't know what else would be important.
• 12. A $1,500 personal loan; the bank is going to charge a fee of 2% of the loan amount as well as take out the interest upfront. The bank is offering 15% APR for six months. Calculate the effective interest | {"url":"http://homework.boodom.com/q84701-s-the_loan_amount_$750_00_@_6_rate_for_135_days","timestamp":"2014-04-16T16:03:29Z","content_type":null,"content_length":"21354","record_id":"<urn:uuid:bbadce19-701e-41a6-bd5e-f4ffa88f410b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
An example Stand Alone Solar Design
Stand Alone (or Off Grid) Solar Design is a very large and complex area. It is also the one area of solar design that most people try to do themselves, so I thought I might go through the design process step by step to help anybody who may be having difficulties.
I thought the best way to tackle this is to approach it in the same way I approach the system design:
1. Estimate Energy Requirements.
2. Size a battery bank.
3. Size a solar array.
But before we dig into the nuts and bolts, let's look at a block diagram showing the key components of a Stand Alone Solar power supply.
Based on the diagram above we can see the following components.
1. The electrical appliances that are using the electricity.
2. The Inverter. The inverter has the job of converting DC electricity to AC. Some inverters are smart enough to run a generator, charge batteries, etc., but the main role of this piece of equipment is to convert the DC energy stored in a battery bank to the AC electricity that gets used in a house.
3. The Battery Charger. The charger has the job of taking the AC electricity created by a generator (or it could be from the mains power) and storing that energy in the batteries. In the systems we design using Outback equipment the battery charger and the inverter are the same piece of equipment, hence the name Inverter/Charger.
4. The Solar Charge Regulator. The regulator has the job of taking the electricity generated by an array of solar panels and storing it in the batteries. This seems like a simple task, but there are many challenges in the process. All our systems use the Outback FM60/80 charge regulators.
5. The Solar Panels. The solar panels are fairly self-explanatory; this is where the renewable energy comes from.
1. Estimating Energy Requirements
Before you can do anything you need to understand how much energy is required each day, and how quickly that energy needs to be delivered. I always work in kWh (kilowatt Hours) for the amount of
energy needed and kW (kilowatts) for the rate at which the energy needs to be delivered.
LOAD: this refers to something that uses electricity. For example, a toaster is an electrical load, and quite a big one, drawing close to 10A at full swing. An iPhone charger is also a load, a small one.
Lets use an example:
In my holiday cabin I want to run the following electrical loads.
| Load | kWh | kW |
|---|---|---|
| 2 x 100W incandescent light bulbs, run between 6pm and 11:30pm every day. 2 x 100W = 200W = 0.2kW; 0.2kW x 5.5hrs = 1.1kWh | 1.1 | 0.2 |
| 1 x electric jug. I am a big coffee fan so I would boil the jug at least 5 times a day. I tend to boil only enough water for the coffee I am about to drink, so it only takes 60 seconds to bring the water to boil. 1 x 1000W = 1kW; 0.08hr (5min) x 1kW = 0.08kWh | 0.08 | 1 |
| 1 x 300W plasma TV. I know plasma TVs are really inefficient but I don't want to get rid of this one. I would normally watch about 5hrs of TV in the evenings because I like to have it on as background noise while I am working. 1 x 300W = 0.3kW; 5hrs x 0.3kW = 1.5kWh | 1.5 | 0.3 |
| TOTALS | 2.68 | 1.5 |
So for the purposes of our example I am going to assume all of the electrical loads can and probably will be running at the same time. So this means my Off Grid Solar system needs to be able to
supply 2.7kWh (round up from 2.68) worth of energy during the day and it needs to be able to supply a peak load of 1.5kW.
Our key design parameters:
Daily Energy Requirement: 2.7kWh
Peak Demand: 1.5kW
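The audit above is easy to script. Here is a minimal sketch of the same arithmetic (figures taken from the table, plus the worst-case assumption from the text that every load can run at once):

```python
# (load, watts, hours of use per day) - figures from the cabin example
loads = [
    ("2 x 100W light bulbs", 200, 5.5),
    ("1000W electric jug",  1000, 5 / 60),  # five 60-second boils
    ("300W plasma TV",       300, 5.0),
]

daily_kwh = sum(watts * hours for _, watts, hours in loads) / 1000
peak_kw = sum(watts for _, watts, _ in loads) / 1000  # everything on at once

print(f"Daily energy: {daily_kwh:.2f} kWh")  # ~2.68 kWh
print(f"Peak demand:  {peak_kw:.1f} kW")     # 1.5 kW
```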
2. Size a Battery Bank
OK.. this topic is not for the faint hearted, but I will do my best to keep it simple...
Based on our estimated energy requirements we will use 2.7kWh of energy each day (and we can assume its all going to be used at night).
Based on the diagram above we can see the inverter has the job of converting stored battery energy to AC electricity. Like any conversion of power from one form to another, this cannot be done without some sort of loss. In the case of the Outbacks it is safe to assume a loss of 10%. There are many inverters that claim a better than 90% conversion rate, but I have not seen better than 90% efficiency, so don't always believe the marketing material.
Back to the battery sizing. If the Inverter needs to deliver 2.7kWh of energy to the loads it must have used 3kWh of battery power...
Quick Check... 3kWh in. 10% lost = 0.3kWh, so total energy out is 3kWh - 0.3kWh = 2.7kWh.
OK, at this point we know we will be drawing 3kWh of energy from the batteries every day, but before we can continue let's look at a couple of concepts. The first is Days of Autonomy, the second is Daily Depth of Discharge (DoD).
Days of Autonomy
We know from our previous calculations, we need 3kWh every day from our batteries.. But what happens if we get 3 days of rain.. how much energy do we need stored in our batteries... The answer is
simple (kind of), we need 3 days x 3kWh = 9kWh. But what happens if we get 5 days of rain and misery? then we will need 5 days x 3kWh = 15kWh.
The accepted practice is to provide enough stored energy for 3 days of rain and misery; if there is no sunshine at all after 3 days then a generator or some other source of power needs to be found.
In some climates 3 days is not enough, yet in others 3 days is too much, however the general rule is 3 days, and that is what we are going to base our calculations on.
So based on 3 days of autonomy we will need a battery bank that can store 3 days x 3kWh = 9kWh of energy.
Daily Depth of Discharge
This normally takes a bit to get your head around: in the world of solar batteries we talk about how much energy has been taken, not how much energy is remaining. The Depth of Discharge refers to what percentage of the battery's stored energy has already been taken. For example, if a battery is capable of storing 30kWh of energy and I have been told the Depth of Discharge (DoD) is 10%, I know that 3kWh has already been taken from the battery.
For standalone solar systems the Daily Depth of Discharge is an important metric; it tells us what percentage of the battery's available capacity the system was designed to use each day.
Let's assume we are using Gel Cell batteries. I know that 100% of the C100 rated energy can be extracted from the battery without damaging it, so at the end of 3 days of no sun the batteries can be safely discharged to 100%. Spread over 3 days, that means I can take roughly 30% of the battery's capacity each day, so the Daily Depth of Discharge will be 30%.
The above calculation is different for flooded cell solar batteries because if you discharge them past the 80% mark you will damage the batteries.
So what does all of this mean... well, it means our daily energy requirement (3kWh) can only represent 30% of the total battery capacity.
So, as before, if we are using Gel Cells we need a battery bank that can store 9kWh of energy; if we had chosen flooded cell batteries, our battery bank would need to be a lot bigger, because our 3kWh could only represent about 25% of the battery's capacity.
OK, to summarise so far: we have to be able to deliver 2.7kWh of electrical energy to my holiday cabin; because of the inverter efficiency we need to draw 3kWh from our batteries each day. We have planned to allow 3 days of autonomy in our energy storage, and we are going to use Gel Cell batteries, so our total battery capacity needs to be 9kWh.
Now, you are probably scratching your head wondering why you can't find a kWh rating on any batteries; that's because there isn't one. Batteries have a voltage and Amp Hour (Ah) rating. I'm not going to go into the details of that in this discussion; if you want more information, read The Ins and Outs of Batteries for Solar Systems.
Battery Amp Hours (Ah) can be converted to kWh by multiplying Volts by Amp Hours. For example if my battery bank is a 24V battery bank and my batteries are 200Ah batteries then there is 24Volts x
200Ah = 4,800Wh = 4.8kWh of energy storage.
So back to my holiday cabin. I personally do not like 12V battery systems because there is too much current draw; I prefer to design 24V and 48V battery banks. For my holiday cabin let's assume we will use a 24V battery bank. So the required Amp Hour capacity of my batteries is (9kWh x 1000) / 24V = 375Ah.
So the conclusion to this very long process is I need a 24V, 375Ah battery bank.
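The whole battery-sizing chain can be condensed into a few lines. A sketch using this example's figures (the 100% usable fraction is the gel-cell assumption from above):

```python
daily_load_kwh   = 2.7   # AC energy delivered to the cabin each day
inverter_eff     = 0.90  # ~10% DC-to-AC conversion loss
days_of_autonomy = 3     # days of "rain and misery" to ride through
usable_fraction  = 1.0   # gel cells: 100% of C100 capacity is usable
bank_voltage     = 24    # chosen system voltage

battery_kwh_per_day = daily_load_kwh / inverter_eff                  # 3.0 kWh
bank_kwh = battery_kwh_per_day * days_of_autonomy / usable_fraction  # 9.0 kWh
bank_ah = bank_kwh * 1000 / bank_voltage                             # 375 Ah

print(f"Required bank: {bank_voltage}V @ {bank_ah:.0f}Ah "
      f"({bank_kwh:.1f} kWh stored)")
```

Swapping in flooded cells (usable_fraction = 0.8) pushes the same numbers to an 11.25kWh, roughly 469Ah bank, which is why the chemistry choice matters so much.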
Just a few notes: from an engineering standpoint it is very poor form not to build in some safety margin. The above battery bank (24V @ 375Ah) is the absolute minimum required; in reality I would want to add at least another 20% to the 375Ah to ensure the system will function as expected.
Another reason why we try to take only 30% out of the batteries is that it extends their life. The life of a battery is extended the less you discharge it. For example, if you discharged your Gel Cells to 100% each day you would only be able to do that about 1400 times (assuming one discharge each day); that's only 4 years of life. If you discharge them only 30% of the way down you will be able to do that well over 3500 times, which gives a battery life of 10+ years.
The other important note is that most people's battery capacity is dictated by their wallet, not by what is required. Having said this, it is still important to know what the actual solar battery capacity should be to support the loads; then you can work back from there. There is a lot that can be done to reduce the battery size and still get the job done, but you need to know what the storage discrepancy is before starting down this road.
3. Size a Solar Array
Before you begin on the solar array take a look at the info on estimating solar yields ( Estimating Solar System Yields ), particularly the part on Peak Sunshine Hours (PSH).
In summer you will typically see around 5.5 Peak Sunshine Hours (PSH), and in winter you will get roughly 3 PSH (remember Peak Sunshine Hours are the number of hours of sun assuming every hour is
1000W/m2 of solar power, so you may have 8-10 hours of actual daylight but those hours can be compressed into 5.5hrs during summer and 3 hours during winter).
As a general rule of thumb we always size our solar systems to function on the worst possible days of the year, knowing that they will easily function during the best. So in our case we want to use the figure of 3 PSH.
Remember in the last section we decided we needed a battery bank of 24V @ 375Ah; this equates to 24 x 375 = 9kWh of stored energy, of which we are designing the system to use only 30% each day, i.e. 3kWh. So, very simply, we need a solar array that will store 3kWh of energy into our batteries each day; the calculation is simple: 3kWh / 3 PSH = 1kW. So we need a 1kW solar system.
Unfortunately life is not that straightforward. We need a solar array that can store 3kWh of energy into the batteries, but let's look at the steps needed to store that energy (remember every step will represent a loss of energy).
The diagram above shows the typical charging losses through the system.
• Solar Loss (15%): a solar panel will lose (derate) up to 20% of its power due to heat, dirt, etc. Because we are designing for winter (the worst solar days) we will assume a derating of 15%.
• Cable Loss (1%): if the solar system is not designed properly, the power lost transmitting current through the cables can be significant; for our design we will assume a 1% loss over the entire run of cables.
• Charge Regulator (5%): every time power is converted from one form to another there are losses, and the charge regulator is no exception; a loss of 5% is a conservative design figure.
• Battery Charging Loss (15%): the technical term for this is Watt Hour efficiency; it refers to how much energy needs to be put into the batteries in order to get a certain amount out, and, you guessed it, converting electrical energy to chemical energy involves losses. A typical figure for batteries is a 15% loss.
The above losses multiply rather than add: only 0.85 x 0.99 x 0.95 x 0.85 = 0.68 (68%) of the panel output actually reaches the batteries, a combined loss of roughly 32%. Previously we mentioned we would need 1kW of solar in order to put 3kWh into the batteries; allowing for the charging losses we actually need about 1kW / 0.68 = 1.47kW of solar panels to charge our batteries.
As you can see, accurate solar design can be a bit of a challenge, but the key to the whole thing is to accurately predict what kind of loads you are going to run; unfortunately most people will wildly underestimate the amount of energy they are going to need... so be realistic. In my holiday cottage example we only wanted to run 2 lights, a TV and a kettle.
The other important thing to finish on is that all of the above is a generalised view of how things run; for example, I have sized enough solar panels just to charge the batteries, but what if I want to do some washing during the day? The solar array would then not only have to charge the batteries, but also run 2 loads of washing during the day. Every system is a little bit different and careful consideration needs to be given to each of the components. | {"url":"http://www.letitgo.com.au/stand-alone-off-grid-solar/11--a-example-stand-alone-solar-design.html","timestamp":"2014-04-18T15:38:31Z","content_type":null,"content_length":"35240","record_id":"<urn:uuid:93e8ca89-9f5d-49ac-8d20-f53741ada2bf>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00183-ip-10-147-4-33.ec2.internal.warc.gz"}