content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Luck and Skill Untangled: The Science of Success | Science Blogs | WIRED
The world around us is a capricious and often difficult place. But as we have developed our mathematical tools with increased sophistication, we have in turn improved our ability to understand the
world around us.
And one of the seemingly simple places where this occurs is in the relationship between luck and skill. We have little trouble recognizing that a chess grandmaster’s victory over a novice is skill,
as well as assuming that Paul the octopus’s ability to predict World Cup games is due to chance. But what about everything else?
Michael Mauboussin is Chief Investment Strategist at Legg Mason Capital Management who thinks deeply about the ideas that affect the world of investing and business. His previous books have explored
everything from psychological biases and how we think to the science of complex systems. In his newest book The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing he
tackles the problem of understanding skill and luck. It is a delightful read that doesn’t shy away from the complexity, and thrill, of understanding how luck and skill combine together in our
everyday experience.
Mauboussin, a friend of mine (and the father of one of my collaborators), was kind enough to do a Q&A via e-mail.
Samuel Arbesman: First of all, skill and luck are slippery things. In the beginning of the book, you work to provide operational definitions of these two features of life. How would you define them?
Michael Mauboussin: This is a really important place to start, because the issue of luck in particular spills into the realm of philosophy very quickly. So I tried to use some practical definitions
that would be sufficient to allow us to make better predictions. I took the definition of skill right out of the dictionary, which defines it as “the ability to use one’s knowledge effectively and
readily in execution or performance.” It basically says you know how to do something and can do it when called on. Obvious examples would be musicians or athletes — come concert or game time, they
are ready to perform.
Luck is trickier. I like to think of luck as having three features. First, it happens to a group or an individual. Second, it can be good or bad. I don’t mean to imply that it’s symmetrically good
and bad, but rather that it does have both flavors. Finally, luck plays a role when it is reasonable to believe that something else may have happened.
People often use the terms luck and randomness interchangeably. I like to think of randomness operating at a system level and luck at an individual level. If I gather 100 people and ask them to call
coin tosses, randomness tells me that a handful may call five correctly in a row. If you happen to be one of those five, you’re lucky.
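(A quick check of that "handful," using only the setup just described: the chance of calling five fair coin tosses correctly is 1/2^5 = 1/32, so among 100 people you expect roughly 100/32, or about 3, perfect callers purely by chance.)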
Arbesman: Skill and luck are very important in the world of investing. And the many sports examples in your book make the reader feel that you’re quite the sports fan. But how did the idea for this
book come about? Was there any specific moment that spurred you to write it?
Mauboussin: This topic lies at the intersection of a lot of my interests. First, I have always loved sports both as a participant and fan. I, like a lot of other people, was taken with the story
Michael Lewis told in Moneyball – how the Oakland A’s used statistics to better understand performance on the field. And when you spend some time with statistics for athletes, you realize quickly
that luck plays a bigger role in some measures than others. For example, the A’s recognized that on-base percentage is a more reliable indicator of skill than batting average is, and they also noted
that the discrepancy was not reflected in the market price of players. That created an opportunity to build a competitive team on the cheap.
Second, it is really hard to be in the investment business and not think about luck. Burt Malkiel’s bestselling book, A Random Walk Down Wall Street, pretty much sums it up. Now it turns out that
markets are not actually random walks, but it takes some sophistication to distinguish between actual market behavior and randomness.
Third, I wrote a chapter on luck and skill in my prior book, Think Twice, and felt that I hadn’t given the topic a proper treatment. So I knew that there was a lot more to say and do.
Finally, this topic attracted me because it spans across a lot of disciplines. While there are pockets of really good analysis in different fields, I hadn’t really seen a comprehensive treatment of
skill and luck. I’ll also mention that I wanted this book to be very practical: I’m not interested in just telling you that there’s a lot of luck out there; I am interested in helping you figure out
how and why you can deal with it to make better decisions.
Arbesman: You show a ranking of several sports on a continuum between pure luck and pure skill, with basketball the most skillful and hockey the closest to the luck end:
And the ranking is not entirely obvious, as you note that you queried a number of your colleagues and many were individually quite off. (I in fact remember you asking me about this and getting it
wrong.) How did you arrive at this ranking and what are the structural differences in these sports that might account for these differences?
Mauboussin: I think this is a cool analysis. I learned from Tom Tango, a respected sabermetrician, and in statistics it’s called “true score theory.” It can be expressed with a simple equation:
Observed outcome = skill + luck
Here’s the intuition behind it. Say you take a test in math. You’ll get a grade that reflects your true skill — how much of the material you actually know — plus some error that reflects the
questions the teacher put on the test. Some days you do better than your skill because the teacher happens to test you only on the material you studied. And some days you do worse than your skill
because the teacher happened to include problems you didn’t study. So your grade will reflect your true skill plus some luck.
Of course, we know one of the terms of our equation — the observed outcome — and we can estimate luck. Estimating luck for a sports team is pretty simple. You assume that each game the team plays is
settled by a coin toss. The distribution of win-loss records of the teams in the league follows a binomial distribution. So with these two terms pinned down, we can estimate skill and the relative
contribution of skill.
To be more technical, we look at the variance of these terms, but the intuition is that you subtract luck from what happened and are left with skill. This, in turn, lets you assess the relative
contribution of the two.
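To make the variance arithmetic concrete, here is a small, self-contained simulation sketch (the team count, season length, and skill spread are invented for illustration; they are not numbers from the book). Each game blends a fixed skill level with a coin-toss element; the script then compares the spread of observed win percentages with the spread a pure coin-toss league would produce, and treats the difference as the skill contribution.

import random

def simulate_league(n_teams=30, n_games=82, luck_weight=0.5, seed=1):
    """One season in which each game mixes skill with a coin toss."""
    random.seed(seed)
    # Hypothetical "true skill": a team's win probability in a game decided by skill alone.
    skills = [random.uniform(0.3, 0.7) for _ in range(n_teams)]
    win_pcts = []
    for s in skills:
        wins = 0
        for _ in range(n_games):
            # With probability luck_weight the game is a pure coin toss;
            # otherwise the team wins with probability equal to its skill.
            p = 0.5 if random.random() < luck_weight else s
            wins += random.random() < p
        win_pcts.append(wins / n_games)
    return win_pcts

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n_games = 82
observed = simulate_league(n_games=n_games)
var_observed = variance(observed)
var_luck = 0.25 / n_games          # binomial variance of a coin-toss win percentage
var_skill = max(var_observed - var_luck, 0.0)
print("variance of observed records:", round(var_observed, 4))
print("variance from luck alone:    ", round(var_luck, 4))
print("estimated skill share:       ", round(var_skill / var_observed, 2))

Lowering luck_weight pushes the estimated skill share toward one; raising it pushes the league toward the coin-toss benchmark, which is the sense in which "observed outcome = skill + luck" is read as a statement about variances.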
Some aspects of the ranking make sense, and others are not as obvious. For instance, if a game is played one on one, such as tennis, and the match is sufficiently long, you can be pretty sure that
the better player will win. As you add players, the role of luck generally rises because the number of interactions rises sharply.
There are three aspects I will emphasize. The first is related to the number of players. But it’s not just the number of players, it’s who gets to control the game. Take basketball and hockey as
examples. Hockey has six players on the ice at a time while basketball has five players on the court, seemingly similar. But great basketball players are in for most, if not all, of the game. And you
can give the ball to LeBron James every time down the floor. So skillful players can make a huge difference. By contrast, in hockey the best players are on the ice only a little more than one-third
of the time, and they can’t effectively control the puck.
In baseball, too, the best hitters only come to the plate a little more frequently than one in nine times. Soccer and American football also have a similar number of players active at any time, but
the quarterback takes almost all of the snaps for a football team. So if the action filters through a skill player, it has an effect on the dynamics.
The second aspect is sample size. As you learn early on in statistics class, small samples have larger variances than larger samples of the same system. For instance, the variance in the ratio of
girls to boys born at a hospital that delivers only a few babies a day will be much higher than the variance in a hospital that delivers hundreds a day. As larger sample sizes tend to weed out the
influence of luck, they indicate skill more accurately. In sports, I looked at the number of possessions in a college basketball game versus a college lacrosse game. Although lacrosse games are
longer, the number of possessions in a basketball game is approximately double that of a lacrosse game. So that means that the more skillful team will win more of the time.
Finally, there’s the aspect of how the game is scored. Go back to baseball. A team can get lots of players on base through hits and walks, but have no players cross the plate, based on when the outs
occur. In theory, one team could have 27 hits and score zero runs and another team can have one hit and win the game 1-0. It’s of course very, very unlikely but it gives you a feel for the influence
of the scoring method.
Basketball is the game that has the most skill. Football and baseball are not far from one another, but baseball teams play more than 10 times the games that football teams do. Baseball, in other
words, is close to random — even after 162 games the best teams only win about 60 percent of their games. Hockey, too, has an enormous amount of randomness.
One interesting thought is that the National Basketball Association and National Hockey League have had lockouts in successive seasons. Both leagues play a regular schedule of 82 games. The NHL
lockout hasn’t been resolved, and there is hope that they will play a shortened season as did the NBA last year. But there’s the key point: Even with a shortened season, we can tell which teams in
the NBA are best and hence deserve to make the playoffs. If the NHL season proceeds with a fraction of the normal number of games, the outcomes will be very random. Perhaps the very best teams will
have some edge, but you can almost be assured that there will be some surprises.
Arbesman: You devote some attention to the phenomenon of reversion to the mean. Most of us think we understand it, but are often wrong. What are ways we go wrong with this concept and why does this
happen so often?
Mauboussin: Your observation is spot on: When hearing about reversion to the mean, most people nod their heads knowingly. But if you observe people, you see case after case where they fail to account
for reversion to the mean in their behavior.
Here’s an example. It turns out that investors earn dollar-weighted returns that are less than the average return of mutual funds. Over the last 20 years through 2011, for instance, the S&P 500 has
returned about 8 percent annually, the average mutual fund about 6 to 7 percent (fees and other costs represent the difference), but the average investor has earned less than 5 percent. At first
blush it seems hard to see how investors can do worse than the funds they invest in. The insight is that investors tend to buy after the market has gone up — ignoring reversion to the mean — and sell
after the market has gone down — again, ignoring reversion to the mean. The practice of buying high and selling low is what drives the dollar-weighted returns to be less than the average returns.
This pattern is so well documented that academics call it the “dumb money effect.”
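A toy example shows how timing alone creates the gap (the numbers are invented for illustration, not taken from the studies cited). Suppose a fund returns +30 percent in year one and -10 percent in year two. A buy-and-hold investor who puts in $200 at the start ends with 200 x 1.3 x 0.9 = $234, a 17 percent gain, matching the fund's reported performance. An investor who puts in $100 at the start and, encouraged by the good year, adds another $100 before year two ends with (100 x 1.3 + 100) x 0.9 = $207 on the same $200 invested, a gain of only 3.5 percent. The fund's average return is identical in both cases; the second investor simply had more dollars exposed to the bad year, which is exactly what dollar-weighted returns capture.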
I should add that any time results from period to period aren’t perfectly correlated, you will have reversion to the mean. Saying it differently, any time luck contributes to outcomes, you will have
reversion to the mean. This is a statistical point that our minds grapple with.
Reversion to the mean creates some illusions that trip us up. One is the illusion of causality. The trick is you don’t need causality to explain reversion to the mean, it simply happens when results
are not perfectly correlated. A famous example is the stature of fathers and sons. Tall fathers have tall sons, but the sons have heights that are closer to the average of all sons than their fathers
do. Likewise, short fathers have short sons, but again the sons have stature closer to average than that of their fathers. Few people are surprised when they hear this.
But since reversion to the mean simply reflects results that are not perfectly correlated, the arrow of time doesn’t matter. So tall sons have tall fathers, but the height of the fathers is closer to
the average height of all fathers. It is abundantly clear that sons can’t cause fathers, but the statement of reversion to the mean is still true.
I guess the main point is that there is nothing so special about reversion to the mean, but our minds are quick to create a story that reflects some causality.
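The two-way symmetry is easy to see in a simulation (a sketch; the mean, standard deviation, and father-son correlation below are assumed round numbers, not measured values). Heights are generated so fathers and sons have the same distribution and a correlation of 0.5; the script then conditions on tall fathers and on tall sons separately.

import random, statistics

random.seed(2)
mu, sd, r = 175.0, 7.0, 0.5          # assumed mean (cm), SD, and father-son correlation
pairs = []
for _ in range(100000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    father = mu + sd * z1
    son = mu + sd * (r * z1 + (1 - r ** 2) ** 0.5 * z2)   # same marginal distribution
    pairs.append((father, son))

tall_fathers = [(f, s) for f, s in pairs if f > mu + sd]   # fathers 1 SD above the mean
tall_sons = [(f, s) for f, s in pairs if s > mu + sd]      # sons 1 SD above the mean

print("tall fathers average: ", round(statistics.mean(f for f, s in tall_fathers), 1))
print("their sons average:   ", round(statistics.mean(s for f, s in tall_fathers), 1))
print("tall sons average:    ", round(statistics.mean(s for f, s in tall_sons), 1))
print("their fathers average:", round(statistics.mean(f for f, s in tall_sons), 1))

In whichever direction you condition, the second group comes out closer to 175 than the first, with no causal story required.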
Arbesman: If we understand reversion to the mean properly, can this even help with parenting, such as responding to our children’s performance in school?
Mauboussin: Exactly, you’ve hit on another one of the fallacies, which I call the illusion of feedback. Let’s accept that your daughter’s results on her math test reflect skill plus luck. Now say she
comes home with an excellent grade, reflecting good skill and very good luck. What would be your natural reaction? You’d probably give her praise — after all, her outcome was commendable. But what is
likely to happen on the next test? Well, on average her luck will be neutral and she will have a lower score.
Now your mind is going to naturally associate your positive feedback with a negative result. Perhaps your comments encouraged her to slack off, you’ll say to yourself. But the most parsimonious
explanation is simply that reversion to the mean did its job and your feedback didn’t do much.
The same happens with negative feedback. Should your daughter come home with a poor grade reflecting bad luck, you might chide her and punish her by limiting her time on the computer. Her next test
will likely produce a better grade, irrespective of your sermon and punishment.
The main thing to remember is that reversion to the mean happens solely as the result of randomness, and that attaching causes to random outcomes does not make sense. Now I don’t want to suggest that
reversion to the mean reflects randomness only, because other factors most certainly do come into play. Examples include aging in athletics and competition in business. But the point is that
randomness alone can drive the process.
Arbesman: In your book you focus primarily on business, sports, and investing, but clearly skill and luck appear more widely in the world. In what other areas is a proper understanding of these two
features important (and often lacking)?
Mauboussin: One area where this has a great deal of relevance is medicine. John Ioannidis wrote a paper in 2005 called “Why Most Published Research Findings Are False” that raised a few eyebrows. He
pointed out that medical studies based on randomized trials, where there’s a proper control, tend to be replicated at a high rate. But he also showed that 80 percent of the results from observational
studies are either wrong or exaggerated. Observational studies create some good headlines, which can be useful to a scientist’s career.
The problem is that people hear about, and follow the advice of, these observational studies. Indeed, Ioannidis is so skeptical of the merit of observational studies that he, himself a physician,
ignores them. One example I discuss in the book is a study that showed that women who eat breakfast cereal are more likely to give birth to a boy than a girl. This is the kind of story that the media
laps up. Statisticians later combed the data and concluded that the result is likely a product of chance.
Now Ioannidis’s work doesn’t address skill and luck exactly as I’ve defined it, but it gets to the core issue of causality [Editor's shameless plug: for more about this in science, check out The
Half-Life of Facts!]. Wherever it’s hard to attribute causality, you have the possibility of misunderstanding what’s going on. So while I dwelled on business, sports, and investing, I’m hopeful that
the ideas can be readily applied to other fields.
Arbesman: What are some of the ways that sampling (including undersampling, biased sampling, and more) can lead us quite astray when understanding skill and luck?
Mauboussin: Let’s take a look at undersampling as well as biased sampling. Undersampling failure in business is a classic example. Jerker Denrell, a professor at Warwick Business School, provides a
great example in a paper called “Vicarious Learning, Undersampling of Failure, and the Myths of Management.” Imagine a company can select one of two strategies: high risk or low risk. Companies
select one or the other and the results show that companies that select the high-risk strategy either succeed wildly or fail. Those that select the low-risk strategy don’t do as well as the
successful high-risk companies but also don’t fail. In other words, the high-risk strategy has a large variance in outcomes and the low-risk strategy has smaller variance.
Say a new company comes along and wants to determine which strategy is best. On examination, the high-risk strategy would look great because the companies that chose it and survived had great success
while those that chose it and failed are dead, and hence are no longer in the sample. In contrast, since all of the companies that selected the low-risk strategy are still around, their average
performance looks worse. This is the classic case of undersampling failure. The question is: What were the results of all of the companies that selected each strategy?
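Denrell's point can be reproduced in a few lines (a sketch with made-up payoffs: the success probability and payoff ranges below are assumptions chosen only to make the bias visible).

import random

random.seed(3)
N = 10000

def high_risk_outcome():
    # Assumed: 20% chance of a big, variable win; otherwise the firm fails and vanishes.
    if random.random() < 0.2:
        return random.uniform(50, 150)
    return None

def low_risk_outcome():
    return random.uniform(5, 15)       # modest but safe

high = [high_risk_outcome() for _ in range(N)]
low = [low_risk_outcome() for _ in range(N)]
survivors = [x for x in high if x is not None]

print("high risk, survivors only:", round(sum(survivors) / len(survivors), 1))
print("high risk, all who tried: ", round(sum(x if x is not None else -100 for x in high) / N, 1))
print("low risk, all who tried:  ", round(sum(low) / N, 1))

Judged by survivors alone, the high-risk strategy looks far better than the safe one; judged by everyone who tried it (with failure scored here as -100, another assumption), it is far worse.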
Now you might think that this is super obvious, and that thoughtful companies or researchers wouldn’t do this. But this problem plagues a lot of business research. Here’s the classic approach to
helping businesses: Find companies that have succeeded, determine which attributes they share, and recommend other companies seek those attributes in order to succeed. This is the formula for many
bestselling books, including Jim Collins’s Good to Great. One of the attributes of successful companies that Collins found, for instance, is that they are “hedgehogs,” focused on their business. The
question is not: Were all successful companies hedgehogs? The question is: Were all hedgehogs successful? The second question undoubtedly yields a different answer than the first.
Another common mistake is drawing conclusions based on samples that are small, which I’ve already mentioned. One example, which I learned from Howard Wainer, relates to school size. Researchers
studying primary and secondary education were interested in figuring out how to raise test scores for students. So they did something seemingly very logical – they looked at which schools have the
highest test scores. They found that the schools with the highest scores were small, which makes some intuitive sense because of smaller class sizes, etc.
But this falls into a sampling trap. The next question to ask is: which schools have the lowest test scores? The answer: small schools. This is exactly what you would expect from a statistical
viewpoint since small samples have large variances. So small schools have the highest and lowest test scores, and large schools have scores closer to the average. Since the researchers only looked at
high scores, they missed the point.
This is more than a case for a statistics class. Education reformers proceeded to spend billions of dollars reducing the sizes of schools. One large school in Seattle, for example, was broken into
five smaller schools. It turns out that shrinking schools can actually be a problem because it leads to less specialization—for example, fewer advanced placement courses. Wainer calls the
relationship between sample size and variance the “most dangerous equation” because it has tripped up so many researchers and decision makers over the years.
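The equation Wainer has in mind is what he elsewhere calls de Moivre's equation, the standard error of a mean: the standard deviation of an average of n observations is the underlying standard deviation divided by the square root of n. A school's mean test score therefore fluctuates around its true level with a spread proportional to 1/sqrt(enrollment), so small schools crowd both the top and the bottom of any ranking even when nothing about them is genuinely better or worse.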
Arbesman: Your discussion of the paradox of skill—that the more skillful the population, the more luck plays a role—reminded me a bit of the Red Queen effect, where in evolution, organisms are constantly
competing against other highly adapted organisms. Do you think there is any relationship?
Mauboussin: Absolutely. I think the critical distinction is between absolute and relative performance. In field after field, we have seen absolute performance improve. For example, in sports that
measure performance using a clock—including swimming, running, and crew—athletes today are much faster than they were in the past and will continue to improve up to the point of human physiological
limits. A similar process is happening in business, where the quality and reliability of products has increased steadily over time.
But where there’s competition, it’s not absolute performance we care about but relative performance. This point can be confusing. For example, the analysis shows that baseball has a lot of
randomness, which doesn’t seem to square with the fact that hitting a 95-mile-an-hour fastball is one of the hardest things to do in any sport. Naturally, there is tremendous skill in hitting a
fastball, just as there is tremendous skill in throwing a fastball. The key is that as pitchers and hitters improve, they improve in rough lockstep, offsetting one another. The absolute improvement
is obscured by the relative parity.
This leads to one of the points that I think is most counter to intuition. As skill increases, it tends to become more uniform across the population. Provided that the contribution of luck remains
stable, you get a case where increases in skill lead to luck being a bigger contributor to outcomes. That’s the paradox of skill. So it’s closely related to the Red Queen effect.
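In terms of the variance arithmetic from earlier, the paradox is mechanical: if outcome variance is skill variance plus luck variance, then shrinking the skill spread while luck stays fixed raises luck's share. For instance (invented numbers), if the skill term falls from 0.04 to 0.01 while the luck term stays at 0.01, luck's share of outcomes rises from 20 percent to 50 percent.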
Arbesman: What single concept or idea do you feel is most important for understanding the relationship between skill and luck?
Mauboussin: The single most important concept is determining where the activity sits on the continuum of all-luck, no-skill at one end to no-luck, all-skill at the other. Placing an activity is the
best way to get a handle on predicting what will happen next.
Let me share another angle on this. When asked which was his favorite paper of all-time, Daniel Kahneman pointed to “On the Psychology of Prediction,” which he co-authored with Amos Tversky in 1973.
Tversky and Kahneman basically said that there are three things to consider in order to make an effective prediction: the base rate, the individual case, and how to weight the two. In luck-skill
language, if luck is dominant you should place most weight on the base rate, and if skill is dominant then you should place most weight on the individual case. And the activities in between get
weightings that are a blend.
In fact, there is a concept called the “shrinkage factor” that tells you how much you should revert past outcomes to the mean in order to make a good prediction. A shrinkage factor of 1 means that
the next outcome will be the same as the last outcome and indicates all skill, and a factor of 0 means the best guess for the next outcome is the average. Almost everything interesting in life is in
between these extremes.
To make this more concrete, consider batting average and on-base percentage, two statistics from baseball. Luck plays a larger role in determining batting average than it does in determining on-base
percentage. So if you want to predict a player’s performance (holding skill constant for a moment), you need a shrinkage factor closer to 0 for batting average than for on-base percentage.
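Written out (a standard formulation of the idea, not a formula quoted from the book): the prediction is the average plus c times (most recent outcome minus average), where c is the shrinkage factor. With c = 1 the best guess is the last outcome (all skill); with c = 0 it is the average (all luck); and in practice c can be estimated from the period-to-period correlation of the statistic, which is why the more luck-laden batting average calls for a c closer to zero than on-base percentage does.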
I’d like to add one more point that is not analytical but rather psychological. There is a part of the left hemisphere of your brain that is dedicated to sorting out causality. It takes in
information and creates a cohesive narrative. It is so good at this function that neuroscientists call it the “interpreter.”
Now no one has a problem with the suggestion that future outcomes combine skill and luck. But once something has occurred, our minds quickly and naturally create a narrative to explain the outcome.
Since the interpreter is about finding causality, it doesn’t do a good job of recognizing luck. Once something has occurred, our minds start to believe it was inevitable. This leads to what
psychologists call “creeping determinism” – the sense that we knew all along what was going to happen. So while the single most important concept is knowing where you are on the luck-skill continuum,
a related point is that your mind will not do a good job of recognizing luck for what it is.
Top image:David Eccles/Flickr/CC | {"url":"http://www.wired.com/2012/11/luck-and-skill-untangled-qa-with-michael-mauboussin/all/1","timestamp":"2014-04-19T09:43:24Z","content_type":null,"content_length":"129782","record_id":"<urn:uuid:78ed7102-fac3-4b0d-8526-a3df62ac9a1a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bob Proctor
Professor of Mathematics
University of North Carolina
Go to the page for:
Let's Expand Rota's Twelvefold Way For Counting Partitions!
(This proposed American Mathematical Monthly article is the reference for my remarks in the On-Line Encyclopedia of Integer Sequences on partition counts. These appear in the OEIS after you scroll to
the bottom of the 'Par' screen of the OEIS alphabetical index.)
Chapel Hill Poset Atlas (leaves this site and goes to a separate site)
d-Complete Posets Generalize Young Diagrams
The following two theorems extend the realms of the hook length and jeu de taquin properties from the non-trivial classes of posets historically known to possess these properties (shapes, shifted
shapes, and rooted trees) to one much larger unifying class of posets.
Theorem 1:
Every d-complete poset has the jeu de taquin property.
Theorem 2: (Dale Peterson - B.P.)
Every d-complete poset has the hook length property.
Corollary to Theorem 2:
The number of order extensions of a d-complete poset is given by a hook length product formula which generalizes the famous Frame-Robinson-Thrall formula for the number of standard Young tableaux on
a given shape.
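For reference, the classical Frame-Robinson-Thrall formula mentioned above (quoted here only as background, not as part of the announcement) states that the number of standard Young tableaux of a shape $\lambda$ with $n$ boxes is $f^{\lambda} = n! / \prod_{c \in \lambda} h(c)$, where $h(c)$ is the hook length of the cell $c$: the number of cells to its right in its row, plus the number below it in its column, plus one for $c$ itself.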
Click here for as little or as much of an Exposition of These Results as you desire,
or click here for a list of selected publications.
Contact Information for Students
Office: Phillips Hall 390 - Please come by during my office hours!
(These are posted on my door and they also appear in the 'Course Info' document
that is posted on Blackboard.) If these are not good times for you, please talk
to me after class to set some other time between 4:00 and 6:00.
Office Phone: 919-962-9623
Electronic mail address: rap =at= email.unc.edu
Research Interests
I primarily work in an area of overlap between combinatorics and representations of Lie algebras. From the combinatorial side, some of the objects which arise include Young tableaux, plane
partitions, posets, generating functions, and enumeration formulas. From the representation side, some of the objects which arise include roots of Lie algebras, weights and characters of
representations, and explicit actions of Lie algebras in representations, sometimes realized with posets.
Mailing Address
Math Dept, CB #3250 /// Univ of No Carolina /// Chapel Hill, NC 27599 /// USA | {"url":"http://www.unc.edu/math/Faculty/rap/","timestamp":"2014-04-17T07:32:47Z","content_type":null,"content_length":"4371","record_id":"<urn:uuid:769ce1a8-56cd-4c6b-b940-00bff9fbf303>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
hyperbolic geometry
Hyperbolic geometry is an example of a geometry where the parallel postulate fails in the sense that given a line L and a point P not on that line there are infinitely many lines through P that are
parallel to L. The parallel postulate stood out from Euclid's other postulates for geometry in that it was less elementary than the others.
For almost 2000 years people attempted to resolve this by showing that it followed from the other axioms of geometry. This finally came to an end in the beginning of the 19th century. A model of
geometry where the parallel postulate does not hold was claimed by Lobachevsky, though later it was learned that Karl Friedrich Gauss had come up with one earlier. | {"url":"http://everything2.com/title/hyperbolic+geometry","timestamp":"2014-04-17T22:23:12Z","content_type":null,"content_length":"30238","record_id":"<urn:uuid:3b2f6067-5bdb-4239-bfef-c67ee1f4ba94>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Pittsburg, CA Statistics Tutor
Find a Pittsburg, CA Statistics Tutor
...I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics. In addition, I have significant experience tutoring students in
lower division college mathematics courses such as calculus, multivariable calculus, linear algebra and d...
25 Subjects: including statistics, physics, algebra 1, calculus
...In my 1978 and 1979 collaborations with Barbara Henker and Carol Whalen, I used analyses of variance and correlational analysis of observer ratings and behavioral observations (using a
behavioral observation system we devised) of hyperactive children. In 1984 Lois Mintz and I used analyses of va...
6 Subjects: including statistics, writing, SPSS, psychology
...I can help you with a variety of topics in Excel. If you are completely new to Excel, I will teach you the Excel environment and elementary concepts and neat features of Excel, including
entering data versus formulas, absolute versus relative cell referencing, populating cells easily, formatting...
23 Subjects: including statistics, calculus, geometry, algebra 2
...I completed the examinations to become an Associate of the Society of Actuaries in 1974. I am currently a Member of the American Academy of Actuaries and an Enrolled Actuary with the U.S.
Departments of Labor and Treasury.
10 Subjects: including statistics, calculus, geometry, algebra 1
...I understand where the problems are, and how best to get past them and onto a confident path to math success. My undergraduate degree is in mathematics, and I have worked as a computer
professional, as well as a math tutor. My doctoral degree is in psychology.
20 Subjects: including statistics, calculus, geometry, biology | {"url":"http://www.purplemath.com/pittsburg_ca_statistics_tutors.php","timestamp":"2014-04-19T07:11:24Z","content_type":null,"content_length":"24137","record_id":"<urn:uuid:d5ec72b9-fd00-47ab-894a-5a6f7b9a609c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Merrionette Park, IL Calculus Tutor
Find a Merrionette Park, IL Calculus Tutor
...I teach by asking the student prompting questions so the student can practice the thought processes that lead them to determining correct answers on their own. This increases the success rate
on examinations and enhances the critical thinking skills necessary in the 'real world'. I also provide...
13 Subjects: including calculus, chemistry, geometry, biology
...Since graduating with a dissertation from the University of Chicago, I have started a life-long learning network and I teach students of all ages. I believe that World History, Art History,
and Archaeology are important because they tell the story of our world. World History develops critical thinking and asks students to think through issues unique to each time and place.
10 Subjects: including calculus, geometry, algebra 1, algebra 2
...I also had minor in Chemistry and took several classes in General, Organic Chemistry, Chemical Kinetics, and Chemical Thermodynamics. Ph.D. in physics with the GPA 4.0. Worked as a teaching
assistant for 2 and a half years, taught at summer camps.
18 Subjects: including calculus, chemistry, physics, geometry
...Learning is personal, so my goal is to connect with each and every student in whatever way is most helpful to them. I look forward to working with you and your children!I was an advanced math
student, completing the equivalent of Algebra 1 before high school. I continued applying algebraic skills in high school, where I was a straight A student and completed calculus as a junior.
13 Subjects: including calculus, statistics, algebra 2, geometry
...After graduating from Loyola University, I began tutoring in ACT Math/Science at Huntington Learning Center in Elgin. I took pleasure in helping students understand concepts and succeed. My
best students are those that desire to learn and I seek to cultivate that attitude of growth and learning through a zest and enthusiasm for learning.
26 Subjects: including calculus, Spanish, chemistry, writing
Related Merrionette Park, IL Tutors
Merrionette Park, IL Accounting Tutors
Merrionette Park, IL ACT Tutors
Merrionette Park, IL Algebra Tutors
Merrionette Park, IL Algebra 2 Tutors
Merrionette Park, IL Calculus Tutors
Merrionette Park, IL Geometry Tutors
Merrionette Park, IL Math Tutors
Merrionette Park, IL Prealgebra Tutors
Merrionette Park, IL Precalculus Tutors
Merrionette Park, IL SAT Tutors
Merrionette Park, IL SAT Math Tutors
Merrionette Park, IL Science Tutors
Merrionette Park, IL Statistics Tutors
Merrionette Park, IL Trigonometry Tutors
Nearby Cities With calculus Tutor
Alsip calculus Tutors
Argo, IL calculus Tutors
Blue Island calculus Tutors
Calumet Park, IL calculus Tutors
Chicago Ridge calculus Tutors
Crestwood, IL calculus Tutors
Dixmoor, IL calculus Tutors
Evergreen Park calculus Tutors
Hometown, IL calculus Tutors
Posen, IL calculus Tutors
Riverdale, IL calculus Tutors
Robbins, IL calculus Tutors
Summit Argo calculus Tutors
Summit, IL calculus Tutors
Worth, IL calculus Tutors | {"url":"http://www.purplemath.com/Merrionette_Park_IL_Calculus_tutors.php","timestamp":"2014-04-21T07:17:54Z","content_type":null,"content_length":"24603","record_id":"<urn:uuid:756e5cb1-cba0-4d5c-a727-ac3202e24097>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Projects for Multicultural Mathematics
Project 1
due Wednesday, September 29th, 2004
Projects should be about 5 pages typewritten or neatly handwritten, including diagrams. You may do the projects in groups of 3 or 4. Ninety percent of your grade will be the overall grade I give the
project; ten percent will be based on a peer evaluation. Students who do not participate in the project will receive a zero for the assignment. Projects will be graded on completeness, mathematical
content, and creativity.
Fantasy Math. Create your own number system, as J.R.R. Tolkein did in the Lord of the Rings. You'll need
• spoken names for your numbers
• a positional written system for your numbers
• a multiplication table for 1 x 1 through 12 x 12 (that is, however you write 12[ten] in your system), plus comments on interesting patterns in the table
• a translation of the numbers 100, 157, 517, and 1000 into your number system
• a sketch or model of a mathematical ``artifact'' produced by your fictional culture
• a history for your number system, including a description of the (fictional) culture that produced it and the reason why the base was chosen
The only restriction is that your system can't be base 10, 12, 5, or 20, since we've already studied examples like these. I recommend that you use a base smaller than 12.
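A worked illustration (an example only; base seven is just one allowable choice): in base seven, 157[ten] is written 313, since 157 = 3 x 49 + 1 x 7 + 3 x 1, and 100[ten] is written 202, since 100 = 2 x 49 + 0 x 7 + 2 x 1. Converting a few familiar numbers this way is a useful check that your positional system is consistent before you build the multiplication table.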
IMPORTANT: Do not split the project up into pieces, with each person doing only one piece. It is imperative that everyone in the group is familiar with the number system. I will give you some class
time to work on the project; however, you should also schedule meetings outside of class. | {"url":"http://people.sju.edu/~rhall/Multi/project1.html","timestamp":"2014-04-16T05:05:44Z","content_type":null,"content_length":"2563","record_id":"<urn:uuid:4bca0c3a-71f1-4fdb-a259-9faef2854826>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Fourier Analysis on Lewko's blog
Let ${\chi_{B}(x)}$ denote the characteristic function of the unit ball ${B}$ in ${d}$ dimensions. For a smooth function of rapid decay, say ${f}$, we can define the linear operator ${S_{1}}$ by the
$\displaystyle \widehat{S_{1}f}(\xi) = \chi_{B}(\xi)\hat{f}(\xi)$
where ${\hat{f}(\xi)}$ denotes the Fourier transform of ${f}$, as usual. This operator naturally arises in problems regarding the convergence of Fourier transforms (which we discuss below). A
fundamental problem regarding this operator is to determine for which values of ${p}$ and ${d}$ we can extended ${S_{1}}$ to a bounded linear operator on ${L^{p}(\mathbb{R}^d)}$. The ${1}$
-dimensional case of this problem was settled around 1928 by M. Riesz, however the higher dimensional cases proved to be much more subtle. In 1954 Herz showed that $2d/(d+1) <p< 2d/(d-1)$ was
a necessary condition for the boundedness of $S_{1}$, and sufficient in the special case of radial functions. It was widely conjectured that these conditions were also sufficient in general (this was
known as the disc conjecture). However, in 1971 Charles Fefferman proved, for ${d\geq2}$, that ${S_{1}}$ does not extend to a bounded operator on any ${L^p}$ space apart from the trivial case when $
{p=2}$ (which follows from Parseval’s identity). Recently, I needed to look at Fefferman’s proof and decided to spend some time trying to figure out what is really going on. I will attempt to give a
motivated account of Fefferman’s result, in a two post presentation. In this (the first) post I will describe the motivation for the problem, as well as develop some tools needed in the proof. The
problems discussed here were first considered in the context of Fourier series (i.e. functions on the ${d}$-dimensional torus ${\mathbb{T}^d}$). It turns out, however, that these problems are
slightly easier to address on Euclidean space, and are equivalent thanks to a result of de Leeuw. In light of this, we will work exclusively on ${\mathbb{R}^d}$. (more…) | {"url":"http://lewko.wordpress.com/tag/fourier-analysis/","timestamp":"2014-04-20T13:19:14Z","content_type":null,"content_length":"26510","record_id":"<urn:uuid:2dcf036c-a0e0-4360-bf33-e56dcced8775>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proof in a Metric Space
January 17th 2006, 07:36 AM #1
Junior Member
Nov 2005
Proof in a Metric Space
Let (X, d) be a metric space, and let V,W contained in X be disjoint (ie the intersection of V and W is empty), nonempty and closed. Prove that there exist disjoint open V' contained in X and
W' contained in X such that V is contained in V' and W is contained in W'.
Let (X, d) be a metric space, and let V,W contained in X be disjoint (ie the intersection of V and W is empty), nonempty and closed. Prove that there exist disjoint open V' contained in X and
W' contained in X such that V is contained in V' and W is contained in W'.
Sketch of a proof (you may have to fill in detail):
1. There exists a $\delta > 0$ such that for all $v \in V$ and $w \in W$, $d(v,w) > \delta$.
For if this were not the case we could construct a sequence $v_i \in V$, $i = 1, 2, 3, \ldots$ with
limit $w \in W$, but this contradicts $V$ closed; as closed means the limit must be in $V$.
(A closed set contains all its limit points)
2. Let $W'$ be the union of all open balls centred on points in $W$ of diameter $<\delta/2$,
and similarly for $V'$. Then $V'$ and $W'$ have the required properties.
Many thanks!
Sketch of a proof (you may have to fill in detail):
1. There exists a $\delta > 0$ such that for all $v \in V$ and $w \in W$, $d(v,w) > \delta$.
For if this were not the case we could construct a sequence $v_i \in V$, $i = 1, 2, 3, \ldots$ with
limit $w \in W$, but this contradicts $V$ closed; as closed means the limit must be in $V$.
(A closed set contains all its limit points)
I think there is a hidden assumption here that the space is complete.
I will have to look at this again to see if I can get around the problem.
A possible way around this problem is to work with the completion of X?
Further thought shows that this cannot be adapted to non-complete
metric spaces, sorry
Last edited by CaptainBlack; January 19th 2006 at 02:29 AM.
Not really, the proof is fine.
(we don't need completeness. but spoke too fast, rgep is right.
Last edited by Rebesques; January 19th 2006 at 10:46 PM.
1. There exists a $\delta > 0$ such that for all $v \in V$ and $w \in W$, $d(v,w) > \delta$.
No: consider the disjoint closed sets in $\mathbf{R}^2$ given by $V=\{(x,y) : x = 0\}$ and $W=\{(x,y) : xy=1\}$. You're implicitly assuming the space is compact.
Define $d(x,V) = \inf\{d(x,v) : v \in V\}$, observe that this is non-zero for $x \notin V$ and then consider the set $\{ x : d(x,V) > d(x,W)\}$.
No: consider the disjoint closed sets in $\mathbf{R}^2$ given by $V=\{(x,y) : x = 0\}$ and $W=\{(x,y) : xy=1\}$. You're implicitly assuming the space is compact.
Compactness may suffice, but since I know what I was assuming
I can assure you that my assumption was in fact completness.
I had in fact constructed your counter example while investigating
the problems with the proof.
Define $d(x,V) = \inf\{d(x,v) : v \in V\}$, observe that this is non-zero for $x \notin V$ and then consider the set $\{ x : d(x,V) > d(x,W)\}$.
Yes, I had got to this point, but had not yet had time to show that this
last set is open. I'm probably missing something obvious here
Last edited by CaptainBlack; January 19th 2006 at 11:28 PM.
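For completeness, here is a standard way to finish that last step (a routine metric-space argument sketched here, not taken from the thread itself). For any $x, y \in X$ and $v \in V$ the triangle inequality gives $d(x,V) \le d(x,y) + d(y,V)$, and swapping $x$ and $y$ yields $|d(x,V) - d(y,V)| \le d(x,y)$, so $x \mapsto d(x,V)$ is 1-Lipschitz and hence continuous; the same holds for $d(\cdot,W)$. Therefore $V' = \{x : d(x,V) < d(x,W)\}$ and $W' = \{x : d(x,W) < d(x,V)\}$ are open (preimages of open half-lines under a continuous function) and clearly disjoint, and $V \subseteq V'$ because for $x \in V$ we have $d(x,V) = 0$ while $d(x,W) > 0$ since $W$ is closed and $x \notin W$; similarly $W \subseteq W'$.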
| {"url":"http://mathhelpforum.com/advanced-algebra/1654-proof-metric-space.html","timestamp":"2014-04-18T07:09:20Z","content_type":null,"content_length":"57332","record_id":"<urn:uuid:487bdd65-62b9-48a6-ac43-c9f1ec5f2c51>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
SymMath Applications
Kevin Lehmann
Gaussian Distributions (Kevin Lehmann, Princeton)
The author generates a set of data from a Gaussian distribution to illustrate the properties of the distribution, such as the mean of a set of data, the confidence level for the mean, the chi-squared function, and the Student t Distribution. The document concludes with a discussion of the mean absolute deviation and a survey of the moments of a distribution and how the mean absolute deviation and the moments are used to characterize the shape of the distribution. Variance, skew, and kurtosis are the three moments discussed in this document.
Mean Versus Median (Kevin Lehmann, Princeton)
This worksheet provides a comparison of the mean and median values for both theoretical distributions and for data sets sampled from Gaussian and Lorentzian distribution functions. The document shows that the mean value provides a moderately better estimate of the central value than the median for the case of a Gaussian. However, in the case of a Lorentzian, due to its slow fall-off for large displacements from the central value, the mean is almost useless as a statistic, while the median functions quite well. The document also introduces the idea of finding the optimal estimate by using the method of maximum likelihood. This document requires Mathcad 6.0+ including upgrade through patch 'e'.
Rejection of Data (Kevin Lehmann)
The document provides a detailed presentation of the theory of rejection of data using a Gaussian distribution. The document discusses the conditions under which the Q-test is used. The exercises in the document give students opportunities to practice the concepts. The document provides a numerical example of how statistical methods can reduce the errors in information extracted from measurements with real, as opposed to Gaussian, noise characteristics.
The Morse Oscillator (Kevin Lehmann)
In this worksheet, we find a presentation of the vibrational motion of a diatomic molecule held together with a potential function of a special form known as the Morse Potential. Both the classical and quantum motion of the oscillator will be studied, and explicit expressions for eigenenergies and wavefunctions are given. The effect of rotation is also discussed. The document contains 20 embedded exercises and 4 advanced problems for users to test their mastery of the topic. | {"url":"http://www.chemeddl.org/alfresco/service/org/chemeddl/symmath/apps?author_id=23&guest=true","timestamp":"2014-04-19T22:21:50Z","content_type":null,"content_length":"5071","record_id":"<urn:uuid:04d8d7a7-aaa0-4e79-94cd-eb1f1843140e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
what is the least common multiple of 11,44?
the multiples of 11 are: 11, 22, 33, 44, 55... because 44 is a multiple of 11, it is also the least common multiple. so the answer is 44
but what are the LCM of 44 ?
the multiples of 44 are 44, 88, 132, 176... the multiples of 11 are 11, 22, 33, 44, 55, ... the Least Common Multiple (LCM) is the smallest multiple that they both have ... the smallest one in
both lists (the LCM) is 44
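In general (a quick sketch, not part of the original answers): once you have the greatest common divisor, the LCM of any two positive integers follows directly.

from math import gcd

def lcm(a, b):
    # lcm(a, b) * gcd(a, b) == a * b for positive integers
    return a * b // gcd(a, b)

print(lcm(11, 44))   # prints 44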
| {"url":"http://openstudy.com/updates/50363a84e4b09d6877142404","timestamp":"2014-04-16T19:42:19Z","content_type":null,"content_length":"32538","record_id":"<urn:uuid:4ec52e92-d9f3-45af-a722-826705babaaf>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
Annihilating Tate-Shafarevich groups
Seminar Room 1, Newton Institute
We describe how main conjectures in non-commutative Iwasawa theory lead naturally to the (conjectural) construction of a family of explicit annihilators of the Bloch-Kato-Tate-Shafarevich Groups that
are attached to a wide class of p-adic representations over non-abelian extensions of number fields. Concrete examples to be discussed include a natural non-abelian analogue of Stickelberger's
Theorem (which is proved) and of the refinement of the Birch and Swinnerton-Dyer Conjecture due to Mazur and Tate. Parts of this talk represent joint work with James Barrett and Henri Johnston.
| {"url":"http://www.newton.ac.uk/programmes/NAG/seminars/2009073015301.html","timestamp":"2014-04-18T04:15:52Z","content_type":null,"content_length":"6070","record_id":"<urn:uuid:7f13967c-15c3-4153-971f-0142f2b7b5e8>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
November 18th 2010, 11:52 AM #1
What does the rearranging lead to?
$\frac{\partial a}{\partial x} \partial y = \frac{\partial b}{\partial y} \partial x$
Dividing through by $\partial x$, which would it lead to?
$\frac{\partial a}{\partial x^2} \partial y = \frac{\partial b}{\partial y}$
$\frac{\partial ^2 a}{\partial x^2} \partial y = \frac{\partial b}{\partial y}$
Equation (1) or (2)?
Hello, Simplicity!
What does the rearranging lead to?
. . $\displaystyle \frac{\partial a}{\partial x} \partial y \:=\: \frac{\partial b}{\partial y} \partial x$
Dividing through by $\partial x$, which would it lead to?
$\displaystyle (1)\;\frac{\partial a}{\partial x^2} \partial y \:=\: \frac{\partial b}{\partial y}$
. . . . . . . or
$\displaystyle (2)\;\frac{\partial ^2 a}{\partial x^2} \partial y \:=\: \frac{\partial b}{\partial y}$
We have: . $\displaystyle \frac{\partial a}{\partial x}\partial y \;=\;\frac{\partial b}{\partial y}\partial x$
Dividing through by $\partial x$, we have: . $\displaystyle \frac{\partial a}{\partial x}\cdot \frac{\partial y}{\partial x} \;=\;\frac{\partial b}{\partial y}\cdot\frac{\partial x}{\partial x}$
Then we have: . $\displaystyle\frac{\partial a}{\partial x}\cdot\frac{\partial y}{\partial x} \;=\;\frac{\partial b}{\partial y}$
. . which can be written: . $\displaystyle \frac{\partial a\partial y}{\partial x^2} \;=\;\frac{\partial b}{\partial y}$ . . . answer (1)
| {"url":"http://mathhelpforum.com/calculus/163699-rearranging.html","timestamp":"2014-04-19T07:29:05Z","content_type":null,"content_length":"36491","record_id":"<urn:uuid:dec3dcd5-f1db-4b88-998e-a8937de0cb7e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Minnesota Supercomputing Institute
Research Abstracts Online
January 2009 - March 2010
Main TOC ....... College TOC ....... Next Abstract
University of Minnesota Twin Cities
Institute of Technology
Department of Electrical and Computer Engineering
PI: John C. Kieffer
Perturbation Theory of Source Code Design
Convolutional source codes are considered for which the generating matrix consists of two linearly independent binary rows of fixed length such that the first row begins and ends with one; these
codes are used to compress a long binary data sequence into a binary sequence half as long. A perturbation class consists of all such codes for which the first row of the generating matrix is the
vector of coefficients of a fixed primitive polynomial over the binary field. Each code in the perturbation class is viewed as a nonzero element of the Galois field generated by the primitive
polynomial, and lies in a certain conjugacy class. In the perturbation theory of source code design, one selects a code from a perturbation class in two steps: (1) a conjugacy class is selected by a
certain rule; (2) a code within that conjugacy class is selected by a certain rule.
These researchers are testing the efficacy of a choice of selection rules (1)-(2). A long pseudorandom binary data sequence is generated, compressed/decompressed via the code in the perturbation
class determined by rules (1)-(2), and then the fraction of errors in the decompressed sequence is computed; the primitive polynomial for which this fraction of errors is minimal is then found.
Group Member
John Marcos, Graduate Student | {"url":"https://www.msi.umn.edu/about/publications/annualreport/AR2009-10/abstracts/KiefferJC.html","timestamp":"2014-04-20T10:59:15Z","content_type":null,"content_length":"10243","record_id":"<urn:uuid:16a17532-d878-45e1-bafc-f2add41b3725>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Test Vector Types in R
R contains a set of functions that allow you to test for the type of a vector. All these functions have the same syntax: is, a dot, and then the name of the type.
You can test whether a vector is of type foo by using the is.foo() function. This test works for every type of vector; just replace foo with the type you want to check.
To test whether baskets.of.Granny is a numeric vector, for example, use the following code:
> is.numeric(baskets.of.Granny)
[1] TRUE
You may think that baskets.of.Granny is a vector of integers, so check it, as follows:
> is.integer(baskets.of.Granny)
[1] FALSE
R disagrees with the math teacher here. Integer has a different meaning for R than it has for us. The result of is.integer() isn’t about the value but about the way the value is stored in memory.
R has two main modes for storing numbers. The standard mode is double. In this mode, every number uses 64 bits of memory. The number also is stored in three parts. One bit indicates the sign of the
number, 52 bits represent the decimal part of the number, and the remaining bits represent the exponent.
This way, you can store numbers as big as 1.8 × 10^308 in only 64 bits. The integer mode takes only 32 bits of memory, and the numbers are represented as binary integers in the memory. So, the
largest integer is about 2.1 billion, or, more exactly, 2^31 – 1. That’s 31 bits to represent the number itself, 1 bit to represent the sign of the number, and –1 because you start at 0.
You should use integers if you want to do exact integer calculations on small integers or if you want to save memory. Otherwise, the mode double works just fine.
You force R to store a number as an integer by adding L after it, as in the following example:
> x <- c(4L,6L)
> is.integer(x)
[1] TRUE
Whatever mode is used to store the value, is.numeric() returns TRUE in both cases. | {"url":"http://www.dummies.com/how-to/content/how-to-test-vector-types-in-r.html","timestamp":"2014-04-17T10:37:23Z","content_type":null,"content_length":"52709","record_id":"<urn:uuid:45abb0e4-b014-4d11-8918-b226e0fa3184>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'Divide the Will' Brain Teaser
Divide the Will
Math brain teasers require computations to solve.
Puzzle ID: #897
Category: Math
Submitted By: Michelle
An old woman died leaving an estate of $1,000,000. Her estate was divided up among her surviving relatives. Her relatives: Jennie, Bobbie, Ellie, Ginnie, Candie, Dannie, Frannie, Annie, Hollie and
Kathie were each given an amount of money. The eccentric woman's will stated that each of her relatives was to be given an amount based on the alphabetic order of their first name. The heirs were
listed in the order of their names and thus the order of their gift. Annie received the most and Kathie received the least amount. The difference in the amounts given to each person was to be
constant. (i.e. The difference between the amount given to the first and second person on the list was the same as the difference in the amount given to the next-to-last and the last person on the list.)
If Ellie received 108,000, how much did each other person receive?
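A short script to work out the shares (a sketch that just encodes the arithmetic-progression conditions stated above; skip it if you want to solve the puzzle unaided):

total, n = 1000000, 10
names = sorted(["Jennie", "Bobbie", "Ellie", "Ginnie", "Candie",
                "Dannie", "Frannie", "Annie", "Hollie", "Kathie"])
avg = total / n                          # average share of an arithmetic progression
ellie_rank = names.index("Ellie")        # 0-based position in alphabetical order
d = (108000 - avg) / (4.5 - ellie_rank)  # share(k) = first - k*d and avg = first - 4.5*d
first = avg + 4.5 * d
for k, name in enumerate(names):
    print(name, int(first - k * d))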
| {"url":"http://www.braingle.com/brainteasers/897/divide-the-will.html","timestamp":"2014-04-18T13:27:02Z","content_type":null,"content_length":"22926","record_id":"<urn:uuid:a01222e0-bcbb-4475-980e-893428c42ab9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Yahoo Groups
Carol/Kynea and PRP
Consider a number of this type n which is composite.
suppose all the factors of n are known then a*b*c=n
a,b,c>1 and the number has only 3 factors for simplicity reasons.
gcd(a-1,b-1,c-1)= 2*m
m in most cases will be 1 but can differ.
if this is true then 2*m also divides n-1
With that said it can be proven that there is a number h such that
h^(2*m)-1 = 0 (mod n)
h^(n-1)-1=0 (mod n)
or n is a PRP for base h.
An open question related to above:
Given x,k and a prime p solve for a
a^x= k (mod p)
How would this be done? What algorithms can be used? What are their run times?
Harsh Aggarwal
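On that closing question (a side note, not from the original post): solving a^x = k (mod p) for a means extracting an x-th root mod p. The easy case is gcd(x, p-1) = 1, where x is invertible modulo p-1 and one modular inverse plus one modular exponentiation suffice, roughly O(log p) multiplications mod p; when gcd(x, p-1) > 1 the root need not exist or be unique, and general-purpose root-extraction algorithms such as Adleman-Manders-Miller handle it. A sketch of the easy case:

def xth_root_mod_p(k, x, p):
    # Assumes p is prime and gcd(x, p - 1) == 1, so x has an inverse mod p - 1.
    # With d = x^(-1) mod (p - 1) and a = k**d, we get a**x = k**(1 + m*(p-1)) = k (mod p)
    # by Fermat's little theorem.
    d = pow(x, -1, p - 1)     # modular inverse (Python 3.8+)
    return pow(k, d, p)

p, x, k = 101, 7, 34
a = xth_root_mod_p(k, x, p)
assert pow(a, x, p) == k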
| {"url":"https://groups.yahoo.com/neo/groups/primenumbers/conversations/topics/14448?o=1&d=-1","timestamp":"2014-04-19T04:22:12Z","content_type":null,"content_length":"39341","record_id":"<urn:uuid:31dbb050-df1c-4c1d-af2d-f3d8470f13b4>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
compare pointer and integer?
08-19-2010 #16
actually, ms. laserlight, i became more confused of the terms you used. i'm only on my first year at learning c, and still at 3rd year high school.
i don't know what it means to change command_words from an array of 50 pointers to char, to an array of 50 arrays of char.
i just wanted to know how to separate strings into the different words, variables or numbers (which serve as the commands), as long as they are separated by a space, because later on i will
have to deal with strings with 4 commands such as:
>>MMUL A B C
which means that matrix A will be multiplied by matrix B and the answer will be stored to matrix C, or
>>SMUL 2 A B
which means that a scalar (int), 2, will be multiplied by matrix A and the answer will be stored to matrix B, or
>>ADDR A 1 2 3
which means that the 1st row of matrix A will be added to its 2nd row and the answer will be stored to its 3rd row.
i really have no idea on how to do it, but i think i know the operations well. it's just that i have no idea how to deal with these kinds of strings. in fact, i only learned strtok while searching
the net, and i'm not even sure if it's the correct procedure. so please, teach me how to separate the strings so i can use them in the if statements properly.
thanks for the help!
Last edited by nyekknyakk; 08-19-2010 at 02:00 AM.
Divide the terms into tokens.
50 pointers to char -> 50 pointers, char. That is: char* example[50]
Array of 50 arrays of char -> Array, 50 arrays, char. That is: char Example[N][50]
Where N is a positive number.
you mean to change char* cmd_words[50] to char cmd_words[50][50]?
ms. laserlight told me not to use global variables so if i combine your suggestions it will look like this:
#include <stdio.h>
#include <string.h>
void make_mat(void);
char get_cmd(void);
int main(void)
char cmd_words[50][50];
while (1)
if (strcmp(cmd_words[0], "MAKE") == 0)
else if (strcmp(cmd_words[0], "DISP") == 0)
printf("Invalid Command!");
/*Get Command Function*/
char get_cmd(void)
char cmd_str[50] = "\0";
char cmd_words[50][50];
int loop;
cmd_words[0] = strtok(cmd_str, " ");
if (cmd_words[0] == NULL)
for (loop = 1; loop < 50; loop++)
cmd_words[loop] = strtok(NULL, " ");
return *cmd_words[0];
/*Make Function*/
void make_mat(void)
int i, j;
char cmd_words[50][50];
if (strcmp(cmd_words[1], "A") == 0)
int mat_A[50][50];
int r1, c1;
printf("Enter rows: ");
scanf("%d", &r1);
printf("Enter columns: ");
scanf("%d", &c1);
for (i = 0; i < r1; i++)
for (j = 0; j < c1; j++)
printf("A[%d][%d] = ", i, j);
scanf("%d", &mat_A[i][j]);
else if (strcmp(cmd_words[1], "B") == 0)
int mat_B[50][50];
int r2, c2;
printf("Enter rows: ");
scanf("%d", &r2);
printf("Enter columns: ");
scanf("%d", &c2);
for (i = 0; i < r2; i++)
for (j = 0; j < c2; j++)
printf("A[%d][%d] = ", i, j);
scanf("%d", &mat_B[i][j]);
else if (strcmp(cmd_words[1], "C") == 0)
int mat_C[50][50];
int r3, c3;
printf("Enter rows: ");
scanf("%d", &r3);
printf("Enter columns: ");
scanf("%d", &c3);
for (i = 0; i < r3; i++)
for (j = 0; j < c3; j++)
printf("A[%d][%d] = ", i, j);
scanf("%d", &mat_C[i][j]);
but my compiler said that in function, char get_cmd(), lines 33 & 40 have incompatible types in assignment of `char *' to `char[50]'. what does that mean?
It simply means you cannot assign a pointer to an array.
An array is N elements successively from position 0, while a pointer simply points at some lone char. It doesn't make sense to be able to assign them.
strtok returns a pointer. The pointer points to the lone char at the start of a successive number of chars (an array).
I'm not sure what laserlight intended, but I'm sure laserlight can clarify further, if you wait a little.
i just wanted to know how to separate strings into the different words, variables or numbers (which serve as the commands), as long as they are separated by a space, because later on i will
have to deal with strings with 4 commands such as:
>>MMUL A B C
which means that matrix A will be multiplied to matrix B and the answer will be stored to matrix C or
>>SMUL 2 A B
which means that a scalar(int), 2, will be multiplied to matrix A and the answer will be stored to matrix B or
>>ADDR A 1 2 3
How about using "main(int argc, char *argv[])" to take in command line arguments?
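(A minimal sketch, added here rather than taken from the thread, of the char[50][50] approach being suggested above: copy each strtok token with strncpy instead of assigning the pointer to an array, and return how many words were found.)

#include <stdio.h>
#include <string.h>

/* Read one line and copy each space-separated word into its own row of
   cmd_words, so nothing is ever assigned to an array. Returns the word count. */
int get_cmd(char cmd_words[50][50])
{
    char cmd_str[200];
    int count = 0;

    if (fgets(cmd_str, sizeof cmd_str, stdin) == NULL)
        return 0;
    cmd_str[strcspn(cmd_str, "\n")] = '\0';       /* strip the trailing newline */

    char *tok = strtok(cmd_str, " ");
    while (tok != NULL && count < 50) {
        strncpy(cmd_words[count], tok, 49);
        cmd_words[count][49] = '\0';
        count++;
        tok = strtok(NULL, " ");
    }
    return count;
}

int main(void)
{
    char cmd_words[50][50];
    int i, n = get_cmd(cmd_words);
    for (i = 0; i < n; i++)
        printf("word %d: %s\n", i, cmd_words[i]);
    return 0;
}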
somewhere in this universe | {"url":"http://cboard.cprogramming.com/c-programming/129351-compare-pointer-integer-2.html","timestamp":"2014-04-18T14:05:54Z","content_type":null,"content_length":"64805","record_id":"<urn:uuid:8dc9a11f-d8de-49f6-ac74-6cdbb37749e3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
tangent bundle
Let $M$ be a differentiable manifold. Let the tangent bundle $TM$ of $M$ be(as a set) the disjoint union $\coprod_{{m\in M}}T_{m}M$ of all the tangent spaces to $M$, i.e., the set of pairs
$\{(m,x)|m\in M,x\in T_{m}M\}.$
This naturally has a manifold structure, given as follows. For $M=\mathbb{R}^{n}$, $T\mathbb{R}^{n}$ is obviously isomorphic to $\mathbb{R}^{{2n}}$, and is thus obviously a manifold. By the
definition of a differentiable manifold, for any $m\in M$, there is a neighborhood $U$ of $m$ and a diffeomorphism $\varphi:\mathbb{R}^{n}\to U$. Since this map is a diffeomorphism, its derivative is
an isomorphism at all points. Thus $T\varphi:T\mathbb{R}^{n}=\mathbb{R}^{{2n}}\to TU$ is bijective, which endows $TU$ with a natural structure of a differentiable manifold. Since the transition maps
for $M$ are differentiable, they are for $TM$ as well, and $TM$ is a differentiable manifold. In fact, the projection $\pi:TM\to M$ forgetting the tangent vector and remembering the point, is a
vector bundle. A vector field on $M$ is simply a section of this bundle.
The tangent bundle is functorial in the obvious sense: If $f:M\to N$ is differentiable, we get a map $Tf:TM\to TN$, defined by $f$ on the base, and its derivative on the fibers.
no label found | {"url":"http://planetmath.org/TangentBundle","timestamp":"2014-04-19T04:22:37Z","content_type":null,"content_length":"55423","record_id":"<urn:uuid:4b48eef0-23eb-484d-98f1-487fd4787fb5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
The distance between two points $\textbf{x}=(x_1,x_2,\dots,x_n)$ and $\textbf{y}=(y_1,y_2,\dots,y_n)$ in the classical Euclidean sense is
$\| x - y \| = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \dots + (x_n - y_n)^2}$
However, other measures are possible. On a grid (say, the distance to get from place to place in New York City), the distance is the lateral distance, since you can't go on diagonals:
$\| x - y \| = |x_1 - y_1| + \dots + |x_n - y_n|$
We can dream up scenarios in which we take the cube of each term, and the cube root overall, or any other power that we care to choose. We'll generalize, and define the p-norm of a vector $\mathbf{x}$ to be
$\|x\|_p = (|x_1|^p + \dots + |x_n|^p)^{\frac{1}{p}}$
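(A small numerical aside added here: the definition drops straight into code. The sketch below is C; the test vector is the forty-five-degree one used a few lines further down.)

#include <math.h>
#include <stdio.h>

/* p-norm of an n-vector: (|x_1|^p + ... + |x_n|^p)^(1/p) */
double p_norm(const double *x, int n, double p)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += pow(fabs(x[i]), p);
    return pow(sum, 1.0 / p);
}

int main(void)
{
    double x[2] = { 1.0 / sqrt(2.0), 1.0 / sqrt(2.0) };
    for (double p = 1.0; p <= 4.0; p += 1.0)
        printf("p = %.0f: ||x||_p = %f\n", p, p_norm(x, 2, p));
    /* prints 2^(1/p - 1/2): about 1.414, 1.000, 0.891, 0.841 */
    return 0;
}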
After you play with this for a while, you might ask, "what does the unit circle look like for some p"? We'll explore this in two dimensions where it's easy to see. Nothing new happens in higher dimensions.
p = 2 is our usual norm, and the unit circle is our usual circle. If we go to p = 1, our grid norm above, then we are plotting the equation $|x_1| + |x_2| = 1$ which is just four line segments
connecting the points (1,0), (0,1), (-1,0), and (0,-1). Here is a plot of the unit circles for p = 1,2,3,4:
To see the more general behavior, note that we can take the norm of a fixed vector x as a function of p. If we fix x = (1,0), or any of the other points on the axes, then the norm doesn't vary with p
. It's always 1. If we go at forty five degrees between the axes ($x=(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}})$), then we have
$\begin{matrix} \|x\|_p & = & (\frac{1}{\sqrt{2}^p} + \frac{1}{\sqrt{2}^p})^{\frac{1}{p}} \\ & = & (2\cdot\frac{1}{2^{p/2}})^{\frac{1}{p}} \\ & = & (2^{1-\frac{p}{2}})^{\frac{1}{p}} \\ & = & 2^{\frac{1}{p}-\frac{1}{2}} \end{matrix}$
Here is a plot of the function:
Remember, we have here a fixed vector: its value goes down, so in order to get a length of 1, we have to put in a longer vector. The unit circle thus heads steadily towards a square.
The limiting case is the $\infty$-norm, which is simply defined as $\|\mathbf{x}\|_\infty = \max_{i=1,\dots,n} |x_i|$, the largest component of the vector. A few moments' thought shows that
the unit circle is simply the square centered at the origin with side length two, and all sides parallel to the axes.
Given these insights into the unit circle, we can actually use this to get another characteristic of a vector. How much does it lie on the axes? Are only two of its components in this basis
important? Three?
To measure this, consider our vector $\mathbf{x}$. Take $-\frac{d\|\mathbf{x}\|_p}{dp}\Big|_{p=1}$.
What is this horrible object? Consider a simple case, where $\mathbf{x}$ has k components which are 1 and n − k components which are 0. Then
$\begin{matrix} - \frac{d\|\mathbf{x}\|_p}{dp} & = & -\frac{d}{dp} k^{\frac{1}{p}} \\ & = & \frac{1}{p} k^{\frac{1}{p}} \log k^{\frac{1}{p}} \end{matrix}$
When we evaluate this at p = 1, we get $-\frac{d\|\mathbf{x}\|_p}{dp}|_{p=1} = k\log k$, which is at least monotonic in k, and gives us an estimate of the number of components.
I was led to this weirdness by looking simple and suitable random variables to use in the Lockless-Ranganathan Formalism, to whit, how do you measure the amount of variation in a given nucleotide? | {"url":"http://www.openwetware.org/wiki/Madhadron:UnitCircleDeformed","timestamp":"2014-04-17T20:57:03Z","content_type":null,"content_length":"19126","record_id":"<urn:uuid:10a5f8b1-f098-406a-a0d6-7579ffb90ef5>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by donna on Friday, September 23, 2011 at 11:59pm.
You have heard about but never took seriously the Old Farmer’s Almanac claim that you can tell the temperature of the air by counting the number of cricket chirps in 1 minute. You decide to test the
claim so one night when the temperature was 76 degrees you counted the number of chirps of a cricket and found it was 855 in one minute. Not knowing the relationship between degrees and chirps, you
decide to develop your own. You purchase the finest thermometer and chirp recording device and over the next few months you develop the following data:
Temperature | Chirps in 1 minute
a) What is the linear regression equation that expresses the relationship between Temperature and Cricket Chirps in one minute (Temperature is the dependent variable)?
b) Is the equation statistically significant at alpha = 0.05?
c) What percentage of the variation in temperature is explained by the number of cricket chirps in one minute?
d) If one night you counted 890 cricket chirps in one minute, what do you believe the temperature to be?
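(Added note, not part of the original post; since the data rows of the table are not shown above, no numbers can be computed here.) With chirps $x$ and temperature $y$, the least-squares fit for part (a) is $\hat{y} = \hat{a} + \hat{b}x$ with
$\hat{b} = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2}, \qquad \hat{a} = \bar{y} - \hat{b}\,\bar{x}.$
Part (b) is the usual $t$ (or $F$) test of $\hat{b}$ against zero at $\alpha = 0.05$, part (c) is the coefficient of determination $R^2$ (the squared correlation between chirps and temperature), and part (d) is $\hat{y}$ evaluated at $x = 890$.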
Related Questions
Math - 8 Children divided 32 apples. Kitty took 1 apple, Mary took 2, Sherrie ... | {"url":"http://www.jiskha.com/display.cgi?id=1316836756","timestamp":"2014-04-20T09:26:46Z","content_type":null,"content_length":"9085","record_id":"<urn:uuid:4595d65f-ce74-4cf7-a64e-28de6efca8a4>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
the 0th dimension
We exist in the third dimension. You have infinitely thin paper in the second dimension. In the first dimension all you have is an infinitely thin, infinitely short, infinitely wide line.
In the zeroth dimension, there can only be 1 point, that is infinitely thin, short, and cramped. However, because there is only 1 single point, and no other, everything must exist at that point, whether a
1 dimensional, 2 dimensional, or 3 dimensional being. Because of that, we all share at least 1 point in the universe, with everything.
Update: I have since realized that we would not share one point in common -- we are comprised of many different points, thus of many different 0th dimensions. If we were all forced to fit inside a
0th dimensional space; then we would all share a point in common. | {"url":"http://everything2.com/title/the+0th+dimension","timestamp":"2014-04-17T13:19:19Z","content_type":null,"content_length":"24359","record_id":"<urn:uuid:66c8f49e-f1f9-4431-a680-4ca441ec5d54>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
An O(n log log n)-time algorithm for triangulating simple polygons
- In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA , 2006
Cited by 5 (1 self)
Motivated by an application in computational topology, we consider a novel variant of the problem of efficiently maintaining dynamic rooted trees. This variant requires merging two paths in a single
operation. In contrast to the standard problem, in which only one tree arc changes at a time, a single merge operation can change many arcs. In spite of this, we develop a data structure that
supports merges on an n-node forest in O(log 2 n) amortized time and all other standard tree operations in O(log n) time (amortized, worst-case, or randomized depending on the underlying data
structure). For the special case that occurs in the motivating application, in which arbitrary arc deletions (cuts) are not allowed, we give a data structure with an O(log n) time bound per
operation. This is asymptotically optimal under certain assumptions. For the even more special case in which both cuts and parent queries are disallowed, we give an alternative O(log n)-time solution
that uses standard dynamic trees as a black box. This solution also applies to the motivating application. Our methods use previous work on dynamic trees in various ways, but the analysis of each
algorithm requires novel ideas. We also investigate lower bounds for the problem under various assumptions. 1
- DEC Systems Research Center, Research Report , 1993
Cited by 4 (0 self)
Geometric algorithms and data structures are often easiest to understand visually, in terms of the geometric objects they manipulate. Indeed, most papers in computational geometry rely on diagrams to
communicate the intuition behind the results. Algorithm animation uses dynamic visual images to explain algorithms. Thus it is natural to present geometric algorithms, which are inherently dynamic,
via algorithm animation. The accompanying videotape presents a video review of geometric animations; the review was premiered at the 1992 ACM Symposium on Computational Geometry. The video review
includes single-algorithm animations and sample graphic displays from "workbench" systems for implementing multiple geometric algorithms. This report contains short descriptions of each video
segment. vi Preface This booklet and the accompanying videotape contain animations of a variety of computational geometry algorithms. Computational geometry has existed as a field for almost two
decades, and int...
- Pattern Recogn. Lett , 1991
Cited by 4 (1 self)
The Graham scan is a fundamental backtracking technique in computational geometry which was originally designed to compute the convex hull of a set of points in the plane and has since found
application in several different contexts. In this note we show how to use the Graham scan to triangulate a simple polygon. The resulting algorithm triangulates an n vertex polygon P in O(kn) time
where k-1 is the number of concave vertices in P. Although the worst case running time of the algorithm is O(n^2), it is easy to implement and is therefore of practical interest. 1. Introduction A
polygon P is a closed path of straight line segments. A polygon is represented by a sequence of vertices P = (p_0, p_1, ..., p_{n-1}) where p_i has real-valued x,y-coordinates. We assume that no three
vertices of P are collinear. The line segments (p_i, p_{i+1}), 0 <= i <= n-1 (subscript arithmetic taken modulo n), are the edges of P. A polygon is simple if no two nonconsecutive edges intersect. A simple
polygon part...
, 1996
Cited by 2 (0 self)
This paper presents a brief survey of some problems and solutions related to the triangulation of surfaces. A surface (a two dimensional manifold, in the context of this paper) can be represented as
a three dimensional function on a planar disk. In that sense, the triangulation of the disk induces a triangulation of the surface. Hence the emphasis of this paper is on triangulation on a plane.
Apart from the issues in triangulation, this survey talks about the known upper and lower bounds on various triangulation problems. It is intended as a broad compilation of known results rather than
an intensive treatise, and the details of most algorithms are skipped. 1 Introduction This survey assumes familiarity with the fundamental concepts of computational geometry. We define the
triangulation problem as follows: Input: (i) a set S of points, {p_i}, such that each p_i lies on the surface; (ii) a set of conditions, {C_i}. Output: a set S' of triples {(p_i1, p_i2, p_i3)}
such that e...
Cited by 2 (0 self)
al [x_{i-1}, x_{i+1}] that bridges x_i lies entirely in P. We say that two ears x_i and x_j are non-overlapping if int[x_{i-1}, x_i, x_{i+1}] ∩ int[x_{j-1}, x_j, x_{j+1}] = ∅. The following Two-Ears
was recently proved by Meisters [Me1]. Theorem 1: (the Two-Ears Theorem, Meisters [Me1]) Except for triangles every simple polygon P has at least two non-overlapping ears. Meisters' proof by
induction is both elegant and concise. However, given that a simple polygon can always be triangulated allows a one-sentence proof [O'R]. Leaves in the dual-tree of the triangulated polygon
correspond to ears and every tree of two or more nodes m
- Pattern Recognition Letters , 1989
Cited by 1 (0 self)
It remains as one of the major open problems in computational geometry, whether there exists a linear-time algorithm for triangulating a simple polygon P. Yet it is well known that a diagonal of P
can easily be found in linear time. In this note we show that an ear of P can be found in linear time. An ear is a triangle such that one of its edges is a diagonal of P and the remaining two edges
are edges of P. Applications of this result are indicated. 1. Introduction The triangulation of simple polygons has received much attention in the computational geometry literature because of its
many applications in such areas as pattern recognition, computer graphics, CAD and solid modeling. Nevertheless, it remains as one of the major open problems in computational geometry, whether there
exists a linear-time algorithm for triangulating a simple polygon P. The fastest algorithm to date is due to Tarjan & Van Wyk [TV] and runs in O(n log log n) time, where n is the number of vertices
of P. On th...
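(A concrete aside added here, not taken from any of the papers listed: a straightforward test of whether one vertex of a simple polygon is an ear, the primitive these results revolve around. It assumes the vertices are given in counter-clockwise order with no three collinear, and it costs O(n) per vertex, unlike the linear-total-time ear-finding of the paper above.)

#include <stdio.h>

typedef struct { double x, y; } Pt;

/* twice the signed area of triangle (a, b, c); positive means counter-clockwise */
static double cross(Pt a, Pt b, Pt c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

/* is p strictly inside the counter-clockwise triangle (a, b, c)? */
static int inside(Pt a, Pt b, Pt c, Pt p)
{
    return cross(a, b, p) > 0 && cross(b, c, p) > 0 && cross(c, a, p) > 0;
}

/* vertex i is an ear if it is convex and its triangle contains no other vertex */
int is_ear(const Pt *poly, int n, int i)
{
    Pt prev = poly[(i + n - 1) % n], cur = poly[i], next = poly[(i + 1) % n];
    if (cross(prev, cur, next) <= 0)
        return 0;                                  /* reflex (concave) vertex */
    for (int j = 0; j < n; j++) {
        if (j == i || j == (i + n - 1) % n || j == (i + 1) % n)
            continue;
        if (inside(prev, cur, next, poly[j]))
            return 0;
    }
    return 1;
}

int main(void)
{
    Pt square[4] = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };  /* counter-clockwise */
    printf("vertex 0 is %san ear\n", is_ear(square, 4, 0) ? "" : "not ");
    return 0;
}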
, 1995
Cited by 1 (0 self)
Abstract. To compute circular visibility inside a simple polygon, circular arcs that emanate from a given interior point are classified with respect to the edges of the polygon they first intersect.
Representing these sets of circular arcs by their centers results in a planar partition called the circular visibility diagram. An O(n) algorithm is given for constructing the circular visibility
diagram for a simple polygon with n vertices.
, 2006
This thesis introduces the concept of a heterogeneous decomposition of a balanced search tree and apply it to the following problems: • How can finger search be implemented without changing the
representation of a Red-Black Tree, such as introducing extra storage to the nodes? (Answer: Any degree-balanced search tree can support finger search without modification in its representation by
maintaining an auxiliary data structure of logarithmic size and suitably modifying the search algorithm to make use of this auxiliary data structure.) • Do Multi-Splay Trees, which is known to be O
(log log n)-competitive to the optimal binary search trees, have the Dynamic Finger property? (Answer: This is work in progress. We believe the answer is yes.)
Introduction. The one-dimensional channel compaction problem with automatic jog insertion may be described informally as follows. We are given n horizontal wire segments organized into t tracks.
Segments on the same track have the same y-coordinate and the ranges of their x-coordinates do not overlap. Each segment has a via at each end, connecting it to vertical wires on another layer. The
goal is to minimize channel height subject to design rule constraints on the distances between wires and between wires and vias. The relative vertical position of wires is not allowed to change
during compaction. Figure 1(a) shows an initial layout with 7 tracks. The horizontal wires to be compacted are shown as thin solid lines while vertical wires on another layer are thick dashed lines.
Dots represent vias. Relative vertical position must be maintained due to what are called vertical constraints (see e.g. [PL88]). The left part of net
, 1998
This paper describes two approaches to triangulate a simple polygon. Emphasis is on practical and easy to implement algorithms, especially the first algorithm is straightforward and intuitive but,
however, quite efficient. Further, it does not require the sorting or the use of balanced tree structures. Its worst running time complexity is O(n 2 ), but for special classes of polygons it runs in
linear time. The second approach requires some more sophisticated concepts of computational geometry but yields a better worst running time complexity of O(n log n). Both algorithms do not introduce
new vertices and triangulate in a greedy fashion, that is they never remove edges once inserted. Further, they are designed to find an arbitrary triangulation and they do not optimize the result in
any way. Keywords: computational geometry, polygon, triangulation, computational complexity, monotone polygon, trapezoidation 1 Introduction The problem of triangulating a polygon can be stated as: | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=814446&sort=cite&start=10","timestamp":"2014-04-19T01:49:07Z","content_type":null,"content_length":"38571","record_id":"<urn:uuid:719ea913-b386-4b7e-9834-b712feab2df9>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00542-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Polynomial Ring Automorphisms,
Rational (w, σ)-Canonical Forms,
and the Assignment Problem
Dedicated to the memory of Manuel Bronstein
(1963 2005)
S. A. Abramov
Russian Academy of Sciences
Dorodnicyn Computing Centre
Vavilova 40, 119991, Moscow GSP-1, Russia
M. Petkovsek
Faculty of Mathematics and Physics
University of Ljubljana
Jadranska 19, SI-1000 Ljubljana, Slovenia
We investigate representations of a rational function R ∈ k(x), where k is a field of characteristic zero, in the form R = K · σS/S. Here K, S ∈ k(x), and σ is an automorphism of k(x) which maps k[x] onto k[x]. We show that the degrees of the numerator and denominator of K are simultaneously minimized iff K = r/s, where r, s ∈ k[x] and r is coprime with σ^n s for all n ∈ Z. Assuming
existence of algorithms for computing orbital decompositions of R k(x) and semi-periods of ir- | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/588/4809127.html","timestamp":"2014-04-18T09:17:47Z","content_type":null,"content_length":"8075","record_id":"<urn:uuid:6cbd5db0-4feb-45c4-afaa-5c39a8566b5d>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/mailtoarko/asked","timestamp":"2014-04-21T02:37:09Z","content_type":null,"content_length":"75801","record_id":"<urn:uuid:ff64df28-df15-4f73-aa0a-e4d44986083f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why “N choose K”?
The name of the Knewton tech blog is “N choose K”. If it’s been a long time since you set foot in a math class, this may not have any meaning to you. However, it’s an interesting concept, and really
not all that difficult. In this post, I’ll go over what the term means in simple English and discuss how we use it here at Knewton.
Imagine you’re visiting the newest, trendiest lounge in New York. Like many such lounges, this one specializes in producing excellent cocktails from before the Prohibition era. You look over the menu
and see that they offer one drink for each type of alcohol: champagne, gin, rum, tequila, and vodka. You decide that you want to try all of them, in pairs rather than one at a time, to see how well
they go with each other. You want to know how many different pairs‚ (one for each hand) you can make with these 5 drinks.
One way of doing this would be to count each of them. To do this, we can label each of the drinks with their main ingredient: C (champagne), G (gin), R (rum), T (tequila) and V (vodka),
and systematically count each of the pairs (note that order of ingredients is not important): CG, CR, CT, CV, GR, GT, GV, RT, RV, TV,
for a total of 10 different pairs of drinks. Mathematically, this is known as n choose k, here "5 choose 2", and is written as C(5, 2).
It’s not too difficult to count each of these by hand. But now imagine that you’re an octopus at a different bar, which has 20 drinks, and you want to find out how many ways you can choose 8 drinks
out of 20, i.e. C(20, 8).
The great thing about n choose k is that there's a simple formula to figure it out: you just plug in your values for n and k, and out comes the answer. Here's the formula: C(n, k) = n! / (k! (n - k)!),
which is a lot less complicated than it seems. The exclamation point is known as a factorial, and you can find any factorial by multiplying together all the numbers from that number to 1. So, 3! is 3 x 2 x 1 = 6.
Let's try it on our original problem, how many ways can we choose 2 drinks out of 5? C(5, 2) = 5! / (2! 3!) = (5 x 4) / 2 = 10.
Note how we can make the calculations easier by canceling the top and bottom of the fraction when they're exactly the same.
Now with 20 drinks and 8 to choose, C(20, 8) = 20! / (8! 12!) = 125,970.
It’s a good thing octopi have a strong tolerance for liquor.
Of course, “n choose k” pertains to more than just mixing drinks at bars. In fact, you can see this in all sorts of applications, and often in exactly the sort of work we do here at Knewton. For
example, let’s say we have a graph that has only paths to adjacent nodes:
The question we need to answer is how many shortest paths are there from A to Z? (These are called lattice paths). There are algorithms that allow you to determine this exactly, but they require
polynomial time to complete. Instead, let’s solve an easier problem: What is the upper bound of the number of shortest paths from A to Z? So, what would the graph would look like where the upper
bound is the same as the actual number?
Well, a fully-connected graph would fit this scenario:
We can count the length of the path from A to Z by counting its steps:
Clearly we cannot have a shorter path than this, and so all shortest paths must have exactly 6 steps. The only independent variable then is the choice of which steps we take to the right; the
dependent variable is which steps we take down (it’s dependent because it is completely determined by the other variables). So in the example above:
Independent: steps 1, 2, 3, 4 are to the right
Dependent: steps 5, 6 are to the bottom (whatever is left)
Starting to sound familiar? It's the same as "n choose k"! Given the total number of steps, how many ways can we choose to go right? And so we get: C(6, 4) = 15.
More generally, for a j x k lattice grid, we can use the formula C(j + k, j),
where j is one of the dimensions of the grid and k is the other; which one you pick doesn’t matter, as you’ll see below.
Be aware, you might see this formula as
You might be interested in why we chose which to the right and which down; in other words, can we switch the independent variable? The answer is yes, and it doesn’t matter, and it’s pretty neat,
because as you could calculate: C(j + k, j) = C(j + k, k).
In fact, if you calculate a number of these and group them into a triangle shape, it results in a pattern known as Pascal’s triangle:
To find n choose k in the triangle, read off entry k in row n (counting both from 0).
These values are also known as binomial coefficients; if you’re interested in why, you can easily find more information on the web.
There are a couple neat patterns that Pascal’s triangle has. First, each number in the triangle is the sum of the two numbers above (you can think of the entire triangle surrounded by zeros if that
helps). This is pretty miraculous if you recall our original formula for determining n choose k.
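(An added aside: that sum-of-the-two-above rule is also the easiest way to compute these numbers in code without ever forming a large factorial. A small C sketch that fills in the triangle and reads off the values used earlier in the post:)

#include <stdio.h>

#define MAX_N 20

int main(void)
{
    /* c[n][k] holds "n choose k", built row by row with
       C(n, k) = C(n-1, k-1) + C(n-1, k). */
    unsigned long long c[MAX_N + 1][MAX_N + 1] = { 0 };
    for (int n = 0; n <= MAX_N; n++) {
        c[n][0] = 1;
        for (int k = 1; k <= n; k++)
            c[n][k] = c[n - 1][k - 1] + c[n - 1][k];
    }
    printf("5 choose 2  = %llu\n", c[5][2]);    /* 10: the pairs of drinks        */
    printf("20 choose 8 = %llu\n", c[20][8]);   /* 125970: the octopus's choices  */
    printf("6 choose 4  = %llu\n", c[6][4]);    /* 15: the shortest lattice paths */
    return 0;
}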
The second pattern has to do with its shape: if you take Pascal's triangle and color only the odd numbers, you get a Sierpinski triangle, which you may be familiar with: it's our n choose k blog logo.
Our work at Knewton allows us to play with ideas all day and turn those ideas into a reality. We chose the binomial coefficient as the name of this blog.
In this entry we discussed what n choose k means and how you can use it to find the number of ways groups of things can be chosen (and, perhaps, to limit your alcohol consumption). Then we found a simple formula to
calculate it. We saw a practical use of the formula in finding an upper bound of the number of shortest paths in a particular graph. Finally we looked at Pascal's Triangle and the relationships between
the binomial coefficients, aka the results of multiple n choose k computations. I hope you had some fun, learned a bit about math, and have something new to discuss at your next cocktail party.
Thanks to Kevin Wilson and Solon Gordon for their help with this post.
Hmm, loved your post. I wonder why there is so little interaction . . .
Really good post! Analogy helps more than anything to explain theory! | {"url":"http://www.knewton.com/tech/blog/2013/02/why-n-choose-k/","timestamp":"2014-04-17T15:27:39Z","content_type":null,"content_length":"33238","record_id":"<urn:uuid:321a3757-b6db-40f2-90f2-f830daa2d0d6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Stata not recommended for exact 2x2 test
Re: st: Stata not recommended for exact 2x2 test
From rgutierrez@stata.com (Roberto G. Gutierrez, StataCorp)
To statalist@hsphsun2.harvard.edu
Subject Re: st: Stata not recommended for exact 2x2 test
Date Mon, 10 Nov 2008 16:15:41 -0600
David Airey <david.airey@Vanderbilt.Edu> writes regarding an article
discussing software for the analysis of 2x2 tables.
> I was looking at the following citation, and found the author explicitly
> argued against using Stata for a particular 2x2 exact test. Here is the
> citation:
> Int J Epidemiol. 2008 Aug 18. [Epub ahead of print]
> Analysis of 2 x 2 tables of frequencies: matching test to experimental
> design.
> Ludbrook J.
> Department of Surgery, The University of Melbourne, Parkville, Victoria,
> Australia.
The main issue at hand is Stata's use of Fisher's exact test and confidence
intervals based on that test in the commands for the analysis of 2x2
epidemiological tables, namely commands -cc- for case control studies and -cs-
for cohort studies.
In the above-cited article, Ludbrook recommends against using Stata for these
types of analyses. His contention is not that Stata's implementation of
Fisher's exact test is incorrect, but instead that Fisher's test itself is not
always appropriate. In other words, if you do wish to perform Fisher's exact
test, use Stata and do so with confidence. That stated, we can now discuss
the attributes of Fisher's test with respect to the experimental-design
considerations made in that article.
In a Fisher's test of association in a 2x2 table, one conditions on both sets
of marginal totals (rows and columns). Ludbrook refers to this as "double
conditioning" and argues that this rarely mirrors experimental conditions. He
is right; it is difficult for us to imagine a trial where, say, we fix both
the total smokers/non-smokers _and_ the total cases/non-cases. Ludbrook cites
an example of double conditioning in a tea-tasting experiment, the conditions
of which further emphasize that such would almost never happen in biomedical
This of course then raises the question of why the Fisher test double
conditions at the analysis stage if that does not match the experiment. One
reason is computational simplicity -- conditioning on both row and column
totals reduces the size of the sample space making it easier to enumerate.
Another reason is theoretical: the totals are approximately ancillary (Yates
1984, pp. 447-449), and there are general arguments for conditioning on
ancillary statistics.
Does the fact that the conditioning does not match experimental conditions
make the method invalid? We don't think so. There exist many examples in
statistics where one gives away information -- conditions that is -- in order
to gain some sort of advantage. As just one example, in conditional
fixed-effects logistic regression, -clogit-, you stipulate the total group
successes in order to avoid having to estimate the group-level fixed effects.
Does that match the experimental conditions? On occasion (for example a
single-choice model), but certainly not always, and the method remains valid
even when the conditioning does not match the experimental conditions.
Such examples are numerous. In the context of 2x2 tables, conditioning allows
one to derive Fisher's exact test from any of four models: the four-Poisson
model, the two-binomial model, the multinomial, and the model for randomized
experiments (Cox 2006, pp. 52-54, 190). The four models (together with the
hypergeometric) justify Fisher's exact test for any of the three 2x2 study
designs: doubly-conditioned, singly-conditioned (for example randomized
clinical trials, as pointed out by Joseph Coveney) and unconditioned.
The conditioning in Fisher's test introduces no bias, but you can lose some
efficiency. How much efficiency you lose is commensurate with how much
information about row/column association is contained in the four marginal
totals. If the margins were orthogonal to relative internal cell sizes, there
would be nothing lost, but unfortunately they are not strictly orthogonal.
The literature is somewhat divided on exactly how much information is lost
through double conditioning (Plackett 1977; Barnard 1984), but our feeling is
that given the contention involved, it can't be much.
As computers get faster and new computational methods come about, it is
becoming more feasible to have software to supplement Fisher's test with
other exact tests that condition on only one set of marginal totals. We are
looking into adding such methods, and you can look forward to using them in
Stata at some point in the future. Of course, when that time comes you will
still have the option of using Fisher's test as well.
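(An editorial aside, not part of the original message: to make the object under discussion concrete, here is a sketch, in C rather than Stata, of the two-sided Fisher exact p-value for a 2x2 table, computed by summing the hypergeometric probabilities of every table with the observed margins that is no more probable than the observed one.)

#include <math.h>
#include <stdio.h>

static double log_fact(int n) { return lgamma(n + 1.0); }

/* probability of the table [[a,b],[c,d]] when both margins are held fixed */
static double hyper_prob(int a, int b, int c, int d)
{
    return exp(log_fact(a + b) + log_fact(c + d) + log_fact(a + c) + log_fact(b + d)
               - log_fact(a + b + c + d)
               - log_fact(a) - log_fact(b) - log_fact(c) - log_fact(d));
}

double fisher_exact_two_sided(int a, int b, int c, int d)
{
    double p_obs = hyper_prob(a, b, c, d), p = 0.0;
    int lo = (a - d > 0) ? a - d : 0;                   /* feasible range for  */
    int hi = (a + b < a + c) ? a + b : a + c;           /* the top-left cell   */
    for (int k = lo; k <= hi; k++) {
        double pk = hyper_prob(k, a + b - k, a + c - k, d - a + k);
        if (pk <= p_obs * (1.0 + 1e-7))                 /* tolerate float ties */
            p += pk;
    }
    return p;
}

int main(void)
{
    /* example table [[3,1],[1,3]]: p is about 0.486 */
    printf("p = %g\n", fisher_exact_two_sided(3, 1, 1, 3));
    return 0;
}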
We've discussed only the main point of Ludbrook's article: the use of
Fisher's exact test when both margins are not fixed by design. We are also
preparing an FAQ that discusses the article and Stata on a more point-by-point
basis. This FAQ will be available soon, at which time we will make an
announcement to the list.
-Bobby -Wes
rgutierrez@stata.com weddings@stata.com
Barnard, G. A. 1984. Comment on "Tests of significance for 2x2 contingency
tables" by F. Yates. Journal of the Royal Statistical Society Series A 147,
Cox, D. R. 2006. Principles of Statistical Inference. Cambridge: Cambridge
University Press.
Plackett, R. L. 1977. "The marginal totals of a 2x2 table." Biometrika 64,
Yates, F. 1984. "Tests of significance for 2x2 contingency tables." Journal
of the Royal Statistical Society Series A 147, 426-463.
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-11/msg00388.html","timestamp":"2014-04-20T15:56:53Z","content_type":null,"content_length":"10338","record_id":"<urn:uuid:586ce3ad-d9b9-4f20-b5a1-66deb238cbda>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Google Answers: Weight of a million dollars
Hello and thank you for your question.
According to the U.S. Treasury, "In $100 bills, the weight of $1
million is about 22 pounds." [that's 10 kg.]
"The size of a dollar bill is 6.6294 cm wide, by 15.5956 cm long, and
0.010922 cm in thickness."
But a more accurate figure for thickness is the actual US government
requirement for currency paper
"Thickness (Caliper). The thickness of the paper shall be 124 ± 7
micrometers when tested as specified in Section 4.2.3.5."
The same source puts the weight of the paper (without ink) or
"Grammage, grams per square meter" at 88.7 ± 4.0"
So, the area of a single bill is 6.6294 cm by 15.5956 cm which is
.066294 m by .155956 m = 0.0103389471 square meters
Since $1,000,000 requires 10,000 bills, the total area of the bills is
103.389471 square meters
and the total thickness is 124 micrometers * 10,000 = 1,240,000
micrometers = 1.24 meters.
So the height of a single stack is 1.24 meters.
And the volume of the stack is 103.389471 * 0.000124 = 0.01282 cubic meters
And since the total area of the bills is 103.389471 square meters, at
a weight of 88.7 grams per square meter, or .0887 * 103.389471 = 9.17 kg
[add another .83 kg for the ink and you're back to the Treasury's 10 kg figure.]
So in summary, you can have one stack of bills 1.24 meters = 48.82
inches high, or you can have, say, 4 stacks a little over a foot high,
22 pounds all together, or (if you prefer the metric system) 6 stacks
a little over 20 cm high, 10kg. all together.
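(An added aside, not part of the original answer: the same arithmetic collected into a small C program; the constants are the ones quoted above.)

#include <stdio.h>

int main(void)
{
    const double width = 0.066294, length = 0.155956;  /* bill size in meters      */
    const double caliper = 124e-6;                     /* paper thickness in meters */
    const double grammage = 88.7;                      /* paper weight in g/m^2     */
    const int bills = 10000;                           /* $1,000,000 in $100 bills  */

    double area = width * length * bills;              /* total area in m^2 */
    printf("single-stack height:   %.2f m\n", caliper * bills);
    printf("paper weight (no ink): %.2f kg\n", grammage * area / 1000.0);
    return 0;
}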
Search terms used:
"dollar bill" length width cm
treasury length width thickness site:.gov
and lots of Google arithmetic, for example
Thanks again for letting us help.
Google Answers Researcher | {"url":"http://answers.google.com/answers/threadview?id=441929","timestamp":"2014-04-21T02:01:20Z","content_type":null,"content_length":"11158","record_id":"<urn:uuid:3cf0e97a-0bcb-4b5f-8881-110242c9bae7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Formally Unknowability, or absolute Undecidability, of certain arithmetic
Replies: 22 Last Post: Jan 29, 2013 8:21 PM
Re: Formally Unknowability, or absolute Undecidability, of
Posted: Jan 28, 2013 8:20 AM
Nam Nguyen wrote:
> I meant, what would "tomorrow", "today" have anything to to with
> _mathematical logic_ ?
Oh, a lot. Look up 'temporal logic'. In my day it was something of a
curiosity of interest only to philosophers (hiss, boo, etc) but now it
is of much interest to computer scientists among others.
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
arithmetic formulas. | {"url":"http://mathforum.org/kb/message.jspa?messageID=8180842","timestamp":"2014-04-19T12:59:26Z","content_type":null,"content_length":"43864","record_id":"<urn:uuid:6016afd1-34b2-4087-be64-9efa98da4490>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ray Casting / Game Programming Tutorial - Page 7
To do this, we need to check any grid intersection points that are encountered by the ray; and see if there is a wall on the grid or not. The best way is to check for horizontal and vertical
intersections separately. When there is a wall on either a vertical or a horizontal intersection, the checking stops. The distance to both intersection points is then compared, and the closer
distance is chosen. This process is illustrated in the following two figures.
Figure 15
Steps of finding intersections with horizontal grid lines:
1. Find coordinate of the first intersection (point A in this example).
2. Find Ya. (Note: Ya is just the height of the grid; however, if the ray is facing up, Ya will be negative, if the ray is facing down, Ya will be positive.)
3. Find Xa using the equation given above.
4. Check the grid at the intersection point. If there is a wall on the grid, stop and calculate the distance.
5. If there is no wall, extend the ray to the next intersection point. Notice that the coordinate of the next intersection point (call it (Xnew,Ynew)) is Xnew=Xold+Xa, and Ynew=Yold+Ya.
As an example the following is how you can get the point A:
Note: remember the Cartesian coordinate is
increasing downward (as in page 3), and
any fractional values will be rounded down.
======Finding horizontal intersection ======
1. Finding the coordinate of A.
If the ray is facing up
A.y = rounded_down(Py/64) * (64) - 1;
If the ray is facing down
A.y = rounded_down(Py/64) * (64) + 64;
(In the picture, the ray is facing up, so we use
the first formula.
A.y=rounded_down(224/64) * (64) - 1 = 191;
Now at this point, we can find out the grid
coordinate of y.
However, we must decide whether A is part of
the block above the line,
or the block below the line.
Here, we chose to make A part of the block
above the line, that is why we subtract 1 from A.y.
So the grid coordinate of A.y is 191/64 = 2;
A.x = Px + (Py-A.y)/tan(ALPHA);
In the picture, (assume ALPHA is 60 degrees),
A.x=96 + (224-191)/tan(60) = about 115;
The grid coordinate of A.x is 115/64 = 1;
So A is at grid (1,2) and we can check
whether there is a wall on that grid.
There is no wall on (1,2) so the ray will be
extended to C.
2. Finding Ya
If the ray is facing up, Ya = -64;
If the ray is facing down, Ya = 64;
3. Finding Xa
Xa = 64/tan(60) = 36;
4. We can get the coordinate of C as follows:
C.x=A.x+Xa = 115+36 = 151;
C.y=A.y+Ya = 191-64 = 127;
Convert this into grid coordinate by
dividing each component with 64.
The result is
C.x = 151/64 = 2 (grid coordinate),
C.y = 127/64 = 1 (grid coordinate)
So the grid coordinate of C is (2, 1).
(C programmer's note: Remember we always round down,
this is especially true since
you can use right shift by 6 to divide by 64).
5. Grid (2,1) is checked.
Again, there is no wall, so the ray is extended
to D.
6. We can get the coordinate of D as follows:
D.x=C.x+Xa = 151+36 = 187;
D.y=C.y+Ya = 127-64 = 63;
Convert this into grid coordinate by
dividing each component with 64.
The result is
D.x = 187/64 = 2 (grid coordinate),
D.y = 63/64 = 0 (grid coordinate)
So the grid coordinate of D is (2, 0).
6. Grid (2,0) is checked.
There is a wall there, so the process stops.
(Programmer's note: You can see that once we have the value of Xa and Ya, the process is very simple. We just keep adding the old value with Xa and Ya, and perform shift operation, to find out the
grid coordinate of the next point hit by the ray.)
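(A compact C sketch of the loop just described, added here rather than taken from the tutorial; it handles only the case worked in the example, a ray facing up and to the right, and the 8x8 map layout is a made-up placeholder.)

#include <math.h>
#include <stdio.h>

#define CELL       64
#define MAP_WIDTH  8
#define MAP_HEIGHT 8

int map[MAP_HEIGHT][MAP_WIDTH];          /* 0 = empty, nonzero = wall */

/* Walk the horizontal grid lines for a ray facing up and to the right.
   Returns 1 and the hit point when a wall is found, 0 if the ray leaves the map. */
int cast_horizontal_up_right(double px, double py, double alpha_rad,
                             double *hx, double *hy)
{
    double ay = floor(py / CELL) * CELL - 1;       /* first horizontal grid line */
    double ax = px + (py - ay) / tan(alpha_rad);   /* where the ray crosses it   */
    double ya = -CELL;                             /* one cell up per step       */
    double xa = CELL / tan(alpha_rad);             /* and this far to the right  */

    while (ax >= 0 && ax < MAP_WIDTH * CELL && ay >= 0) {
        int gx = (int)ax / CELL, gy = (int)ay / CELL;    /* grid coordinates */
        if (map[gy][gx]) { *hx = ax; *hy = ay; return 1; }
        ax += xa;                                        /* extend the ray   */
        ay += ya;
    }
    return 0;
}

int main(void)
{
    const double deg = 3.14159265358979323846 / 180.0;
    double hx, hy;
    map[0][2] = 1;                       /* the wall at grid (2,0) in the example */
    if (cast_horizontal_up_right(96, 224, 60 * deg, &hx, &hy))
        printf("hit at (%.0f, %.0f), grid (%d, %d)\n",
               hx, hy, (int)hx / CELL, (int)hy / CELL);
    return 0;
}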
Steps of finding intersections with vertical grid lines:
1. Find coordinate of the first intersection (point B in this example).
The ray is facing right in the picture, so B.x = rounded_down(Px/64) * (64) + 64.
If the ray had been facing left B.x = rounded_down(Px/64) * (64) - 1.
B.y = Py + (Px-B.x)*tan(ALPHA);
2. Find Xa. (Note: Xa is just the width of the grid; however, if the ray is facing right, Xa will be positive, if the ray is facing left, Xa will be negative.)
3. Find Ya using the equation given above.
4. Check the grid at the intersection point. If there is a wall on the grid, stop and calculate the distance.
5. If there is no wall, extend the ray to the next intersection point. Notice that the coordinate of the next intersection point (call it (Xnew,Ynew)) is just Xnew=Xold+Xa, and Ynew=Yold+Ya.
In the picture, first, the ray hits point B. Grid (2,2) is checked. There is no wall on (2,2), so the ray is extended to E. Grid (3,0) is checked. There is a wall there, so we stop and calculate the distance.
In this example, point D is closer than E. So the wall slice at D (not E) will be drawn | {"url":"http://www.permadi.com/tutorial/raycast/rayc7.html","timestamp":"2014-04-21T14:42:45Z","content_type":null,"content_length":"9188","record_id":"<urn:uuid:af233664-cf28-49ac-91c3-78271d168897>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fair Lawn Calculus Tutor
Find a Fair Lawn Calculus Tutor
...I have a diagnostic book for that purpose. I also review homework and take students a notch higher in their current math topic. I always give parents feedback at the end of every
session. By the way, I also tutored for a semester in the Math Lab at Passaic County Community College. I will also help students with homework other than Math at no extra cost during our math session.
7 Subjects: including calculus, algebra 1, algebra 2, trigonometry
...I would love to be given the opportunity to prove it to you. I can tutor almost any Math subject but my favorite discipline to tutor is Physics. I am very enthusiastic about learning HOW
things work.
15 Subjects: including calculus, chemistry, physics, geometry
...Practice makes Perfect. I tutor all my students to the best of my ability and I take pride in my tutoring and my future teaching. I am currently receiving my master's from Montclair State
University in elem ed k-6. I am graduating this May.
12 Subjects: including calculus, statistics, algebra 2, SAT math
...For students whose goal is to learn particular subjects, I make sure that the student understands the basics prior to delving into the details. In a nutshell, I provide tutoring based on the
student's need. Thank you for your time reading this profile!
15 Subjects: including calculus, chemistry, geometry, statistics
...I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. I have a lot of experience tutoring physics and math at all levels. I have been
tutoring since high school so I have more than 10 years of experience, having tutored students of all ages, starting from elementary school all the way to college-level.
11 Subjects: including calculus, Spanish, physics, geometry
Woodland Park, NJ calculus Tutors | {"url":"http://www.purplemath.com/Fair_Lawn_calculus_tutors.php","timestamp":"2014-04-20T20:01:22Z","content_type":null,"content_length":"23990","record_id":"<urn:uuid:4c7554af-f904-4833-b99c-97fa7714b023>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nonnegative to Positive Curvature.
This questions asks for your intuition and insight as I'm surprised by how little is known about the difference between nonnegative and positive curvature. I don't want to be completely vague, so I
could ask: What are the difficulties and currently blocked paths to solving the Hopf Conjecture? (Does $S^2\times S^2$ support a metric of positive curvature?). But in general, I would like to know
what others might know on why it's difficult to determine if a given closed simply-connected space of nonnegative curvature can also admit positive curvature. As far as I know, there are no
obstructions, how come? The amount of examples of nonnegative curvature compared to that of examples of positive curvature seem to suggest there should be something distinguishing the two.
riemannian-geometry dg.differential-geometry open-problem
Interesting question. Is there any simply-connected closed manifold which is known to have nonnegative but not positive sectional curvature? – Johannes Ebert Dec 1 '10 at 20:50
This is a well-known open (and fundamental) problem. A recent survey is at front.math.ucdavis.edu/0701.5389. – Igor Belegradek Dec 1 '10 at 21:15
1 Answer
Yau asked in 1982 if there is any compact simply connected manifold with nonnegative curvature for which one can prove that it does not admit a metric of positive curvature. This question
opens his list of unsolved problems in geometry (see "Seminar on Differential Geometry", p. 670.)
Let me quote from "A Panoramic View of Riemannian Geometry" by Berger (Springer 2003, p. 579):
It is not surprising that many people tried to address Yau’s remark, starting with the Hopf conjecture on $S^2 × S^2$, by trying to deform such a metric with $K ≥ 0$ into one with $K >
0$. This means considering some one parameter family $g(t)$ of metrics and computing the various derivatives at $t = 0$ of the sectional curvature. Technically it is very easy to
compute such a derivative for a given tangent plane, but what is difficult is to find a variation for which all the derivatives would be positive. Today this approach still does not work.
A major difficulty is that it is not clear how to find the critical set of the sectional curvature (as a function on the set of tangent planes).
The earlier short survey by Bourguignon contains a discussion of the reasons why some seemingly natural approaches fail.
More recent remarks on Hopf conjecture (about existence of positively curved metric on $S^2\times S^2$) can be found in the survey by Wilking: front.math.ucdavis.edu/0707.3091. This
problem is somewhat poisoned as many talented people thought hard of it with little or no progress. – Igor Belegradek Dec 1 '10 at 23:55
I forgot to say that the remarks are hidden on page 26. – Igor Belegradek Dec 1 '10 at 23:56
@Igor Belegradek: thank you for the comments and reference. – Andrey Rekalo Dec 2 '10 at 6:36
| {"url":"http://mathoverflow.net/questions/47942/nonnegative-to-positive-curvature?sort=newest","timestamp":"2014-04-21T07:44:20Z","content_type":null,"content_length":"59391","record_id":"<urn:uuid:75162f38-671b-46a1-bd3d-0d03d9c9396c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
The deterministic lasso
Seminar Room 1, Newton Institute
We study high-dimensional generalized linear models and risk minimization using the Lasso. The risk is taken under a random probability measure P' and the target is an overall minimizer of the risk
under some other nonrandom probability measure P. We restrict ourselves to a set S where P' and P are close to each other, and present an oracle inequality under a so-called compatibility condition
between the L_2 norm and l_1 norm.
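For orientation, the estimator being discussed is the $\ell_1$-penalized risk minimizer; in a generic formulation (a sketch of the standard setup, not text from the talk):
$$\hat{\beta} = \arg\min_{\beta}\Big\{ R_{P'}(f_\beta) + \lambda \|\beta\|_1 \Big\},$$
and the oracle inequality bounds the excess risk of $f_{\hat\beta}$ under the nonrandom measure $P$ on the set $S$ where $P'$ and $P$ are close, provided the compatibility condition linking the $L_2$ norm and the $\ell_1$ norm holds.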
| {"url":"http://www.newton.ac.uk/programmes/SCH/seminars/2008010711301.html","timestamp":"2014-04-20T10:51:19Z","content_type":null,"content_length":"6169","record_id":"<urn:uuid:7828349e-b794-47f7-aa51-a8b73376a24a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Are laws of nature really the same in all reference frames?
As you must know, this question is incomplete.
Speed is measured between two points and you've only listed one point for each of the two speeds you are looking for.
We have “two points”...
We have an orbit and thereby a circumference of the Milky Way (MW).
You can bend the orbit to a straight line.
So you do have “two points”.
A starting point that is also the final point when the orbit is completed.
You need to specify what you are measuring the speeds relative to and who is doing the measuring.
This is done too.
The Sun travels relative to a point of no motion, which is the center of the Milky Way.
The 2 clocks are, according to the example mentioned above, following the exact same orbit as the Sun.
Do both observers A and B agree the speed is the same?
Maybe. But again, this isn't really all that important. What is important is this:
If both observers faithfully follow the principle of Relativity as stated in your title and, as it requires, ensure they are clear and consistent about what frames of reference they are doing the
measurements from, or measure from one and properly transform to the other, they will agree on what is happening.
First of all, notice we are only speaking about influence due to gravity (GR), not about SR.
The orbits of the 2 clocks are exactly the same (for all observers).
Observer A and B are doing the measurement from their own reference frame.
B's clock is really ticking slower than A's clock, because B is closer to the Sun than A and therefore comparably slower than A's clock.
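For reference, a standard weak-field formula (added as a sketch for orientation, not taken from the original posts): a static clock at distance $r$ from a mass $M$ runs at the rate
$$\frac{d\tau}{dt} \approx \sqrt{1 - \frac{2GM}{r c^{2}}}$$
relative to far-away coordinate time, so a clock deeper in the Sun's potential (smaller $r$) accumulates less proper time; for orbiting clocks there is an additional velocity-dependent correction on top of this.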
Nothing prevents A and B from comparing time differences.
Think about it: how do we determine how long 1 meter is, or what the speed of light is?
Both observers A and B would determine that the exact same way, wouldn't they?
Again: If they follow the principle of Relativity and are clear on the choices of reference frames (and the definition of a "meter") they most certainly will.
They might say: "From here, your meter looks smaller than mine,
If A's meter stick is comparably shorter than B's, it will not only "look" shorter.
B's reality is real, and A's (or our) reality is also real.
Therefore B's meter stick will really be shorter.
We are not speaking about "illusion" but about realities.
but since I know our relative speeds, I calculate that if I was to go over to you and measure your meter it would be the same as mine."
Now you are speaking of SR.
Both A and B are, according to the example, in the same SR reference frame, since both exactly follow the motion of the Sun; hence SR does not apply, only GR does.
Let me ask more simply and generally.
Imagine you were orbiting the Sun with a meter stick 50 billion km from the Sun.
I was orbiting 150 billion km from the Sun, also carrying a meter stick.
Would both meter sticks comparably be the same length?
What I am asking is: is 1 meter the exact same length, if an observer far away (not affected by the gravity of the Sun) could see both meter sticks and was able to compare whether our meter sticks have the exact same length, so long as we are at different places in the gravitational field of the Sun?
Now let's say that time in your orbit is 1 billionth slower for you, compared to my time rate.
Would your meter stick then correspondingly be 1 billionth longer (or shorter), or exactly the same as mine, still as seen from an observer C far away and not affected by the gravity of the Sun?
The wording of the question violates Relativity by mixing and matching observations from different reference frames without properly accounting for the differences.
Comparing relative differences, no matter whether we speak about time rate, speed or length, is not necessarily mixing these, and this is also not what I have done at all.
I am not mixing anything but asking what the difference in speed and distance is between A's and B's reality according to the example, if any?
There must be a very simple answer to that question.
If you would say there is no difference between the reality of A and B (except time), simple math would show you a mathematical contradiction, since time multiplied with speed cannot possibly result in the same distance for A and B (since time for B is shorter).
So I am in fact trying to prove mathematically that either speed or distance cannot comparably be the same.
So what is the answer here?
Is speed comparably larger, or is it distance that is proportionally and comparably shorter (and therefore the meter stick proportionally longer)?
If nothing proves that (comparable) speed is affected (and hence comparably different), and you multiply less time (for B) with the same (comparable) speed that is valid for A, you will get a shorter distance for B.
So if you have no objection that we assume that speed is (comparably) the same for A and B, then it is mathematically proven that the distance (circumference of the MW) for A and B cannot comparably be the same.
If you do not agree speed is comparably the same, what is the correct comparable speed for B?
Your assumption that I am mixing realities is not true. I am ONLY comparing realities, and do in fact try to keep factors separated, by asking what the difference is, except time.
Is speed and/or distance comparably different according to the very simple example mentioned?
Please try to keep it simple. | {"url":"http://www.physicsforums.com/showthread.php?p=3535158","timestamp":"2014-04-20T00:53:17Z","content_type":null,"content_length":"94520","record_id":"<urn:uuid:68b42e2a-9b39-425c-b17f-2ab561943883>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Writing Algebraic Equations(Help Me)
Writing Algebraic Equations(Help Me)
An apple grower finds that an apple tree produces on an average 400 apples per years, if no more than 16 trees are planted in a limit area. For each additional tree planted per unit area, the
grower finds that the yield decreases by 20 apples per tree per years. How many trees should the grower plant per unit area so as to get the maximum yield?
Please teach me how to form the equations... don't just give me the answer... Thanks
any1 help me plz?
yield = trees x apples
trees = $(16 + x)$ since you start with 16 and add x trees.
apples = $(400 - 20x)$ because each tree lose 20 apples from the initial 400 for every new tree x.
yield = $(16 + x)(400 - 20x) = -20x^2+80x+6400$
I assume you don't know calculus, so you graph the yield function and you will see that it has a peak at $x = 2$. Therefore the grower should plant $16+2=18$ trees. His yield is then $-20(2)^2+80(2)+6400=-80+160+6400=6480$ apples.
I appreciate what u had done for my question. But, according to my teacher, calculus is used to find the maximum yield. Therefore, I need a cubic (x^3) equation in order to use d^2y/dx^2 to find the maximum yield.
I am going to have an exam tomorrow.... there should be a same type of question in the exam paper, if some1 knows how to form an equation out of this, please spend a little time on the question.
The equation for the yield is quadratic, not cubic, and you don't need a second derivative to find extrema. In this case, the second derivative is the constant $-40$, which guarantees that there
is only one critical point and it will be an absolute maximum.
$\frac{d(yield)}{dx}=-40x+80=0 \implies 80=40x \implies x=2$
$x=2$ is then a critical point. Again, it is the only critical point. You can tell it is a maximum without the second derivative test because if you plug in $x=1$ into $-40x+80$, you get a
positive number, and when you plug in $x=3$ into $-40x+80$, you get a negative number. This means the function is increasing before 2 and decreasing after 2. Once again, this means the farmer
should grow $16+2=18$ trees per unit area.
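A quick numerical check of this result (my own sketch in Python, not part of the thread; the names are just illustrative):

```python
def total_yield(x):
    """Yield when x extra trees are planted beyond the initial 16 per unit area."""
    trees = 16 + x
    apples_per_tree = 400 - 20 * x
    return trees * apples_per_tree

# Try a small range of integer choices and pick the best one.
best_x = max(range(0, 10), key=total_yield)
print(best_x, 16 + best_x, total_yield(best_x))   # 2, 18 trees, 6480 apples
```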
I had solved the question.
| {"url":"http://mathhelpforum.com/algebra/28609-writing-algebraic-equations-help-me.html","timestamp":"2014-04-16T17:26:11Z","content_type":null,"content_length":"48714","record_id":"<urn:uuid:24f67a63-663d-41d0-bcd2-de036a9bac61>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum field theory
In a sigma-model quantum field theory a field configuration is a morphism $\phi : \Sigma \to X$ for $\Sigma$ an $n$-dimensional manifold or similar. One is to think of this as being the trajectory of
an $(n-1)$-brane propagating in the target space $X$.
For the case $n = 1$ (for instance the relativistic particle, a 0-brane) the term worldline for $\Sigma$ has a long tradition. Accordingly one calls $\Sigma$ the worldvolume of the given $(n-1)$-
brane when $n \gt 1$. For the case $n=2$ (the case of relevance in string theory) one also says worldsheet.
| {"url":"http://ncatlab.org/nlab/show/worldvolume","timestamp":"2014-04-18T23:15:38Z","content_type":null,"content_length":"45260","record_id":"<urn:uuid:77329d1d-930f-4f7c-a6d0-9e40c9c2d22e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control
Journal of Robotics
Volume 2010 (2010), Article ID 908132, 11 pages
Research Article
Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control
^1Department of Biomedical Engineering, Graduate School of Biomedical Engineering, Tohoku University, Sendai 980-8579, Japan
^2Department of Electrical and Communication Engineering, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan
Received 29 October 2009; Accepted 18 April 2010
Academic Editor: Noriyasu Homma
Copyright © 2010 Takashi Watanabe and Yoshihiro Sugi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
A feedforward controller would be useful for a hybrid Functional Electrical Stimulation (FES) system using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) with an inverse dynamics model (IDM) to realize a feedforward FES controller. For FES application, the ISM was tested in
learning off line using training data obtained by PID control of very slow movements. Computer simulation tests in controlling wrist joint movements showed that the ISM performed properly in
positioning task and that IDM learning was improved by using the ISM showing increase of output power ratio of the feedforward controller. The simple ISM learning method and the FEL-FES controller
using the ISM would be useful in controlling the musculoskeletal system that has nonlinear characteristics to electrical stimulation and therefore is expected to be useful in applying to hybrid FES
system using powered orthotic device.
1. Introduction
Functional electrical stimulation (FES), which applies electric current or voltage pulses to peripheral nerves and muscles, is a method of restoring or assisting motor functions lost by the spinal
cord injury or the cerebrovascular disease. FES has been found to be effective clinically, especially in controlling paralyzed upper limbs [1–3]. For restoring lower limb functions, the hybrid FES
system, which uses an orthosis with FES, has been accepted as one of practical methods [4, 5].
In the recent years, powered orthotic devices or robotic exoskeletons have been focused on an assist or rehabilitation of lower limb functions [6, 7]. Therefore, the hybrid FES system is also
expected to be realized with powered orthotic devices. In such system, cooperative control between FES and powered orthosis will be necessary. Feedforward control scheme would be useful for
controlling fast movements of lower limbs in tracking to movements developed by the powered orthosis because control performance of a feedback controller is limited by large time delay and time
constant in responses of electrically stimulated muscles. However, complex, time-consuming adjustment of many parameters of the feedforward controller such as creating stimulation data for a lot of
muscles and time-varying properties of the musculoskeletal system make it difficult to use practically the feedforward FES controller in clinical application.
The Feedback Error Learning (FEL) proposed by Kawato et al. [8, 9] can realize a feedforward controller by learning inverse dynamics of controlled object. The FEL will be useful in FES control
because it can learn nonlinear characteristics of the musculoskeletal system to electrical stimulation and can remove the problem of manual adjustment of controller parameters by medical staffs in
applying to various subjects that have different characteristics of the musculoskeletal system.
In order to apply the FEL, a feedback controller is required. The multichannel feedback FES controller has to solve the ill-posed problem in regulating stimulation intensities because the number of
stimulated muscles is larger than that of controlled joint angles. The feedback FES controller based on the Proportional-Integral-Derivative (PID) control algorithm that we developed could provide a
way of solving the ill-posed problem [10, 11]. In our previous work, the FEL controller for FES (FEL-FES controller) using the PID controller was found to be feasible in controlling
1-Degree-Of-Freedom (1-DOF) of wrist joint movement (dorsi- and palmar flexions) stimulating 2 muscles [12].
The FEL-FES controller makes it possible to use both the feedforward and feedback controllers, which is an advantage for the cooperative control between FES and powered orthosis in the hybrid FES
system. Therefore, we performed preliminary test to expand the previous FEL-FES controller into controlling 2-DOF movements stimulating 4 muscles through computer simulation. However, the previous
FEL-FES controller had a problem in learning the inverse dynamics. That is, learning the inverse dynamics model (IDM) in the previous FEL-FES controller sometimes failed.
Since a major problem in applying the FEL to FES is inappropriate learning of the IDM in FES control, a modification of the FEL-FES controller was discussed through computer simulation before testing
with human subjects and applying the controller to hybrid FES system in this paper. In the previous FEL-FES controller, the IDM was only used for the feedforward controller since learning an inverse
statics model (ISM) was not easy in clinical applications of FES because of difficulty in acquiring training data, while the FEL controller by Kawato was composed of the ISM and the IDM.
In this paper, in order to include the ISM into the feedforward controller, a simple measurement method of training data for the ISM was introduced considering FES applications. The ISM learning and
the modified FEL-FES controller including the ISM were examined in wrist joint movement control by computer simulations in order to be compared to our previous work.
2. Feedback Error Learning Controller for FES
2.1. Outline
A block diagram of the feedback error learning controller for FES examined in this study is shown in Figure 1. The sum of output stimulation intensities from feedforward controllers (ISM and IDM) and
a feedback controller is applied to each muscle after adding offset (threshold value of electrical stimulation intensity) and clipping out with the limiter to prevent excessive stimulation.
The PID controller outputs positive and negative values of stimulation intensity for each muscle to cancel out the difference between the desired joint angle () and the actual angle () during
movement control. The outputs were also used in IDM learning on line.
Two three-layered artificial neural networks (ANNs) were used for ISM and IDM. The IDM and the ISM output positive values of stimulation intensity to each muscle calculated from the desired joint angle, while the IDM also uses the first and second derivatives of the desired angle. The ISM is trained off line before IDM learning, and then the IDM is trained on line using outputs of the feedback controller.
2.2. Feedforward Controller
The structure of ANN for the IDM is shown in Figure 2. The input data of the desired joint angle and its first and second derivatives at continuous 6 times, from to , (50ms interval) in the
directions of dorsi/palmar flexion () and radial/ulnar flexion () were given simultaneously. Outputs were stimulation intensities to 4 muscles. Therefore, the numbers of neurons in the IDM were 36
for the input layer and 4 for the output layer. That for the hidden layer was 18, which was determined based on our previous results [12].
The output of each neuron in the hidden and the output layers was defined as a weighted sum of the outputs of the neurons in the previous layer plus a bias term c, passed through the neuron's output function; the weights are the connection weights from the neurons in the previous layer, and i indexes the neurons in that layer. The output function of the neuron is the sigmoid function.
The IDM was trained on line by the error backpropagation algorithm [13, 14] using outputs of the PID controller. ANN connection weights are changed to reduce the total error E, which is computed from the difference between the desired stimulation intensity and the stimulation intensity of the IDM; a learning speed coefficient affects the convergence speed of learning. Following the feedback error learning scheme, this difference is approximated by the stimulation intensity of the PID controller.
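As a concrete illustration of this update rule, here is a minimal numerical sketch (my own code, not the authors'; the 36-18-4 layer sizes follow the paper, while the logistic sigmoid, the squared-error objective, and all variable names are assumptions):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (18, 36)); b1 = np.zeros(18)   # input -> hidden
W2 = rng.normal(0.0, 0.1, (4, 18));  b2 = np.zeros(4)    # hidden -> output (4 muscles)
eta = 0.05                                               # learning speed coefficient

def idm_forward(x):
    """x: 36 inputs built from the desired trajectory and its derivatives."""
    h = sigmoid(W1 @ x + b1)
    s = sigmoid(W2 @ h + b2)        # feedforward stimulation intensities
    return h, s

def fel_update(x, s_pid):
    """One feedback-error-learning step: the PID output s_pid (4 values)
    stands in for the unknown error between desired and IDM stimulation."""
    global W1, b1, W2, b2
    h, s = idm_forward(x)
    delta2 = s_pid * s * (1.0 - s)            # output-layer delta
    delta1 = (W2.T @ delta2) * h * (1.0 - h)  # backpropagated hidden-layer delta
    W2 += eta * np.outer(delta2, h); b2 += eta * delta2
    W1 += eta * np.outer(delta1, x); b1 += eta * delta1
```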
The ISM was trained off line before the IDM learning by using the error backpropagation algorithm. The three-layered ANN that had 2 neurons for the input layer, 18 and 4 for the hidden and the output
layers, was used for the ISM. The ISM and the PID controller output stimulation during control for IDM learning, although outputs of the ISM were not used for IDM learning.
2.3. Feedback Controller
The following PID control algorithm was used in the FEL-FES controller as the feedback controller, where the error vector e(n) is defined as the difference between the desired and measured joint angle vectors at time n. The PID parameter matrices (the proportional, integral, and derivative gains) were determined by modifying the Chien, Hrones, and Reswick (CHR) method, and their elements were expressed in terms of L[i] and T[i], the latency and the time constant of the step response of muscle i when the response is approximated by a first order delay with latency, and t, the sampling period [10]. In case a muscle has two or more functions (j shows the index of the function), the delay time and the time constant obtained for all components in a movement were averaged, respectively. The coefficient corresponds to a reciprocal of the steady state gain of the system, which is calculated as an element of a generalized inverse matrix of a transformation matrix M. The matrix M transforms the change of the stimulation intensity vector into the change of the joint angle vector. The calculation method of the coefficient is shown in Appendix A.
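A generic discrete multichannel PID of the kind described might look like this (a sketch under assumed notation; the paper's exact control law and CHR-based gain values are not reproduced, and the 4-by-2 gain matrices below are placeholders mapping the two angle errors to stimulation changes for the four muscles):

```python
import numpy as np

class MultiChannelPID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = np.zeros(Kp.shape[1])
        self.prev_e = np.zeros(Kp.shape[1])

    def step(self, e):
        """e: error vector of the two joint angles; returns 4 stimulation increments."""
        self.integral += e * self.dt
        derivative = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.Kp @ e + self.Ki @ self.integral + self.Kd @ derivative

pid = MultiChannelPID(Kp=0.5 * np.ones((4, 2)), Ki=0.1 * np.ones((4, 2)),
                      Kd=0.05 * np.ones((4, 2)), dt=0.05)
print(pid.step(np.array([2.0, -1.0])))
```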
3. Computer Simulation Tests
The FEL-FES controller including the ISM was tested in controlling 2-DOF movements of the wrist joint. The muscles to be stimulated were the extensor carpi radialis longus/brevis (ECRL/ECRB), the
extensor carpi ulnaris (ECU), the flexor carpi radialis (FCR) and the flexor carpi ulnaris (FCU). The ECRL and the ECRB were assumed to be one muscle group (ECR) because of difficulty in selective
stimulation to them in experiments using surface electrodes that we performed [10].
For computer simulation tests of learning the ISM and the IDM and of control performance, a musculoskeletal model of the upper limb was developed. In brief, muscle force produced by electrical
stimulation was described by the Hill type muscle model with nonlinear length-force relationship k(l) and nonlinear velocity-force relationship h(v), which included muscle activation level a[m](s)
determined by nonlinear recruitment characteristics with dynamics to applied electrical stimulation (refer to Appendix B for details). That is, the muscle force was computed as the product of the activation level a[m](s), the length-force factor k(l), the velocity-force factor h(v), and a constant maximum muscle force, where s, l, and v were normalized stimulation intensity, muscle length and contraction velocity, respectively. Active torque produced by electrical stimulation was calculated as the product of the muscle force and the moment arm. The moment arm was represented by an approximated polynomial equation as a nonlinear function of joint angle for each movement developed by each muscle [15]. Six different subject models
were prepared, in which the difference between 6 subjects was represented by adjusting mainly parameters of recruitment characteristics based on step responses and input-output (stimulus
intensity-joint angle) relationships of the muscles measured on 6 neurologically intact subjects.
In this study, ISM learning was carried out off line using training data that consisted of stimulation intensities to 4 muscles and 2 joint angles. A set of training data was obtained by the tracking
control of very slow movements using the PID controller. Figure 3 shows target trajectories of the tracking controls to obtain the training data set. The cycle period was 30s for all trajectories.
In Figure 3(a), the training data set was obtained from 2 target trajectories which were ellipses on the joint angle plane with the major radius of 20deg in dorsi/palmar flexion and the minor radius
of 15deg in radial/ulnar flexion and those of 10deg and 7.5deg. Four target trajectories as shown in Figure 3(b) were also used for measurement of another training data set for ISM learning, in
which 2 trajectories with the radius of 15deg and 11.25deg and 5deg and 3.5deg were added to those in Figure 3(a). The ISM was trained off line applying training data in random order. Initial
values of the ANN connection weights were random values.
The IDM was trained on line for 5 different target trajectories shown in Figure 4, which were also ellipses on the joint angle plane with the radius of 20deg in dorsi/palmar flexion and that of 15deg in radial/ulnar flexion. The centers of those trajectories were 0deg, or 5deg moved to the radial, ulnar, dorsi, and palmar directions. Three cycle periods, 2, 3, and 6s, were used for all
trajectories. Six cycles were included in one control trial for IDM learning. Three sets of initial values of ANN connection weights were prepared, which were random small values that did not have
effect on movements at the 1st control trial (before IDM learning). Therefore, a total of 45 learning tasks were tested on 6 subject models with all controllers (without ISM, using ISM trained with 2
trajectories, and using ISM trained with 4 trajectories). Iteration number of IDM learning was fixed at 50.
4. Results
The ISM was evaluated by feedforward control of positioning. Target position for the control was set by a pair of dorsi/palmar flexion and radial/ulnar flexion angles at every 2 deg in the range of
20deg in dorsi- and palmar flexions and in the range of 16deg in radial and ulnar flexions. An example of the evaluation result of the ISM is shown in Figure 5. In the case of using 2 target
trajectories for obtaining training data (ISM-2), the error did not reduce around the center of the target trajectory and at positions between training data. As for the 4 trajectories for training
data (ISM-4), the errors were small inside the largest target trajectory. Larger target joint angles outside the largest trajectory could not be controlled appropriately with both ISM-2 and ISM-4.
Figure 6 shows average errors in open loop control of the positioning for ISM-2 and ISM-4. There was no large difference in the error between 6 subject models. Positioning errors shown in Figure 6(a)
are for evaluation including targets outside the largest trajectory, and those in Figure 6(b) show those excluding targets outside the largest trajectory. Average positioning errors inside the
largest trajectory (Figure 6(b)) were smaller than those in Figure 6(a). Figure 6(b) suggests that positioning in the radial/ulnar flexion was not trained sufficiently with the ISM-2.
Figure 7 indicates an example of control result of the FEL-FES controller using the ISM with the IDM. The IDM was trained during the tracking control. The first cycle period of 5s, which was set for
moving to the start position of tracking control, was not used in the IDM learning. Before IDM learning (the 1st control trial), the ISM and the PID controller performed tracking control without the
IDM. After IDM learning (the 50th control trial), the FEL-FES controller could perform good tracking with very small outputs of the PID controller.
In order to evaluate the performance of the FEL-FES controller, the mean error (ME) and the power ratio (PR) were calculated in each learning task. Here, e(n) represents the error between the target joint angle and the resulting one at time n, and N is the number of sampled data; the PR compares the output power of the feedforward controller with that of the feedback controller. The ME was calculated for each movement direction, and the PR for each muscle.
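Read in a form consistent with that description (a reconstruction for orientation, not the authors' exact expressions), the two measures can be taken as
$$\mathrm{ME} = \frac{1}{N}\sum_{n=1}^{N}\lvert e(n)\rvert, \qquad \mathrm{PR} = \frac{P_{\mathrm{FF}}}{P_{\mathrm{FF}} + P_{\mathrm{FB}}},$$
where $P_{\mathrm{FF}}$ and $P_{\mathrm{FB}}$ are the output powers of the feedforward and feedback controllers; under this reading a PR close to 1 means the feedforward path supplies essentially all of the stimulation, which matches how PR is interpreted in the Results and Discussion.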
Average values of ME are shown in Figure 8. The controllers using the ISM decreased the error at the 1st control trial (before IDM learning). Especially, the ME was very small for slow movement
control. All 3 controllers performed good tracking control after the IDM learning (the 50th control trial). There was no difference in ME after the IDM learning between ISM-2 and ISM-4 and also
between with and without the ISM.
The power ratio, PR, gives us information of IDM learning. Figure 9 shows average value, the minimum and the maximum values of the PR. The FEL-FES controller using the IDM and the ISM achieved larger
average value and larger minimum value of the PR than those of the previous controller before and after IDM learning. After IDM learning, the minimum value of PR was greatly improved by using the
ISM. There was no difference in those improvements between ISM-2 and ISM-4.
5. Discussion
The off line ISM learning was effectively achieved with the small number of measurements of training data. For practical clinical application, small number of measurements and short period of control
time for acquiring the training data are required to avoid muscle fatigue and burden to patients. Therefore, training data acquired from feedback FES control of very slow continuous movements can be
useful in ISM learning for FES.
Increasing the number of target trajectories to obtain training data may be required for learning the ISM of the musculoskeletal system that has nonlinear characteristics. However, if the ISM is
mainly used to improve learning performance of the IDM, it is possible to decrease the number of measurements of training data because there was no large difference between ISM-2 and ISM-4. On the
other hand, target positions that had larger joint angles outside the largest trajectory could not be controlled appropriately as seen in Figure 5. This was a natural result because those targets
were outside the training data. Since the control performance of the ISM was improved by adding target trajectories to obtain training data, the ISM is expected to perform properly in the range of
motion if the training data that cover the range of motion are added.
The output power of the feedforward controller, PR, was increased by using the ISM as shown in Figure 9. More than 84% of the number of muscle outputs showed the increase of PR for movements with the
cycle period of 2s. For movements with the cycle period of 3s and 6s, it was more than 65% and more than 40%, respectively. These results show that IDM learning was improved in most of learning
tasks. For evaluating the improvement of IDM learning, the large PR rate that was defined as the percentage of the number of muscle outputs that had PR larger than 80% was calculated (Figure 10). The
large PR rate was also improved by using ISM, especially for fast movement control. These results suggest that the FEL-FES controller using the ISM can be effective to realize a feedforward
controller by learning nonlinear characteristics of the musculoskeletal system to electrical stimulation. For practical applications of the FEL to FES, an effective method of IDM learning will be
needed, because the musculoskeletal system has nonlinear characteristics and also has hysteresis characteristics.
The FEL-FES controller using the ISM made better control with small values of ME at the first control trial for IDM learning as expected (Figure 8(a)). Since the difference in ME between with and
without the ISM was not so large, the feedback controller was considered to perform well. However, control performance of the feedback FES controller sometimes deteriorated in tracking control
because of nonlinear characteristics of the musculoskeletal system to electrical stimulation [16] although the feedback controller has been shown to perform properly [10, 11]. Therefore, the ISM is
expected to become useful in controlling before IDM learning.
After IDM learning, all controllers showed small values of ME with no significant difference between the controllers (Figure 8(b)). However, the controllers using ISM resulted in larger average and
minimum values of PR than those of the controller without the ISM (Figure 9(b)). This suggests that the PID controller had effect on decreasing errors for the controller without the ISM even after
IDM learning while the feedforward controller worked mainly in the controllers using ISM. Therefore, there is a possibility that the controller without ISM has a problem in movement control of the
musculoskeletal system that has nonlinear characteristics.
6. Conclusions
Feedback error learning (FEL) controller using the ISM with the IDM was applied to FES control. The FEL-FES controller was examined in controlling 2-DOF movements of the wrist joint through computer
simulation. In order to train the ISM in FES application, training data were acquired by controlling very slow movements with the PID controller. The ISM trained off line using the training data
obtained by the simple measurement method was found to perform properly in the positioning task. The output power ratio of the feedforward controller in the FEL-FES controller was increased by using
the ISM showing improvement of IDM learning. The FEL-FES controller using ISM would be useful in realizing feedforward controller for controlling musculoskeletal system that has nonlinear
characteristics to electrical stimulation and therefore expected to be useful in applying to hybrid FES system.
Appendices
A. Calculation of Gain of Feedback Controller
The transformation matrix M was obtained as follows (see Figure 11). First, the input (stimulus intensity)-output (joint angle) characteristics of each muscle were measured by applying electrical stimulation in which the stimulation intensity was increased very slowly. Then, the minimum and the maximum stimulus intensities for FES control were determined, and the characteristics were approximated by a straight line between these intensities using the least squares method. The slope of the approximated line was used as an element of the matrix M. Here, the input-output relationship of the musculoskeletal system was represented approximately by the experimentally determined constant matrix M: in the case of controlling 2-DOF movements stimulating 4 muscles, the changes of the joint angles of dorsi/palmar flexion and radial/ulnar flexion are expressed as linear combinations of the changes of stimulation intensity to each muscle i.
The matrix M is not a square matrix in general because the number of muscles stimulated is larger than the number of degrees of freedom of the movement controlled. Therefore, a generalized inverse matrix of M was calculated. Since there are many generalized inverse matrices of M, the generalized inverse matrix has to be determined uniquely.
Here, after changing the negative signs of the relevant elements into positive ones, the calculation of the generalized inverse matrix can be solved as a quadratic programming problem using (A.5) as the objective function under the constraints shown by (A.6) and (A.7) [17]. This type of quadratic programming problem can be converted into a linear programming problem by Wolfe's algorithm [18]. The unique solution of such a linear programming problem can be obtained after a finite number of iterative calculations by the simplex method [18]. That is, a set of positive values minimizing the value L can be calculated under the stated conditions after changing the negative signs into positive ones. Finally, the signs that had been changed were restored to negative according to the signs of the corresponding original elements.
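As a rough illustration of the kind of computation involved (entirely my own sketch: the paper's actual objective (A.5), constraints (A.6)-(A.7), and the Wolfe/simplex procedure are not reproduced, and the gain matrix below is a made-up placeholder), one can solve a generic minimum-norm allocation subject to the linearized angle-change relation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2x4 steady-state gain matrix (angle change per unit stimulation change).
M = np.array([[ 1.2,  0.3, -1.0, -0.4],    # dorsi/palmar flexion row
              [ 0.2, -0.9,  0.3,  1.1]])   # radial/ulnar flexion row
d_theta = np.array([5.0, -2.0])            # desired change of the two joint angles

# Smallest stimulation change (in the least-squares sense) achieving d_theta.
res = minimize(lambda ds: float(ds @ ds), x0=np.zeros(4),
               constraints=[{"type": "eq", "fun": lambda ds: M @ ds - d_theta}])
print(res.x, M @ res.x)
```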
B. Musculoskeletal Model for FES Control
In this study, the 2-DOF wrist joint movements (dorsi/palmar flexions and radial/ulnar flexions) were controlled stimulating the flexor carpi radialis (FCR), the flexor carpi ulnaris (FCU), the
extensor carpi radialis longus/brevis (ECRL/B), and the extensor carpi ulnaris (ECU). Since the four stimulated muscles also relate to forearm or elbow movements, the skeletal model structure of the
upper extremity was constructed in order to represent elbow flexion/extension, forearm pronation/supination, and wrist dorsi/palmar flexions and radial/ulnar flexions as shown in Figure 12. The
shoulder joint was designed to be fixed at arbitrary angles of flextion/extention and rotation. The 15 muscles relating these movements as the agonist were included as listed in Table 1. Some muscles
were also modeled as the synergistic muscles for other movements.
The musculoskeletal model to predict responses of electrically stimulated muscles is outlined in Figure 13. Muscle force produced by electrical stimulation was described by the Hill type muscle model
including muscle activation level determined by electrical stimulation a[m](s), length-force relationship k(l), velocity-force relationship h(v), and maximum muscle force. That is, the muscle force was the product of a[m](s), k(l), h(v), and the maximum muscle force, where s, l, and v were normalized stimulation intensity, muscle length, and contraction velocity, respectively. Active torque produced by electrical stimulation was calculated as the product of the muscle force and the moment arm. The moment arm was represented by an approximated polynomial equation as a nonlinear function of joint angle θ for each movement developed by each muscle [15]. For example, the moment arm for the wrist dorsi/palmar flexion and elbow flexion/extension was described by such a polynomial in θ, whose coefficients were parameters for each movement of each muscle. Each element of the model is described in the following.
The nonlinear recruitment property of electrically stimulated muscle, u(s), was modeled as a nonlinear function of the normalized stimulation intensity s with constants s[c], s[h], x[c], and y[c] [19]. The muscle activation a[m] was described by dynamics driven by the recruitment property, with two different time constants, t[r] and t[f] [20].
The length-force relationship k(l) was described as a function of the muscle length relative to the optimum muscle length [21]. The velocity-force relationship h(v) during shortening and lengthening of muscle was modeled with the maximum contraction velocity as a parameter [21, 22]. The maximum muscle force produced by electrical stimulation was determined by the PCSA (physiological cross-sectional area) [15]. The passive viscoelastic element developed a passive torque τ[P], calculated as a function of the joint angle and angular velocity for each joint movement, with four constants determined for each joint movement [23]. The range of motion was also represented by this property.
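A toy numerical version of this force pathway (purely illustrative: the actual recruitment curve, length-force, velocity-force, and activation-dynamics expressions from [19]-[22] are not reproduced here, so generic shapes and made-up parameter values stand in for them):

```python
import numpy as np

def recruitment(s, s_c=0.1, s_h=0.9):
    """Placeholder recruitment curve: no activation below threshold s_c, saturation at s_h."""
    return np.clip((s - s_c) / (s_h - s_c), 0.0, 1.0)

def length_force(l, l_opt=1.0, w=0.5):
    """Generic bell-shaped length-force relationship around the optimum length."""
    return np.exp(-((l - l_opt) / w) ** 2)

def velocity_force(v, v_max=10.0):
    """Generic monotone velocity-force relationship (faster shortening gives less force)."""
    return np.clip(1.0 - v / v_max, 0.0, 1.5)

def muscle_force(s, l, v, F_max=300.0):
    return recruitment(s) * length_force(l) * velocity_force(v) * F_max

print(muscle_force(0.6, 1.05, 1.0))
```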
The authors thank Dr. Kenji Kurosawa for his helpful advice on FEL controller for FES. This work was supported in part by the Saito Gratitude Foundation.
1. N. Hoshimiya, A. Naito, M. Yajima, and Y. Handa, “A multichannel FES system for the restoration of motor functions in high spinal cord injury patients: a respiration-controlled system for
multijoint upper extremity,” IEEE Transactions on Biomedical Engineering, vol. 36, no. 7, pp. 754–760, 1989. View at Scopus
2. B. Smith, Z. Tang, M. W. Johnson, S. Pourmehdi, M. M. Gazdik, J. R. Buckett, and P. Hunter Peckham, “An externally powered, multichannel, implantable stimulator-telemeter for control of paralyzed
muscle,” IEEE Transactions on Biomedical Engineering, vol. 45, no. 4, pp. 463–475, 1998. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus
3. Y. Handa, K. Ohkubo, and N. Hoshimiya, “A portable multi-channel FES system for restoration of motor function of the paralyzed extremities,” Automedica, vol. 11, pp. 221–231, 1989.
4. K. A. Ferguson, G. Polando, R. Kobetic, R. J. Triolo, and E. B. Marsolais, “Walking with a hybrid orthosis system,” Spinal Cord, vol. 37, no. 11, pp. 800–804, 1999. View at Scopus
5. R. Kobetic, C. S. To, and C. S. To, “Development of hybrid orthosis for standing, walking, and stair climbing after spinal cord injury,” Journal of Rehabilitation Research and Development, vol.
46, no. 3, pp. 447–462, 2009. View at Publisher · View at Google Scholar · View at Scopus
6. C. R. Kinnaird and D. P. Ferris, “Medial gastrocnemius myoelectric control of a robotic ankle exoskeleton,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 17, no. 1, pp.
31–37, 2009. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus
7. G. S. Sawicki and D. P. Ferris, “A pneumatically powered knee-ankle-foot orthosis (KAFO) with myoelectric activation and inhibition,” Journal of NeuroEngineering and Rehabilitation, vol. 6, no.
1, article 23, 2009. View at Publisher · View at Google Scholar · View at PubMed
8. M. Kawato, K. Furukawa, and R. Suzuki, “A hierarchical neural-network model for control and learning of voluntary movement,” Biological Cybernetics, vol. 57, no. 3, pp. 169–185, 1987. View at
Publisher · View at Google Scholar · View at Scopus
9. H. Miyamoto, M. Kawato, T. Setoyama, and R. Suzuki, “Feedback-error-learning neural network for trajectory control of a robotic manipulator,” Neural Networks, vol. 1, no. 3, pp. 251–265, 1988.
View at Scopus
10. T. Watanabe, K. Iibuchi, K. Kurosawa, and N. Hoshimiya, “A method of multichannel PID control of two-degree-of-freedom wrist joint movements by functional electrical stimulation,” Systems and
Computers in Japan, vol. 34, no. 5, pp. 25–36, 2003. View at Publisher · View at Google Scholar · View at Scopus
11. K. Kurosawa, T. Watanabe, R. Futami, N. Hoshimiya, and Y. Handa, “Development of a closed-loop FES system using 3-D magnetic position and orientation measurement system,” Journal of Automatic
Control, vol. 12, no. 1, pp. 23–30, 2002.
12. K. Kurosawa, R. Futami, T. Watanabe, and N. Hoshimiya, “Joint angle control by FES using a feedback error learning controller,” IEEE Transactions on Neural Systems and Rehabilitation Engineering,
vol. 13, no. 3, pp. 359–371, 2005. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus
13. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986. View at Scopus
14. M. A. Arbib, The Handbook of Brain Theory and Neural Networks, MIT Press, Cambridge, Mass, USA, 2nd edition, 2002.
15. M. A. Lemay and P. E. Crago, “A dynamic model for simulating movements of the elbow, forearm, and wrist,” Journal of Biomechanics, vol. 29, no. 10, pp. 1319–1330, 1996. View at Publisher · View
at Google Scholar · View at Scopus
16. T. Watanabe, T. Matsudaira, N. Hoshimiya, and Y. Handa, “A test of multichannel closed-loop FES control on the wrist joint of a hemiplegic patient,” in Proceedings of the 10th Annual Conference
of the International FES Society, pp. 56–58, 2005.
17. K. Kurosawa, H. Murakami, T. Watanabe, R. Futami, N. Hoshimiya, and Y. Handa, “A study on modification method of stimulation patterns for FES,” Japanese Journal of Medical Electronics and
Biological Engineering, vol. 34, no. 2, pp. 103–110, 1996. View at Scopus
18. S. I. Gass, Linear Programming: Methods and Applications, McGraw-Hill, New York, NY, USA, 1969.
19. M. Levy, J. Mizrahi, and Z. Susak, “Recruitment, force and fatigue characteristics of quadriceps muscles of paraplegics isometrically activated by surface functional electrical stimulation,”
Journal of Biomedical Engineering, vol. 12, no. 2, pp. 150–156, 1990. View at Scopus
20. M. G. Pandy, B. A. Garner, and F. C. Anderson, “Optimal control of non-ballistic muscular movements: a constraint-based performance criterion for rising from a chair,” Journal of Biomechanical
Engineering, vol. 117, no. 1, pp. 15–26, 1995. View at Scopus
21. B. M. Nigg and W. Herzong, Biomechanics of the Musculo-Skeletal System, John Wiley & Sons, New York, NY, USA, 1995.
22. G. M. Eom, T. Watanabe, R. Futami, N. Hoshimiy, and Y. Handa, “Computer-aided generation of stimulation data and model identification for functional electrical stimulation (FES) control of lower
extremities,” Frontiers of Medical and Biological Engineering, vol. 10, no. 3, pp. 213–231, 2000. View at Scopus
23. J. M. Winters and L. Stark, “Analysis of fundamental human movement patterns through the use of in-depth antagonistic muscle models,” IEEE Transactions on Biomedical Engineering, vol. 32, no. 10,
pp. 826–839, 1985. View at Scopus | {"url":"http://www.hindawi.com/journals/jr/2010/908132/","timestamp":"2014-04-17T13:05:27Z","content_type":null,"content_length":"227103","record_id":"<urn:uuid:275745ed-10b3-4772-a62e-7c73235a7ff1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
Molecular Dynamics Exercise 1
Exercise 1: Starting Out
1. Getting familiar with the high-performance computing platform you will be using for the workshop.
2. Getting familiar with the Molecular Dynamics algorithm used in all of these exercises.
You can move on when:
You have successfully compiled, submitted, and completed a run with the Molecular Dynamics program and completed a plot of the scaling of the algorithm with respect to the number of atoms.
In Exercise 1, you will become familiar with the serial version of the algorithm described in the previous
section. A reference implementation will be provided, with your task being to examine it and make sure you understand it, compile it on your HPC architecture, and then submit several runs of differing atom
counts to view the performance characteristics of the code and the processors in your machine.
The program can be downloaded at: Since the code is one straight file, compilation is trivial:
• C/C++
□ For Kraken: CC md.cpp -o md
□ For Ranger: pgCC md.cpp -o md
□ For Bluefire: xlC md.cpp -o md
• FORTRAN
□ For Kraken: ftn md.F -o md
□ For Ranger: pgf90 md.F -o md
□ For Bluefile: xlF md.F -o md
For further help on compiling codes on these HPC architectures:
The program has the following syntax:
moldyn <NumberOfParticles> <NumIterations>
NumberOfParticles - Total number of particles in the system.
NumIterations - The number of fixed iterations
For example:
sbrown@kraken-pwd4(XT5): ./moldyn 100 10
The Total Number of Cells is 144 With 7 particles per cell, and
1000 particles total in system
Iteration 1 with Total Energy 0.6493868661E+05 Per Particle
Iteration 2 with Total Energy 0.6493862883E+05 Per Particle
Iteration 3 with Total Energy 0.6493849175E+05 Per Particle
Iteration 4 with Total Energy 0.6493827537E+05 Per Particle
Iteration 5 with Total Energy 0.6493797969E+05 Per Particle
Iteration 6 with Total Energy 0.6493760472E+05 Per Particle
Iteration 7 with Total Energy 0.6493715046E+05 Per Particle
Iteration 8 with Total Energy 0.6493661691E+05 Per Particle
Iteration 9 with Total Energy 0.6493600409E+05 Per Particle
Iteration 10 with Total Energy 0.6493531200E+05 Per Particle
The Iteration Time is 0.0599999987
1. Download the serial version of the code in your language of choice.
2. Spend some time looking over the code, if there is something you don't understand, please ask an instructor to help.
3. Compile the code with optimization level -O3.
4. Test the code on a small number of atoms (while your code may not give exactly the same answer as above, it should be similar).
5. Submit the following system sizes for 100 iterations to the queue: 1000, 10,000, and 100,000 atoms.
6. Make a plot of atoms vs. time reported to determine the scaling of the algorithm.
Questions to Ponder...
1. Naively, this algorithm should scale as the number of atoms squared, because adding one more atom would require us to compute the contribution of the force from all of the other atoms in the system. By using cells and only computing adjacent cells' contributions we have changed this. What is the scaling of your algorithm with the number of atoms? Is it linear, square, or something in between? (See the sketch after this list.)
2. What may limit the size of system you can do with this serial algorithm?
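One way to answer the scaling question from your timing data (an illustrative sketch; the times below are placeholders you would replace with the iteration times your runs report):

```python
import numpy as np

atoms = np.array([1000, 10000, 100000])
times = np.array([0.06, 0.61, 6.3])        # placeholder seconds from your runs

# Fit time ~ C * N^p on a log-log scale; the slope p is the scaling exponent.
p, logC = np.polyfit(np.log(atoms), np.log(times), 1)
print("scaling exponent ~", p)             # ~1 linear, ~2 quadratic
```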
Extra Credit
1. Are there any compiler flags beyond -O3 that enhance the serial performance of the code?
2. Are there any programmatic enhancements that could be made to improve performance?
3. One could analyze this algorithm with in-depth performance tools to understand how it performs at certain sizes.
1. The queue submission script for this exercise should be fairly similar to the one you used for the example hello_world at the beginning of the workshop. | {"url":"http://hpcuniversity.org/vscse/petascale/molecular-dynamics-exercise-1.php","timestamp":"2014-04-16T04:47:25Z","content_type":null,"content_length":"8911","record_id":"<urn:uuid:acb2a5b0-cffd-4b36-a827-416ab49a99ff>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
Page:Popular Science Monthly Volume 76.djvu/389
These observation equations follow:
Normal equations for each constant are formed from these observation equations by multiplying each equation by the coefficient of the constant concerned in the equation and adding. This gives us
three equations containing three unknown quantities. These unknown quantities are determined by any method and substituted in the general formula for S, T and U, respectively. For example, in 1900,
before the census returns for that year were available, the process above outlined yielded the following equations:
When these equations are solved, it is found that
S = 6.08, T = 0.690, U = 0.622.
If we substitute these in the formula, we get
P = 6.08 + 6.9 + 62.2, or P = 75.2 millions,
which is the forecast for 1900.
(It should be observed that in this work the year 1790 was considered −1, and 1800 was taken as the origin.)
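For readers who want to check the arithmetic, the quoted constants can be evaluated directly (a sketch using only the numbers given above; with 1800 as origin, 1900 is t = 10 and 1910 is t = 11):

```python
S, T, U = 6.08, 0.690, 0.622          # constants found above

def population(t):
    """Parabolic formula P = S + T*t + U*t**2, in millions."""
    return S + T * t + U * t ** 2

print(population(10))   # 1900 forecast: about 75.2 million
print(population(11))   # the same constants extrapolated to 1910
```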
This estimate proved somewhat low, as the census returns reported 76.3 millions for 1900. This indicates that the population of the country is growing a little more rapidly than would be indicated
from its past history.
While the government authorities are at work on the census for 1910, it will be interesting to try this method of forecasting, and to see how well our results will compare with those to be announced
later on. I have made a number of equations which are supposed to represent empirically the growth of the population of our country. These have been made in various ways, but all depend upon the
parabolic formula, and the method outlined above.
The equations yield the following values for the census of 1910: | {"url":"http://en.wikisource.org/wiki/Page:Popular_Science_Monthly_Volume_76.djvu/389","timestamp":"2014-04-20T09:55:51Z","content_type":null,"content_length":"26156","record_id":"<urn:uuid:9e6dbd8d-f527-428d-995c-04ff88a52c7e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
The body mass index (BMI), or Quetelet index, is a statistical measurement which compares a person's weight and height. Though it does not actually measure the percentage of body fat, it is a useful
tool to estimate a healthy body weight based on how tall a person is. Due to its ease of measurement and calculation, it is the most widely used diagnostic tool to identify obesity problems within a
population. However it is not considered appropriate to use as a final indication for diagnosing individuals. It was invented between 1830 and 1850 by the Belgian polymath Adolphe Quetelet during the
course of developing "social physics".
Body mass index is defined as the individual's body weight divided by the square of their height. The formulas universally used in medicine produce a unit of measure of kg/m^2:
SI units: $\mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height}^2\ (\mathrm{m}^2)}$
US units: $\mathrm{BMI} = \frac{\text{weight (lb)} \times 703}{\text{height}^2\ (\mathrm{in}^2)}$ or $\mathrm{BMI} = \frac{\text{weight (lb)} \times 4.88}{\text{height}^2\ (\mathrm{ft}^2)}$
BMI can also be determined using a BMI chart, which displays BMI as a function of weight (horizontal axis) and height (vertical axis) using contour lines for different values of BMI or colors for
different BMI categories.
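A direct translation of these formulas (an illustrative sketch; the sample inputs are made-up values):

```python
def bmi_si(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_us(weight_lb, height_in):
    return 703 * weight_lb / height_in ** 2

print(bmi_si(70, 1.75))    # about 22.9
print(bmi_us(154, 69))     # roughly the same person expressed in US units
```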
As a measure, BMI became popular during the early 1950s and 60s as obesity started to become a discernible issue in prosperous Western societies. BMI provided a simple numeric measure of a person's
"fatness" or "thinness", allowing health professionals to discuss over- and under-weight problems more objectively with their patients. However, BMI has become controversial because many people,
including physicians, have come to rely on its apparent numerical authority for medical diagnosis, but that was never the BMI's purpose. It is meant to be used as a simple means of classifying
sedentary (physically inactive) individuals with an average body composition. For these individuals, the current value settings are as follows: a BMI of 18.5 to 25 may indicate optimal weight; a BMI lower than 18.5 suggests the person is underweight, while a number above 25 may indicate the person is overweight; a BMI below 17.5 may indicate the person has anorexia nervosa or a related disorder; a number above 30 suggests the person is obese (over 40, morbidly obese).
For a fixed body shape and body density, and given height, BMI is proportional to weight. However, for a fixed body shape and body density, and given weight, BMI is inversely proportional to the
square of the height. So, if all body dimensions double, and weight scales naturally with the cube of the height, then BMI doubles instead of remaining the same. This results in taller people having
a reported BMI that is uncharacteristically high compared to their actual body fat levels. This anomaly is partially offset by the fact that many taller people are not just "scaled up" short people,
but tend to have narrower frames in proportion to their height. It has been suggested that instead of squaring the body height (as the BMI does) or cubing the body height (as seems natural), it would
be more appropriate to use an exponent of between 2.3 and 2.7.
BMI Prime
BMI Prime, a simple modification of the BMI system, is the ratio of actual BMI to upper limit BMI (currently defined at BMI 25). As defined, BMI Prime is also the ratio of body weight to upper body weight limit, calculated at BMI 25. Since it is the ratio of two separate BMI values, BMI Prime is a dimensionless number, without associated units. Individuals with BMI Prime < 0.74 are underweight; those between 0.74 and 0.99 have optimal weight; and those at 1.00 or greater are overweight. BMI Prime is useful clinically because individuals can tell, at a glance, what percentage they deviate from their upper weight limits. For instance, a person with BMI 34 has a BMI Prime of 34/25 = 1.36, and is 36% over his or her upper mass limit. In Asian populations (see International Variation section below) BMI Prime should be calculated using an upper limit BMI of 23 in the denominator instead of 25. Nonetheless, BMI Prime allows easy comparison between populations whose upper limit BMI values differ.
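The same idea in code (again only a sketch; the upper-limit value is the population-specific parameter discussed above):

```python
def bmi_prime(bmi, upper_limit=25.0):
    return bmi / upper_limit

print(bmi_prime(34))        # 1.36, i.e. 36% over the upper limit
print(bmi_prime(34, 23))    # the same BMI against the lower Asian-population cut-off
```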
A frequent use of the BMI is to assess how much an individual's body weight departs from what is normal or desirable for a person of his or her height. The weight excess or deficiency may, in part,
be accounted for by body fat (adipose tissue), although other factors such as muscularity also affect BMI significantly (see discussion below). The World Health Organization regard a BMI of less than 18.5 as underweight, which may indicate malnutrition, an eating disorder, or other health problems, while a BMI greater than 25 is considered overweight and above 30 is considered obese. These ranges of BMI values are valid only as statistical categories when applied to adults, and do not predict health.
Category | BMI range (kg/m^2) | BMI Prime
Severely underweight/Anorexic: less than 16.5, less than 0.66
Underweight: from 16.5 to 18.5, from 0.66 to 0.74
Normal: from 18.5 to 25, from 0.74 to 1.0
Overweight: from 25 to 30, from 1.0 to 1.2
Obese Class I: from 30 to 35, from 1.2 to 1.4
Obese Class II: from 35 to 40, from 1.4 to 1.6
Severely Obese: from 40 to 45, from 1.6 to 1.8
Morbidly Obese: from 45 to 50, from 1.8 to 2.0
Super Obese: from 50 to 60, from 2.0 to 2.4
Hyper Obese: above 60, above 2.4
(The corresponding body mass for each category depends on the individual's height.)
The U.S. National Health and Nutrition Examination Survey of 1994 indicates that 59% of American men and 49% of women have BMIs over 25. Extreme obesity — a BMI of 40 or more — was found in 2% of the
men and 4% of the women. The newest survey, in 2007, indicates a continuation of the increase in BMI: 63% of Americans are overweight, with 26% now in the obese category. There are differing opinions on the threshold for being underweight in females; doctors quote anything from 18.5 to 20 as the lower limit, with 19 the most frequently stated. A BMI nearing 15 is usually used as an
indicator for starvation and the health risks involved, with a BMI <17.5 being an informal criterion for the diagnosis of anorexia nervosa.
BMI is used differently for children. It is calculated in the same way as for adults, but the result is then compared to typical values for other children of the same age. Instead of set thresholds for underweight and overweight, the BMI percentile
allows comparison with children of the same sex and age. A BMI that is less than the 5th percentile is considered underweight and above the 95th percentile is considered overweight. Children with a
BMI between the 85th and 95th percentile are considered to be at risk of becoming overweight.
Recent studies in England have indicated that females between the ages 12 and 16 have a higher BMI than males of the same age by 1.0 kg/m² on average.
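The percentile-based classification described above can be sketched as follows (an illustrative Python outline; the percentile itself must come from age- and sex-specific reference growth charts, which are only stubbed here as a hypothetical lookup):

    def classify_child_bmi(bmi_percentile):
        """Classify a child's BMI percentile using the cut-offs given in the text."""
        if bmi_percentile < 5:
            return "underweight"
        elif bmi_percentile < 85:
            return "healthy range"
        elif bmi_percentile <= 95:
            return "at risk of becoming overweight"
        return "overweight"

    def lookup_bmi_percentile(bmi, age_years, sex):
        """Hypothetical placeholder: a real implementation needs reference growth-chart data."""
        raise NotImplementedError("requires age- and sex-specific reference data")

    print(classify_child_bmi(90))   # -> "at risk of becoming overweight"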
International variations
These recommended distinctions along the linear scale may vary from time to time and country to country, making global, longitudinal surveys problematic. In 1998, the U.S.
National Institutes of Health
brought U.S. definitions into line with
World Health Organization
guidelines, lowering the normal/overweight cut-off from BMI 27.8 to BMI 25. This had the effect of redefining approximately 30 million Americans from "technically healthy" to "technically overweight". It also recommended lowering the normal/overweight threshold for South East Asian body types to around BMI 23, with further revisions expected to emerge from clinical studies of different
body types.
In Singapore, the BMI cut-off figures were revised in 2005 with an emphasis on health risks instead of weight. Adults whose BMI is between 18.5 and 22.9 have a low risk of developing heart disease
and other health problems such as diabetes. Those with a BMI between 23 and 27.4 are at moderate risk, while those with a BMI of 27.5 and above are at high risk of heart disease and other health problems.
Category | BMI range (kg/m^2)
Starvation: less than 14.9
Underweight: from 15 to 18.4
Normal: from 18.5 to 22.9
Overweight: from 23 to 27.5
Obese: from 27.6 to 40
Morbidly Obese: greater than 40
Statistical device
The Body Mass Index is generally used as a means of correlation between groups related by general mass and can serve as a vague means of estimating adiposity. The duality of the Body Mass Index is that, whilst easy to use as a general calculation, it is limited in how accurate and pertinent the data obtained from it can be. Generally, the Index is suitable for recognising trends within sedentary or overweight individuals because there is a smaller margin for error.
This general correlation is particularly useful for consensus data regarding obesity or various other conditions, because it can be used to build a semi-accurate representation from which a solution can be stipulated, or the RDA for a group can be calculated. Similarly, it is becoming more and more pertinent to the growth of children, due to the exercise habits of the majority.
The growth of children is usually documented against a BMI-measured growth chart. Obesity trends can be calculated from the difference between the child's BMI and the BMI on the chart. However, this
method again falls prey to the obstacle of body composition: many children who primarily grow as endomorphs would be classed as obese despite their actual body composition. Clinical professionals should take into account the child's body composition and defer to an appropriate technique such as densitometry, e.g., dual-energy X-ray absorptiometry (DEXA or DXA).
Clinical practice
BMI has been used by the World Health Organization as the standard for recording obesity statistics since the early 1980s. In the United States, BMI is also used as a measure of underweight, owing to advocacy on behalf of those suffering from eating disorders such as anorexia nervosa and bulimia nervosa.
BMI can be calculated quickly and without expensive equipment. However, BMI categories do not take into account many factors such as frame size and muscularity. The categories also fail to account
for varying proportions of fat, bone, cartilage, water weight, and more.
Despite this, BMI categories are regularly regarded as a satisfactory tool for measuring whether sedentary individuals are "underweight," "overweight" or "obese" with various qualifications, such as:
Individuals who are not sedentary being exempt: athletes, children, the elderly, the infirm, and individuals who are naturally endomorphic or ectomorphic (i.e., people who do not have a medium frame).
One basic problem, especially in athletes, is that muscle is denser than fat. Some professional athletes are "overweight" or "obese" according to their BMI - unless the number at which they are
considered "overweight" or "obese" is adjusted upward in some modified version of the calculation. In children and the elderly, differences in bone density and, thus, in the proportion of bone to
total weight can mean the number at which these people are considered underweight should be adjusted downward.
Medical underwriting
In the United States, where
medical underwriting
of private health insurance plans is widespread, most private health insurance providers will use a particular high BMI as a cut-off point in order to raise insurance rates for or deny insurance to
higher-risk patients, thereby ostensibly reducing the cost of insurance coverage to all other subscribers in a 'normal' BMI range. The cutoff point is determined differently for every health
insurance provider and different providers will have vastly different ranges of acceptability. Many will implement phased surcharges, in which the subscriber will pay an additional penalty, usually
as a percentage of the monthly premium, for each arbitrary range of BMI points above a certain acceptable limit, up to a maximum BMI past which the individual will simply be denied admissibility
regardless of price. This can be contrasted with
group insurance policies
which do not require medical underwriting and where insurance admissibility is guaranteed by virtue of being a member of the insured group, regardless of BMI or other risk factors that would likely
render the individual inadmissible to an individual health plan.
Limitations and shortcomings
The medical establishment has generally acknowledged some shortcomings of BMI. Because the BMI is dependent only upon weight and height, it makes simplistic assumptions about distribution of muscle
and bone mass, and thus may overestimate adiposity in those with more lean body mass (e.g. athletes) while underestimating adiposity in those with less lean body mass (e.g. the elderly).
One recent study by Romero-Corral et al. found that BMI-defined obesity was present in 19.1% of men and 24.7% of women, but that obesity as measured by bodyfat percentage was present in 43.9% of men and
52.3% of women. Moreover, in the intermediate range of BMI (25-29.9), BMI failed to discriminate between bodyfat percentage and lean mass. The study concluded that "the accuracy of BMI in diagnosing
obesity is limited, particularly for individuals in the intermediate BMI ranges, in men and in the elderly. . . . These results may help to explain the unexpected better survival in overweight/mild
obese patients."
The exponent of 2 in the denominator of the formula for BMI is arbitrary. It is meant to reduce variability in the BMI associated only with a difference in size, rather than with differences in
weight relative to one's ideal weight. If taller people were simply scaled-up versions of shorter people, the appropriate exponent would be 3, as weight would increase with the cube of height.
However, on average, taller people have a slimmer build relative to their height than do shorter people, and the exponent which matches the variation best is between 2 and 3. An analysis based on
data gathered in the USA suggested an exponent of 2.6 would yield the best fit. The exponent 2 is used instead by convention and for simplicity.
Some argue that the error in the BMI is significant and so pervasive that it is not generally useful in evaluation of health. Due to these limitations, body composition for athletes is often better
calculated using measures of body fat, as determined by techniques such as skinfold measurements or underwater weighing. The limitations of manual measurement have also led to new, alternative
methods to measure obesity, such as the body volume index. However, recent studies of American football linemen who undergo intensive weight training to increase their muscle mass show that they
frequently suffer many of the same problems as people ordinarily considered obese, notably sleep apnea.
In an analysis of 40 studies involving 250,000 people, heart patients with normal BMIs were at higher risk of death from cardiovascular disease than people whose BMIs put them in the "overweight"
range (BMI 25-29.9). Patients who were underweight (BMI <20) or severely obese (BMI >35) did, however, have an increased risk of death from cardiovascular disease. The implications of this finding
can be confounded by the fact that many chronic diseases, such as diabetes, can cause weight loss before the eventual death. In light of this, higher death rates among thinner people would be the
expected result.
A further limitation relates to loss of height through aging. In this situation, BMI will increase without any corresponding increase in weight.
Analysis of alternative signaling pathways of endoderm induction of human embryonic stem cells identifies context specific differences
Lineage specific differentiation of human embryonic stem cells (hESCs) is largely mediated by specific growth factors and extracellular matrix molecules. Growth factors initiate a cascade of signals
which control gene transcription and cell fate specification. There is considerable interest in inducing hESCs to an endoderm fate, which serves as a pathway towards more functional cell types such as
pancreatic cells. Research over the past decade has established several robust pathways for deriving endoderm from hESCs, with the capability of further maturation. However, in our experience, the
functional maturity of these endoderm derivatives, specifically to pancreatic lineage, largely depends on specific pathway of endoderm induction. Hence it will be of interest to understand the
underlying mechanism mediating such induction and how it is translated to further maturation. In this work we analyze the regulatory interactions mediating different pathways of endoderm induction by
identifying co-regulated transcription factors.
hESCs were induced towards endoderm using activin A and four additional factors (FGF2 (F), BMP4 (B), PI3KI (P), and WNT3A (W)) and combinations thereof, resulting in 15 total experimental
conditions. At the end of differentiation each condition was analyzed by qRT-PCR for 12 relevant endoderm related transcription factors (TFs). As a first approach, we used hierarchical clustering to
identify which growth factor combinations favor up-regulation of different genes. In the next step we identified sets of co-regulated transcription factors using a biclustering algorithm. The high
variability of experimental data was addressed by integrating the biclustering formulation with bootstrap re-sampling to identify robust networks of co-regulated transcription factors. Our results
show that the transition from early to late endoderm is favored by FGF2 as well as WNT3A treatments under high activin. However, induction of late endoderm markers is relatively favored by WNT3A
under high activin.
Use of FGF2, WNT3A or PI3K inhibition with high activin A may serve well in definitive endoderm induction followed by WNT3A specific signaling to direct the definitive endoderm into late endodermal
lineages. Other combinations, though still feasible for endoderm induction, appear less promising for pancreatic endoderm specification in our experiments.
Human embryonic stem cells; Endoderm; Hierarchical clustering; Biclustering; Bootstrap
Embryonic stem cells have been shown to have a tremendous impact in the field of regenerative medicine because of their potential to differentiate into multiple cell types of interest. Efficient harvesting
of this potential requires careful development of protocols to evolve the cells through specific signaling pathways which will induce desired lineages and properties in the differentiated phenotypes.
Our primary interest lies in differentiation of human embryonic stem cells (hESCs) to insulin producing β-cells of the pancreas as a cellular transplantation strategy for diabetes mellitus. The first
and perhaps the most important step in differentiation to endodermal organs like pancreas and liver is the commitment to definitive endoderm (DE) [1]. Multiple signaling pathways have been reported
to have success in inducing endoderm differentiation with subsequent maturation to liver, pancreas and lung. While there is some understanding of the activity pathway of these individual signaling
molecules, the transcriptional controls activated through these signaling pathways remain largely unknown. Moreover, the cooperative effect of these endoderm induction pathways, along with its impact on long-term maturation, has received less attention. Although standard protocols have been established for the later stages of pancreatic induction, it is not always obvious how these
endoderm derivatives derived from different pathways will respond to subsequent pancreatic induction signals. In this article, we have analyzed the endoderm induction stage of the differentiation
process induced by the combinatorial action of the signaling pathways using an integrated experimental and mathematical approach. A detailed mathematical analysis is adopted to capture co-regulated
TFs across different growth factor combinations and projection of maturation potential of the various endoderm derivatives.
Differentiation of hESCs to DE
Activin A (henceforth denoted as activin) has been shown to be effective in inducing DE from hESCs and is a key induction factor used in many protocols [2,3]. However, recent studies have shown that
activin alone may not produce homogeneous differentiation and additional factors must be used to modulate supplementary signaling pathways along with the nodal pathway activated by activin [1,4]. We
chose several widely used DE induction protocols all of which involve activin with either PI3K inhibition [5], WNT3A [3], BMP4 [6] or FGF2 [7]. The hESCs were differentiated into DE using these
molecules alone and in all possible combinations, at the end of which the differentiated cell population was analyzed for endoderm markers. Our aim is twofold: to identify which growth factor
combinations are most effective for efficient DE induction; and to understand TF interactions governing these induction conditions. We analyzed the mean expression data using Hierarchical clustering
(HC) to identify relationships between the conditions and the TFs, and biclustering on the original expression data with replicates to identify the TFs which are co-regulated under subsets of these conditions.
Hierarchical clustering
HC is a useful technique to analyze and interpret multivariate data. Each data point here is represented as a vector and the distances between these data points are measured using a suitable distance
measure [8]. The clustering process then links the data points together and the result is a hierarchical grouping of the data points in each of the dimensions (TFs and conditions in our case). Our
primary goal in using HC is to capture the similarities between different growth factor treatments for DE induction as well as to identify co-regulated TFs under each of these treatments. HC has been
successfully used in a number of bioinformatics applications including microarray data analysis, structure identification of bio-molecules and gene pathway identification [9].
Biclustering to identify co-regulated genes across different conditions
While HC homogenizes the entire dataset, techniques like biclustering are useful in preserving the second dimension in clustering; in our case all the endoderm induction conditions. We are interested
in identifying specific sets of genes exhibiting similar expression patterns across various subsets of experimental conditions, which can be achieved by biclustering. Likewise, many TFs are known to
have multiple functions, and hence participate in multiple regulatory networks, which can also be captured by overlapped biclusters [10]. In 2000, Cheng and Church proposed the use of a similarity
measure called the mean square residue for identification of coherent biclusters [11]. Since then newer and better algorithms have been developed to identify biclusters with particular characteristic
trends like coherence, low overlaps and hierarchical structure [12]. These algorithms perform either one or a combination of iterative row and column clustering, greedy iterative search, exhaustive
bicluster enumeration or distribution parameter identification [13]. Bleuler et al. proposed an evolutionary algorithm (EA) to determine high quality, partially overlapped biclusters using the Cheng
and Church formulation [14]. EAs can cover a large search space and are efficient methods for complex optimization problems [15]. High quality biclusters should satisfy several criteria: they should contain as many genes and conditions as possible, have a low mean square residue and high row variance, and show little overlap with one another. Divina et al. formulated the Sequential Evolutionary
Biclustering (SEBI) algorithm to identify such biclusters from the expression data which has been adopted in the current work to identify important biclusters for the endoderm induction data under
different combinations of the growth factors [15]. SEBI can find high quality biclusters and has been proved to perform well for large-scale biological datasets [15]. At the same time, it allows the
user the flexibility of selecting the degree of overlap of the biclusters.
Handling data variability
The gene expression data obtained from cell culture systems are subject to noise because of the heterogeneity and stochasticity associated with the system. Differences among the biological
replicates may therefore arise due to the inherent heterogeneity of the ES cell population as well as by experimental noise [16]. Therefore, it is essential that the biclustering algorithm be
supplemented with additional methods to discover good quality and robust biclusters from noisy gene expression data. One way to do this is to obtain a large number of experimental replicates and
perform biclustering over the entire dataset. This is however, expensive and impractical. A mathematical surrogate of this approach is bootstrapping, a concept first presented systematically by Efron
et al. [17].
Essentially, bootstrapping generates a pseudo dataset from the small number of experimental replicates by a sampling with replacement technique. The advantage of bootstrap lies in estimating
statistically significant parameters from a limited number of experimental replicates [18]. Thus, the results from a bootstrap analysis can provide information on the parameter variances and
confidence intervals. These bootstrap data-sets are further analyzed by ensemble methods like bagging to identify aggregation of biclusters, referred to as meta-clusters [19]. We have adopted a
similar approach to aggregate the individual biclusters identified from the bootstrap datasets. However instead of identifying an ensemble of biclusters, we have concentrated on identifying the most
repeated subset of the bicluster, which we denote as robust.
The focus of this work is to understand the mechanism of endoderm induction using different growth factors, acting alone and in combination, from an integrated experimental and computational approach
(summarized in Figure 1). H1 human embryonic stem cells were induced towards the endoderm lineage using activin along with additional factors, namely FGF2, BMP4, PI3KI and WNT3A, added in 15
combinations. The cells differentiated thereof were analyzed in detail for their gene expression levels, specifically concentrating on a broad range of endoderm markers along with representative
pancreatic endoderm markers.
Figure 1. Work-flow for the entire analysis from data collection to identification of robust biclusters. In short, we start with the qRT-PCR data and perform bootstrap with re-sampling to obtain 1000
pseudo-datasets. Each of these datasets is subjected to biclustering analysis to obtain the most coherent pattern in each dataset. The resulting biclusters are then analyzed for the most repeated
subsets of biclusters.
Experimental analysis of endoderm differentiation using combinations of major pathways
Figure 2a shows the mean expression data plotted as fold changes in 12 genes across the 15 experimental conditions. At this stage, the fold change data showed interesting trends for the different
conditions. When using only one factor other than activin, PI3KI along with activin was found to give the highest expression of most of the DE markers while BMP4 and activin in combination was found
to give the lowest expression among the four conditions. Interestingly, BMP4 was found to perform better in combination with another factor such as WNT3A or FGF2. Also, FGF2-containing conditions were found to favor CER, while BMP4-containing conditions favored HNF4α. Among the four conditions which contain three factors other than activin, combinations of FGF2, BMP4 and PI3KI perform well. Using all the factors together was not particularly useful since all the TFs maintained expression in the same range as other combinations. Figure 2b shows the range of variation observed in each of the transcriptional markers across the 15 experimental conditions along with the experimental replicates. The levels of DE markers CER, FOXA2, CXCR4 and late endoderm markers HNF4α, HNF1β and GATA4
change substantially when the induction conditions are changed. This level of analysis, however, makes it difficult to draw mechanistic insights from the dataset. Hence, we performed a more rigorous
mathematical analysis to separate out the TF trends and associate them with the appropriate conditions. Because of the inherent differences in expression level of different genes, it is essential to
normalize the data to avoid bias. For the mathematical analysis, the data presented in Figure 2a was normalized by mean centering and variance scaling so that every TF has a mean expression value of
zero and standard deviation of one.
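As a sketch of this normalization step (an illustrative Python/NumPy equivalent, not the code used in the study), each TF row of the 12 x 15 fold-change matrix is mean-centered and scaled to unit standard deviation:

    import numpy as np

    # X: fold-change matrix with 12 TF rows and 15 condition columns (random stand-in values).
    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=0.0, sigma=1.0, size=(12, 15))

    # Mean-center and variance-scale each TF (row) so every TF has mean 0 and standard deviation 1.
    X_norm = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

    print(np.allclose(X_norm.mean(axis=1), 0.0))   # True
    print(np.allclose(X_norm.std(axis=1), 1.0))    # True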
Figure 2. Fold change data for the 12 transcriptional markers across 15 experimental conditions. (a) The fold change calculated from the mean expression data from qRT-PCR on day 4 of the
differentiation process is plotted from the expression matrix, X, constructed using rows as the TFs and columns as the experimental conditions. (b) Variation observed in the 12 transcriptional
markers with changes in the signaling pathways presented as mean ± SE. All the major DE markers CER, CXCR4, FOXA2, SOX17 and the later endoderm markers HNF4α, HNF1β and GATA4 show significant changes
with the nature of DE induction.
Hierarchical clustering of the mean expression data identifies differences in the endoderm induced by BMP4 in the presence and absence of exogenous FGF2
The mean experimental data matrix was first analyzed using hierarchical clustering which clusters the TFs and conditions separately, as shown in Figure 3. Among the conditions, two major branches
were observed: the first cluster contains BMP4 dominant conditions (B, B + W, B + P, B + W + P) and the second cluster contains the remaining conditions which also includes BMP4 but interestingly
only in combination with FGF2. The TFs also segregate into two branches; the first branch contains the late endoderm markers and one of the DE markers (HNF4α, HNF1β, GATA4, PDX1, FOXA2), the second
branch contains the early DE and late endoderm markers (OCT4, BRACHYURY, CER, HNF6, CXCR4, SOX17, PTF1α). The first group of markers is particularly high in BMP4 dominant conditions and low in the
other conditions. The second group of markers is low in the BMP4 dominant conditions and high in the presence of PI3KI, WNT3A, and BMP4 combined with high FGF2. Thus our results point to differences in activin
and BMP4 induced endoderm in the presence and absence of exogenous FGF2. We performed principal component analysis (PCA) on the same data retaining only the first three components to filter noise and
identify the most represented groups. As shown in the Additional file 1, a similar conclusion can be drawn from PCA further supporting our analysis.
Additional file 1. Principal Component Analysis.docx
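As a rough sketch of the noise-filtering idea behind that analysis (an illustrative Python example with scikit-learn on stand-in data, not the original computation), only the first three principal components are retained before inspecting groupings:

    import numpy as np
    from sklearn.decomposition import PCA

    # Normalized 12 x 15 TF-by-condition matrix (random stand-in for the real data).
    X_norm = np.random.default_rng(1).standard_normal((12, 15))

    # Keep only the first three principal components to filter noise,
    # then reconstruct a low-rank version of the matrix from those components.
    pca = PCA(n_components=3)
    scores = pca.fit_transform(X_norm)            # 12 TFs projected onto 3 components
    X_denoised = pca.inverse_transform(scores)    # low-rank reconstruction

    print(pca.explained_variance_ratio_.sum())    # fraction of variance retained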
Figure 3. Hierarchical clustering on the mean expression data. The conditions cluster into two major groups, one containing BMP4 in the absence of exogenous FGF2 and the other containing all the
other treatments and BMP4 in combination with exogenous FGF2. Activin A is common among all the treatments. The TFs cluster into two groups, the late and early endoderm markers.
The clusters identified by the hierarchical algorithm reflect our biological understanding of the induction conditions as seen from the previous studies. A major difference between the two clusters
of conditions was the context dependent function of BMP4. In the presence of FGF2 and high activin, BMP4 was found to favor the endodermal lineage which was seen in several recent studies [20-22] and
was also on par with PI3KI dominant conditions which gave the best endoderm in our experiments. Also, in our BMP4 dominant conditions, the late stage markers showed very high expression while the
major DE markers were low indicating that the resulting endoderm may already be mature. Among the second group of conditions, PI3KI and high activin resulted in high expression of three major DE
markers SOX17, CXCR4 and CER which is supported by a number of earlier studies [23,24]. Using all the factors together does not improve upon the endoderm derived by PI3KI treatment. The second group
of conditions also contains FGF2 as a major factor along with WNT3A. It is found that both pluripotency (OCT4) and the endoderm factors (CER and HNF6) are relatively favored by conditions involving
FGF2 and WNT3A as the major contributor. In fact, FGF2 has been found to be sufficient to maintain the hESCs in the pluripotent state [25] and has also been used for endoderm induction in several
differentiation protocols [26]. Thus, FGF2 can potentially favor both pluripotency as well as endoderm differentiation depending on associated conditions.
Identification of co-regulated transcription factors by biclustering
While hierarchical clustering enables a fast and simplistic analysis of the experimental data sets, it does not provide information on which subsets of TFs are co-regulated across subsets of
conditions. Identifying such co-clusters will be beneficial, since the governing signaling pathways change with the induction condition and the same TFs may not be co-regulated. The technique of
biclustering serves to mine subgroups of such TFs exhibiting similar trends in their expression level under subsets of conditions. Hence TFs appearing in the same bicluster can be inferred to be
co-regulated and constituents of a similar network architecture. The experimental data matrix, X, constituting the mean expression data across all the growth factor conditions is analyzed using the
algorithm elaborated in Methods section. Here, the biclustering approach is formulated as an optimization problem solved using genetic algorithm (GA) and the quality of every candidate bicluster is
assessed by a fitness function. The fitness function has a number of free parameters associated with it which can be tuned in order to identify certain desired trends. The detailed procedure on the
selection of the optimum parameters is outlined in the Additional file 2.
Additional file 2. Selection of biclustering parameters.docx
The developed optimization based bicluster identification algorithm was applied to the mean expression data with the above mentioned parameters, which resulted in a 3-gene 5-condition bicluster as
illustrated in Figure 4 (a). However, to identify additional biclusters, possibly with overlaps, the SEBI algorithm was subsequently run by penalizing the identified biclusters. One such bicluster
is presented in Figure 4 (b). Although, the SEBI algorithm allows some degree of overlapping amongst the subsequent biclusters, the current mean dataset did not result in any overlaps.
Figure 4. Biclusters obtained from the normalized mean expression data. (a) Optimal bicluster, containing 3 genes across 5 conditions. (b) Subsequent bicluster, containing 3 genes and 7 conditions. The bicluster parameters selected were δ = 1.5, Wc = Wr = 1.
Recently, Banka et al. proposed a new method called Fuzzy Possibilistic Biclustering, which assigns a membership value to each gene-condition pair in the expression matrix and therefore allows varying degrees of overlap amongst the biclusters [27,28]. However, although the method has been shown to provide very large biclusters with acceptable residue, the selection of the degree of fuzziness often depends upon the question that the biologists have set out to answer [29]. In our case, we are interested in analyzing well identified markers of endoderm induction under the relevant signaling pathways. Since our aim is to discover subtle differences in gene regulation when the induction conditions are changed, a traditional crisp method like SEBI is more useful for identifying the best induction condition.
Robust biclusters identify WNT3A treatment to favor both early and late endoderm
The above identified biclusters were for the mean dataset, and hence does not explicitly take into account the experimental variations. In general biological datasets are known for their noise and
uncertainty, and in particular stem cells have inherent heterogeneity and stochasticity. In order to increase confidence in the identified bicluster we undertook bootstrap analysis on the
experimental data to generate 1000 pseudo-datasets. Each of these datasets was treated as an experimental repeat and subjected to the entire biclustering analysis. In order to identify partially
overlapped biclusters, we ran the biclustering algorithm five times at each data point by subsequently penalizing previously identified biclusters.
The next task was to determine a robust bicluster from this array of alternate biclusters. We hypothesize that the robust bicluster will not be significantly affected by the experimental noise, and
hence will appear a large number of times in the bootstrapped-bicluster data set. However, a thorough search of the entire array of alternate biclusters for frequency of repeats did not yield any
satisfactory outcome. Thus we could not find a single bicluster that was significantly repeated in its entirety across the data set. Instead, we realized that subsets of the genes and conditions of a bicluster were being repeated with very high frequency, rather than the entire bicluster. Hence, we focused on identifying such subsets from the family of bootstrap + bicluster solutions. Setting a minimum threshold of 50% repeats across the bootstrap samples, we identified 6 such subsets. The first five of these contained different combinations of the same two markers and four conditions. Hence we
collected them together into a single group. The profiles of the repeated subsets are presented in Figure 5. These subsets are of two kinds: Group 1 contains (CER, HNF6 | F, F + W, B + W + P, B + P)
and Group 2 contains (HNF6, HNF4α | F + B, F + P, W + P). It is important to note that the robust biclusters were different from the biclusters obtained for the mean expression data. For example, the
biclusters in Figure 4 show that HNF4α clusters closer to HNF1β (and GATA4) rather than CER. This is also evident from our hierarchical clusters in Figure 3. The fact that they do not appear
together in the robust biclusters is interesting and shows that analysis of mean datasets can be risky for stem cell systems when there is inherent variability among the replicates. In support of this, the HNF4α, HNF1β (and GATA4) combination occurs only in subsets with fewer than 300 repeats (data not shown).
Figure 5. Robust subsets identified from the 1000 bootstrap datasets. Robust biclusters are the most repeated subsets (>500 repeats). The bicluster parameters selected were δ = 1.5, Wc = Wr = 1. Note: Group 1
contains five subsets only one of which is shown.
Figure 6 shows a summary of the robust biclusters represented as a bipartite graph of genes and conditions. The identified biclusters are biologically relevant to the development stages in vivo.
Group 1 contains endoderm markers CER and HNF6 under FGF2/WNT3A and BMP4/WNT3A/PI3KI. CER is an important early marker for the DE stage rising after the formation of the primitive streak during
development while HNF6 is a marker for a more primitive foregut stage in pancreas development [2]. Thus, Group 1 is similar to the foregut development stage in vivo[30]. In addition, the conditions
in Group 1 contain FGF2 and WNT3A but not BMP4 and as seen from Figure 5, CER and HNF6 decrease under BMP4 dominance. Thus, the biclustering analysis shows that the early marker CER and a late
endoderm marker HNF6 are controlled by the FGF2/WNT3A pathways and are relatively down-regulated under BMP4 and PI3KI. Group 2 contains another primitive foregut stage marker, HNF4α, along with HNF6 [2]. Interestingly, the biclustering results show that the pancreatic endodermal transcriptional machinery may not be favored at the DE stage by the FGF2 + BMP4 combination, although in our
hierarchical clustering results FGF2 + BMP4 combination clustered with the other conditions that gave a better DE signature. We also note that WNT3A and PI3KI combination with high activin increased
the expression of HNF4α and HNF6 and these conditions also gave a successful DE signature as seen from the hierarchical clustering. Thus our results indicate that WNT3A pathway can favor both early
and late markers like CER, HNF4α and HNF6. Also, WNT3A + PI3KI induced DE cells may be more capable of developing into later pancreatic lineages. While WNT3A and PI3KI have been used for DE induction
towards pancreatic maturation [3,5], the effect of co-induction has not been explored yet.
Figure 6. Robust subsets of co-regulated genes presented as a bipartite graph. We have identified high activin along with PI3K inhibition, or activin in combination with WNT3A, to work best to co-regulate the early endoderm marker CER and the late endoderm marker HNF6. The Group 2 TFs HNF4α and HNF6 are part of the network inducing NGN3 and PDX1, reminiscent of the pancreatic genotype, and are
favored by high activin with PI3KI and WNT3A.
The differentiation of hESCs into the endoderm lineages is carried out by the activation of different signaling pathways mimicking in vivo development. However, there is no consensus on which
induction method is the most desirable and whether combination of these could result in an endoderm with the best signature. Here, we have used a combination of experimental and mathematical
techniques to shed light on these concerns.
The DE signature differs under exogenous activation of different signaling pathways participating in endoderm commitment
Our experiments with different DE inducing conditions show that the DE potential of the differentiating hESCs is highly dependent on the method of DE induction. The major DE markers (CER, CXCR4,
FOXA2, SOX17) showed considerable variation when some of the pathways were activated above their basal levels.
All the pathways studied here are known to be important at the earlier stages of in vivo endoderm differentiation and have also been documented as necessary for in vitro differentiation [2,6,7,
31]. The common denominator in our studies is activin A which is an essential inducer of DE [2,3,24]. This is primarily because activin, being a member of the TGFβ family, mimics nodal signaling
which is proven to be necessary for endoderm development [4]. Activin has been shown to maintain pluripotency at low concentrations and to induce mesoderm and endoderm at high concentrations [25].
However, activin alone may not result in efficient endoderm induction [1]. Low PI3K signaling was found to be essential for efficient induction of DE from hESCs [24]. Our hierarchical clusters show that activin and PI3K inhibition in combination favor the up-regulation of a number of DE markers and constitute the minimal set of signaling pathways to be modulated for efficient DE induction. In fact, a number of
recent studies have identified the interplay between PI3K/Akt and Activin/Smad2,3 pathways and the resulting regulation of the gene transcription events necessary for early DE induction [23].
Among the DE markers, CER showed up-regulation on differentiation, and the highest up-regulation was achieved in the presence of FGF2, WNT and PI3KI treatments. Katoh et al. recently identified the
binding domains of several key signaling effectors of the activin and WNT pathways on the promoter regions of CER in hESCs [32]. According to their results, the key nodal effectors Smad3/Smad4 as
well as the WNT effectors beta-catenin and TCF/LEF transcriptional complex regulate the expression of the CER gene. In addition to high activin and WNT signaling, PI3K inhibition may be necessary to
enhance the effect of nodal signaling as Smad3/Smad4 complex is negatively regulated by Akt [23]. Exogenous FGF2 simultaneously activates the ERK pathway and maintains the expression of other key
regulators of differentiation [33]. However, BMP4 effectors Smad1/3 may compete with the activin pathway and thus reduce the up-regulation of CER, as substantiated by the consistent grouping of the
BMP4 dominant conditions in the hierarchical clustering with low CER as a common marker.
The response to the BMP4 pathway, however, was highly dependent on the context, namely the presence and absence of FGF2 which was a striking feature of the hierarchical clustering on the 15
conditions. BMP4 is typically known as an activin antagonist and high concentrations of BMP4 in the culture with high activin results in mesoderm fate [34-36]. At the same time, BMP4 alone results in
the extra-embryonic lineages [37]. The presence of FGF2 with BMP4 modulates the net response to the mesendoderm fate, which is an intermediate stage that can result in DE and mesoderm. Several recent
studies have demonstrated the use of this combination to promote endoderm formation [21,22,38]. FGF2 sustains the expression of Nanog (a pluripotency marker) and this sustained Nanog expression is
found to shift the outcome of BMP4 induced differentiation of hESCs towards mesendoderm [22]. However, prolonged use of FGF2 and BMP4 together may be detrimental for pancreatic differentiation, since
this combination has been shown to induce hepatic differentiation after the DE stage [30]. Also, BMP4 dominant clusters showed high expression of late endoderm markers HNF4α, HNF1β and GATA4. This
may indicate that BMP4 accelerates the differentiation to the mesendoderm phase and that, therefore, the overall dynamics may be faster in the BMP4 dominant case. It was striking, however, that the expression of HNF6, another important marker for late endoderm, was still lower in the BMP4 dominant case. Hence, hierarchical clustering alone was not sufficient to answer whether BMP4 addition could be
useful for late endoderm differentiation. Importantly, BMP4 dominant conditions gave low expression of markers from the robust biclusters. Thus the current analysis shows that BMP4 may not be a
suitable choice for endoderm induction.
WNT3A/β-catenin signaling has been shown to be important both for maintenance of pluripotency as well as induction of differentiation [25]. The WNT pathway is also found to be important in the
formation of primitive streak due to which it is often used in the very early stages of in vitro differentiation until the formation of mesendoderm [2]. Stabilization of β-catenin by canonical WNT
signaling is found to be responsible for differentiation by epithelial-mesenchymal transition;, however presence of Wnt after this stage supports mesoderm [36]. Also, FGF2 is found to synergistically
influence the WNT pathway [39]. WNT alongwith PI3KI was commonly present in both the groups identified by our hierarchical clustering. WNT was consistently found to be supportive to the activin +
FGF2 signaling assessed by the up-regulation of DE markers. Hence, WNT and PI3KI may be the essential pathway modulators necessary for endoderm differentiation.
Robust biclusters identify the necessary pathways for efficient endoderm differentiation to the pancreatic lineage
The robust biclusters identified by the biclustering + bootstrap analysis show the most important trends preserved under experimental variations. Consistent with this, CER, HNF6 and HNF4α belonged to the
robust clusters. As mentioned earlier, CER is an important target of the activin and WNT signaling pathways and HNF6 is a very early pancreatic progenitor marker taking part in the transcriptional
network activating pancreatic progenitors [32,40]. As seen from the Group 1 bicluster, FGF2 + WNT3A conditions favor CER and HNF6 while BMP4 limits their up-regulation. It is also found that the
stability of β–catenin is partly enhanced by PI3K signaling (activated by FGF2) [41] and hence this combination of high activin + FGF2 + WNT3A may work to control the expression of some endoderm
markers like CER and HNF6. At the same time, CER protein is a negative regulator of the Tgfb (activin, BMP4) pathway and up-regulation of CER is necessary to limit the activation of these pathways,
since inhibition of the Tgfb pathway was found to be necessary for efficient differentiation to the pancreatic progenitors after PDX1 and HNF6 expression [42]. However, external addition of WNT3A may
still be necessary since CER negatively regulates the WNT pathway [32].
Alternatively, the markers HNF4α and HNF6 which occur in Group 2 are co-regulated under FGF2 + BMP4, FGF2 + WNT3A + PI3KI action. These markers also occur in the MODY network for induction of
Neurogenin expressing cells which represents mature pancreatic lineage [40]. HNF6 occupies a predominant position in regulating the expression of HNF4α and other genes prior to PDX1 induction. A key
result identified by the bicluster was the consistent up-regulation of the late pancreatic markers HNF4α and HNF6 under WNT3A + PI3KI dominant conditions and studies by Nostro et al. have indicated
the necessity of WNT3A for induction of pancreatic progenitors [42]. CER, HNF6 combination was also up-regulated under WNT3A conditions and thus WNT3A addition was found to favor both DE markers as
well as late pancreatic endoderm markers supposedly showing similarity with in vivo pancreatic organogenesis. The presence of FGF2 and BMP4 lowers the expression of these markers and is consistent
with the inhibition of FGF2 and BMP4 at the later stages for inhibition of a hepatic fate and efficient pancreatic lineage selection [42]. The key signaling pathway interactions from the robust
biclusters are summarized in Figure 7.
Figure 7. Summary of the functional dependence of the co-regulated genes on the active signaling pathways of endoderm induction. CER and HNF6 are favoured by high activin with PI3KI, WNT3A and FGF2, while HNF4α and HNF6 are favoured by high activin with WNT3A and PI3KI. Combining the early and late stages, high activin with PI3KI and WNT3A together is an effective strategy for endoderm induction.
The focus of the current work was to achieve insights into the in vitro differentiation process of human embryonic stem cells to the endoderm stage using both experimental and mathematical
approaches. Our work has identified differences between the various protocols for endoderm induction. Essentially, high activin A with PI3K inhibition, or high activin A with FGF2 or WNT3A, serves well as an early DE inducer. Additionally, biclustering shows that the early and late endoderm markers are co-regulated under high activin and WNT3A. Thus, overall, high activin with PI3KI and WNT3A
together may serve better for in vitro differentiation of hESCs to the definitive endoderm and pancreatic endoderm lineages.
Experimental methods
Cell culture and treatment
hESC maintenance
H1 hESCs were placed on hESC-certified Matrigel coated wells and maintained with mTeSR1, with media change every day. Cells were passaged every 5 to 7 days by incubating in 1 mg/ml dispase for 5 minutes, followed by mechanically breaking the colonies and splitting at a 1:3–1:5 dilution. Cells were examined under the microscope every day, and colonies with observable differentiation were picked and removed before the media changes.
hESC differentiation to DE
H1 hESCs were allowed to grow to 60-70% confluency before the experiments were started. Once confluency was reached, differentiation was performed by adding DE induction media for 4 days with media change every day. Several induction conditions were chosen according to previously published studies [3,5-7]. All conditions were prepared in DMEM:F12 supplemented with B27 and 0.2% BSA with 100 ng/ml Activin A. Conditions involved the use of individual and all possible combinations of growth factors and molecules at the following concentrations: basic FGF (F) at 100 ng/ml, BMP4 (B) at 100 ng/ml, WNT3A (W) at 25 ng/ml and wortmannin (PI3K inhibitor, P) at 1 μM. This leads to 15 different experimental conditions.
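For clarity, the 15 conditions are simply all non-empty subsets of the four factors added on top of activin; the hypothetical helper below (not from the paper) enumerates them:

    from itertools import combinations

    factors = {"F": "FGF2 100 ng/ml", "B": "BMP4 100 ng/ml",
               "W": "WNT3A 25 ng/ml", "P": "wortmannin 1 uM"}

    # All non-empty subsets of the four factors: 2**4 - 1 = 15 conditions,
    # each applied on top of 100 ng/ml activin A in DMEM:F12 + B27 + 0.2% BSA.
    conditions = [subset
                  for r in range(1, len(factors) + 1)
                  for subset in combinations(sorted(factors), r)]

    print(len(conditions))                         # 15
    print(["+".join(c) for c in conditions[:5]])   # ['B', 'F', 'P', 'W', 'B+F']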
Measurement of Transcription Factor (TF) expression
After 4 days of DE induction, cells were lysed and RNA was extracted using the Nucleospin RNA II kit (Macherey Nagel) according to the manufacturer’s instructions. The sample absorbance at 280 nm and 260 nm was measured using a BioRad Smart Spec spectrophotometer to obtain RNA concentration and quality. Reverse transcription was performed using the ImProm II Promega reverse transcription kit following the manufacturer’s recommendation. qRT-PCR analysis was performed for endoderm and pancreatic markers using the primers listed in Additional file 3: Table S1.
Additional file 3. Transcription factors and primers list.docx
A total of 12 transcription factors were studied which included pluripotency marker OCT4, mesendoderm marker BRACHYURY, DE markers namely, CXCR4, SOX17, CER, FOXA2 and pancreatic progenitor markers
PTF1α, PDX1, GATA4, HNF1β, HNF4α and HNF6. GAPDH was selected as the housekeeping gene. Briefly, the fold change was calculated from the cycle times, C[T], after normalization with respect to the
control sample and housekeeping gene, GAPDH, as fold change = 2^(−ΔΔC[T]), where ΔΔC[T] = (C[T,target] − C[T,GAPDH])[sample] − (C[T,target] − C[T,GAPDH])[undifferentiated cells]. The control sample was chosen to be undifferentiated cells at day 0.
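A minimal sketch of this fold-change calculation (illustrative Python using the 2^(−ΔΔC[T]) relationship above; the cycle-time values below are made up):

    def fold_change(ct_target_sample, ct_gapdh_sample,
                    ct_target_control, ct_gapdh_control):
        """Relative expression by the delta-delta-Ct method: fold change = 2**(-ddCt)."""
        d_ct_sample = ct_target_sample - ct_gapdh_sample
        d_ct_control = ct_target_control - ct_gapdh_control
        dd_ct = d_ct_sample - d_ct_control
        return 2 ** (-dd_ct)

    # Hypothetical cycle times: a DE marker in a differentiated sample vs undifferentiated day-0 cells.
    print(fold_change(ct_target_sample=24.0, ct_gapdh_sample=18.0,
                      ct_target_control=30.0, ct_gapdh_control=18.5))   # about 45-fold up-regulation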
TF expression profiles
The TF expression profiles can be grouped together to form an expression matrix with the rows corresponding to the measurements of interest (like the relative mRNA concentrations) and the columns
corresponding to the experimental conditions or samples. Thus, each element in the matrix refers to the intensity of the particular measurement in a given sample [43]. Many of the genes are closely
regulated under a subset of conditions indicating that they are probably under the influence of the same regulatory network under these conditions [12]. The expression data is helpful in identifying
such sub groups of transcription factors and conditions. However, expression data matrices are often complex and further computational analysis is required to mine important connections from such
large expression matrices.
Mathematical analysis
Hierarchical clustering
Hierarchical clustering partitions the data into clusters through an iterative process, where similarity or dissimilarity between every pair of variables in the data matrix is calculated using an
appropriate distance measure, followed by grouping the variables in close proximity using a linkage function. We used the built-in Matlab functions to perform the analysis using various distance measures (e.g. Euclidean, city block) on the mean-centered and variance-scaled expression matrix. The results were represented as a clustergram, i.e. the linkage tree and the corresponding heat
map. We tested the tree generated using different linkage measures after normalization of the mean expression matrix and found all the trees to be very similar with the cophenetic correlation
coefficient greater than 0.9.
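The clustergram analysis described above can be approximated outside Matlab as follows (an illustrative SciPy sketch on stand-in data, not the original code), including the cophenetic correlation check:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, cophenet, dendrogram
    from scipy.spatial.distance import pdist

    # Normalized 12 x 15 TF-by-condition matrix (random stand-in for the real data).
    X_norm = np.random.default_rng(2).standard_normal((12, 15))

    # Cluster the TFs (rows): pairwise distances, then an average-linkage tree.
    dists = pdist(X_norm, metric="euclidean")
    tree = linkage(dists, method="average")

    # Cophenetic correlation coefficient: how faithfully the tree preserves the original distances.
    coph_corr, _ = cophenet(tree, dists)
    print(round(coph_corr, 3))

    # The conditions (columns) can be clustered the same way on X_norm.T;
    # dendrogram(tree) draws the linkage tree for a clustergram-style display.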
Biclustering algorithm
Biclustering can be described as two-dimensional clustering, where a subset of genes exhibiting a similar trend across a subset of conditions is identified. Such subsets can be considered to participate in a similar regulatory mechanism, hence constituting a regulatory network. In order to identify sets of TFs expressing coherent trends under specific sets of conditions, we analyzed our TF-condition matrix, X, using the Sequential Evolutionary Biclustering (SEBI) algorithm developed by Divina et al. [15]. The SEBI algorithm identifies coherent biclusters sequentially with the help of a number of metrics as described below. For a bicluster B(I,J) ⊆ X, containing elements e[ij] for i ∈ I, j ∈ J, the residue r[ij] of each element in the bicluster is defined as r[ij] = e[ij] − e[iJ] − e[Ij] + e[IJ]. The gene base e[iJ] is the mean of the entries in row i of the bicluster, e[iJ] = (1/|J|) Σ_{j∈J} e[ij], with |I| and |J| representing the total number of genes and conditions respectively in the bicluster B. The condition base e[Ij] is the mean of the entries in column j of the bicluster, e[Ij] = (1/|I|) Σ_{i∈I} e[ij]. The base of the bicluster, e[IJ], is the mean of all entries in the bicluster, e[IJ] = (1/(|I| |J|)) Σ_{i∈I, j∈J} e[ij]. The residue, therefore, indicates the degree of coherence of an element with the other elements in the bicluster. Further, the mean squared residue of all the elements in the bicluster is defined as MSR(B) = (1/(|I| |J|)) Σ_{i∈I, j∈J} r[ij]^2. It is possible for biclusters to have nearly constant expression values and hence a low residue. To avoid such trivial biclusters, the variance metric is introduced: the (row) variance of a bicluster, var[IJ], is defined as var[IJ] = (1/(|I| |J|)) Σ_{i∈I, j∈J} (e[ij] − e[iJ])^2, which captures fluctuating trends. Finally, we are interested in biclusters with as many genes and conditions as possible, i.e. having large volume. The basic premise of the analysis is that the genes belonging to a bicluster are under the influence of a common regulatory pathway and hence show coherence in their expression trends. However, it is possible for genes to participate in multiple regulatory pathways; to capture this, we allow a certain degree of overlap amongst the biclusters discovered sequentially by the SEBI algorithm using a penalty term.
Thus, our final goal is to find biclusters of maximum size, with mean squared residue lower than a given threshold (δ), with relatively high row variance, and with a low level of overlap among the biclusters. We represent this as an optimization problem whose objective (fitness) function, taken from [15], combines these criteria: it rewards large bicluster volume and high row variance, penalizes candidates whose mean squared residue exceeds the threshold, and adds a penalty for overlap with previously reported biclusters. In this function, B(I,J) is an individual solution, m_residue is the mean squared residue of the bicluster B, and row_variance(B) is the row variance of B. The overlap penalty is built from element weights w[p], where w[p] increases with |Cov(e[ij])|, the number of previous biclusters containing e[ij], and N and M are the number of rows and columns of the expression matrix, respectively. The use of the penalty term biases the search against elements which have already appeared in previous biclusters, thus reducing the overlap amongst the biclusters. The weight w[d] is a function of the residue threshold (its exact form is given in [15]); δ is the threshold mean squared residue, and biclusters with mean squared residue above δ are discarded.
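The residue-based quantities defined above can be sketched in a few lines (illustrative Python; the complete SEBI fitness with its weighting and penalty terms is given in [15]):

    import numpy as np

    def bicluster_scores(X, rows, cols):
        """Mean squared residue and row variance of the submatrix X[rows, cols] (Cheng-Church style)."""
        B = X[np.ix_(rows, cols)]
        row_means = B.mean(axis=1, keepdims=True)    # gene base e_iJ
        col_means = B.mean(axis=0, keepdims=True)    # condition base e_Ij
        base = B.mean()                              # bicluster base e_IJ
        residue = B - row_means - col_means + base   # residue r_ij
        msr = np.mean(residue ** 2)                  # mean squared residue
        row_var = np.mean((B - row_means) ** 2)      # row variance (rejects flat biclusters)
        return msr, row_var

    X = np.random.default_rng(3).standard_normal((12, 15))   # stand-in expression matrix
    msr, row_var = bicluster_scores(X, rows=[0, 2, 5], cols=[1, 3, 4, 7, 9])
    print(msr <= 1.5, row_var)    # e.g. test MSR against the threshold delta = 1.5 used in this study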
Solution procedure
The current optimization formulation has been identified to be NP-hard and has been shown to be handled effectively by evolutionary techniques such as the Genetic Algorithm (GA) [15]. GA is an iterative
search process which looks for the fittest member of a population (candidate solutions) using the biological principle of evolution under mutation and natural selection [44]. In a typical GA, the
optimization variables are encoded as a sequence of binary bits and these sequences are concatenated to form the chromosome. Thus, for the present formulation, each chromosome consists of I binary
bits for genes and J binary bits for conditions forming the I+J binary bits of the chromosome. The binary variables, 0 and 1 represent the absence or presence of a gene (or condition) respectively.
Thus, a GA population is made of chromosomes with each chromosome representing a candidate bicluster.
Each chromosome has a metric associated with it called the fitness which we wish to maximize. The GA algorithm is initiated by randomly initializing a population of chromosomes (i.e. biclusters). The
population is continuously evolved in every generation by the operators: reproduction, crossover and mutation. At the end of every generation, individuals for the next one are selected on the basis
of their fitness values. This cycle of evolution is continued until a predetermined termination criterion is reached. For the present case, we continued the simulations for a maximum number of
generations until no further change in the population was observed. The biclustering formulation was coded in Fortran 90, and the Genetic Algorithm driver (version 1.7a) was obtained from David Carroll,
CU Aerospace, Urbana, IL. Computations were performed on INTEL (R) Core (TM) 2 Quad CPU (Q8400 @ 2.66GHz).
Determination of robust biclusters
The inherent noise in biological systems makes it difficult to draw meaningful conclusions from a deterministic analysis. The formulation proposed above is based on the mean gene expression data
which possibly reduces confidence in the identified bicluster. Here we have adopted the bootstrap technique to obtain robust biclusters from noisy experimental data. Bootstrap is a statistical technique to generate a large data set from a small number of experimental replicates using a sampling-with-replacement scheme. The present formulation systematically re-samples the original experimental data set using a Monte Carlo algorithm to generate the artificial data sets. The optimization formulation of the biclustering problem is then solved at each of the bootstrap data points to generate a family of alternate biclusters. The final goal is to identify the most repeated biclusters in the entire array, based on the justification that such a bicluster will be relatively insensitive to experimental noise and hence is robust. To this end, the number of repeats of a particular gene-condition combination is counted using the quicksort algorithm (O(N log N)). Our analysis showed that the complete bicluster was typically not repeated significantly; instead, only subsets of the biclusters were repeated a sufficient number of times. For identification of robust biclusters, we set the threshold frequency of repeats at 500 out of every 1000 alternate biclusters. The most repeated subsets are thereby concluded to be robust under experimental noise. The work flow for the
entire analysis is depicted in Figure 1.
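A compact sketch of the resample-and-count idea (illustrative Python; the actual study ran the full GA-based biclustering on each of 1000 bootstrap datasets, and the search step below is only a placeholder):

    import numpy as np
    from collections import Counter
    from itertools import combinations

    def bootstrap_dataset(replicates, rng):
        """Resample replicate indices with replacement and average them into one pseudo-dataset.
        replicates has shape (n_replicates, n_genes, n_conditions)."""
        idx = rng.integers(0, replicates.shape[0], size=replicates.shape[0])
        return replicates[idx].mean(axis=0)

    def find_bicluster(X):
        """Placeholder for the evolutionary (SEBI/GA) biclustering step."""
        raise NotImplementedError("replace with the GA-based bicluster search")

    def count_subsets(biclusters, size=2):
        """Count how often each small (gene pair, condition) combination recurs across bootstraps."""
        counts = Counter()
        for rows, cols in biclusters:
            for gene_pair in combinations(sorted(rows), size):
                for cond in cols:
                    counts[(gene_pair, cond)] += 1
        return counts

    rng = np.random.default_rng(4)
    reps = rng.standard_normal((3, 12, 15))    # three biological replicates (stand-in data)
    pseudo = bootstrap_dataset(reps, rng)      # one of the 1000 pseudo-datasets
    print(pseudo.shape)                        # (12, 15)
    # Subsets appearing in at least 500 of the 1000 bootstrap biclusters are reported as robust.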
Authors’ contributions
Conceived and designed experiments: IB MJ ASG SM. Performed the experiments: MJ. Conducted mathematical analysis: SM, XZ, LZ. Contributed materials/analysis tools: IB. Drafted the manuscript: SM IB.
All authors read and approved the final manuscript.
We would like to thank Dr. Ira Fox from the University of Pittsburgh for his generous gift of H1 hESCs.
| {"url":"http://www.biomedcentral.com/1752-0509/6/154","timestamp":"2014-04-16T10:57:18Z","content_type":null,"content_length":"170497","record_id":"<urn:uuid:306935f2-588f-4ced-b649-12f3b65e8f93>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
variance of $1/(X+1)$ where $X$ is Poisson-distributed with parameter $\lambda$
What is the variance of $1/(X+1)$, where $X$ is Poisson-distributed with parameter $\lambda$? The series for the second moment looks horrible:
$E\left({1\over (X+1)^2}\right)=\sum_{k=1}^{\infty}\frac{1}{k^{2}}\frac{\lambda^{k-1}e^{-\lambda}}{(k-1)!}$
Is there an easy way to do it?
This is very hard to read because: (1) You seem to have left out the squares when typing your equation and (2) There is a great deal of discussion of your emotional state, and little of the
problem. How about just "I am trying to compute the variance of ... This means I need to evaluate the sum ... Does anyone know how to do this?" – David Speyer Jan 27 '10 at 20:47
I've cleaned up the equations a bit. – vilvarin Jan 27 '10 at 20:49
3 Answers
Sorry, I gave a moronic answer before. Let me try to give a better one.
There should be no expression for $f(\lambda) := \sum_{k \geq 1} \lambda^k/(k^2 k!)$ in elementary functions. If there were, then $g(\lambda) = \lambda f'(\lambda) = \sum_{k \geq 1} \lambda^{k}/(k \cdot k!)$ would also be elementary. But $g(\lambda)=\int_0^{\lambda} \frac{e^t-1}{t} dt$, and $e^t/t$ is a standard example of a function without an elementary antiderivative.
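For completeness, here is a short derivation sketch added to this write-up (not part of the original thread); it only uses the shift of index and the function $g$ from the answer above:
$E\left[\frac{1}{X+1}\right]=\sum_{k\ge 0}\frac{1}{k+1}\,\frac{\lambda^k e^{-\lambda}}{k!}=\frac{e^{-\lambda}}{\lambda}\sum_{k\ge 1}\frac{\lambda^{k}}{k!}=\frac{1-e^{-\lambda}}{\lambda},$
$E\left[\frac{1}{(X+1)^2}\right]=\sum_{k\ge 0}\frac{1}{(k+1)^2}\,\frac{\lambda^k e^{-\lambda}}{k!}=\frac{e^{-\lambda}}{\lambda}\sum_{k\ge 1}\frac{\lambda^{k}}{k\cdot k!}=\frac{e^{-\lambda}}{\lambda}\,g(\lambda),$
so that
$\operatorname{Var}\left[\frac{1}{X+1}\right]=\frac{e^{-\lambda}}{\lambda}\,g(\lambda)-\left(\frac{1-e^{-\lambda}}{\lambda}\right)^{2},\qquad g(\lambda)=\int_0^{\lambda}\frac{e^t-1}{t}\,dt,$
and the integral $g(\lambda)$ is exactly the non-elementary piece discussed above.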
The previous answer has vanished. To tell the truth, I haven't understood it completely :(
Thanks, David. The original problem I had is to compute the variance of Y/(X+1), where Y is Bernoulli-distributed with parameter p and X is Poisson-distributed. But I don't think it will change anything, so I will just leave the sum. (Vilvarin) | {"url":"http://mathoverflow.net/questions/13181/variance-of-1-x1-where-x-is-poisson-distributed-with-parameter-lambda?sort=oldest","timestamp":"2014-04-19T17:44:39Z","content_type":null,"content_length":"58147","record_id":"<urn:uuid:74489071-6ae8-4d50-895d-2c325748c61e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
linear programming
OK, I have attempted 2 linear programming questions and I would like to know if I have got all the equations correct for the first question; some help on the second one would be much appreciated.
A bank is attempting to determine where its assets should be invested during the current year. At present $500 million is available for investment in bonds, home loans, car loans, and personal
loans. The annual rate of return on each type of investment is known to be: bonds, 7%; home loans, 8%; car loans, 12%; personal loans, 11%. In order to ensure that the bank’s portfolio is not too
risky, the bank’s investment manager has placed the following three restrictions on the bank’s portfolio:
(a) The amount invested in personal loans cannot exceed the amount invested in bonds.
(b) The amount invested in home loans cannot exceed the amount invested in car loans.
(c) No more than 25% of the total amount invested may be in personal loans.
The bank’s objective is to maximize the annual return on its investment portfolio. Formulate an LP (in standard form) that will enable the bank to meet this goal. Assume interest is calculated annually.
My Answer:
Let b=bonds h=home loans c=car loans and p=personal loans
maximise z=0.07b+0.08h+0.12c+0.11p
subject to:
b+h+c+p<=500e6
p<=b
h<=c
p<=0.25*(b+h+c+p)
b, h, c, p>=0
(note that 500e6 is 500,000,000 and p<=b means that p is less than or equal to b)
Bulldust Inc. blends silicon and nitrogen to produce two types of fertiliser. Fertiliser 1 must be at least 40% nitrogen and sells for $20/kg. Fertiliser 2 must be at least 70% silicon and sells
for $18/kg. Bulldust can purchase up to 800 kg of nitrogen at $10/kg and up to 1000 kg of silicon at $8/kg. Assuming that all fertiliser produced can be sold, formulate an LP to help Bulldust
maximize profit.
HELP please =P
I'll give it a shot.
$Let \ x \ = \ no.\ of \ kg \ of \ fertilizer \ 1$
$Let \ y \ = \ no. \ of \ kg \ of \ fertilizer \ 2$
$.40x + .30y \leq800$
$.60x + .70y \leq1000$
Maximum profit=20x+18y - overhead
Overhead = cost of purchasing nitrogen (800 kg @ $10/kg) and silicon (1000kg @ $8/kg) which totals $16,000 in overhead.
Last edited by masters; May 24th 2008 at 01:18 PM.
Ok, I think you are on the right track... but where did you get .30y and .60x from? Are you just assuming that all silicon and nitrogen is used up?
Also, what about my first answer?
You have assumed that 40/60 and 30/70 mixes are those that maximise profit, if this is true you need to justify/show/prove it.
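Going back to the first (bank) formulation: as a quick numerical cross-check, not part of the original thread, one can hand the objective and the restrictions (a)-(c) from the problem statement to an off-the-shelf LP solver. The snippet below uses scipy and assumes at most the full $500 million is invested; maximization is done by minimizing the negated objective.

from scipy.optimize import linprog

# variables: [b, h, c, p] = bonds, home loans, car loans, personal loans
c_obj = [-0.07, -0.08, -0.12, -0.11]          # negate returns to maximize
A_ub = [
    [1, 1, 1, 1],                              # b + h + c + p <= 500e6
    [-1, 0, 0, 1],                             # p <= b
    [0, 1, -1, 0],                             # h <= c
    [-0.25, -0.25, -0.25, 0.75],               # p <= 0.25 * (b + h + c + p)
]
b_ub = [500e6, 0, 0, 0]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print("allocation (b, h, c, p):", res.x)
print("annual return:", -res.fun)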
| {"url":"http://mathhelpforum.com/pre-calculus/39501-linear-programming.html","timestamp":"2014-04-20T00:22:55Z","content_type":null,"content_length":"43144","record_id":"<urn:uuid:1ea2bdb2-3963-4bdb-a833-56e000ef58f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Express answers in simplest exact form. What is the area of a sector with measure of arc equal to 90° and radius equal to 1 foot?
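The accepted answer is not shown in this excerpt, but the computation is short: a 90° arc is a quarter of the circle, so
$A=\frac{90^\circ}{360^\circ}\,\pi r^{2}=\frac{1}{4}\,\pi(1\ \text{ft})^{2}=\frac{\pi}{4}\ \text{square feet} \approx 0.785\ \text{ft}^2.$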
| {"url":"http://openstudy.com/updates/505e25f2e4b02e1394101b98","timestamp":"2014-04-21T15:47:35Z","content_type":null,"content_length":"44067","record_id":"<urn:uuid:1ea2bdb2-3963-4bdb-a833-56e000ef58f1>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/CC-MAIN-20140423032002-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
God Plays Dice
Dreading the upcoming work week, because it's Sunday night and you still don't make enough money to buy fancy booze? Get drunk not broke sorts drinks by alcohol-to-cost ratio. See, math is good for something. There's also Get drunk not fat (alcohol to calories).
And finally there's Brian Gawalt's Get drunk but neither broke nor fat. This page finds the three drinks such that every other drink is either more expensive or more caloric per ounce of alcohol.
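That "neither broke nor fat" page is just a Pareto-frontier computation. A rough sketch of the idea (the drink list below is made up for illustration, not Gawalt's data):

drinks = {
    # name: (price_usd, calories, ounces_of_alcohol)
    "light beer": (1.50, 100, 0.6),
    "red wine":   (4.00, 125, 0.7),
    "well vodka": (3.00,  95, 0.6),
    "craft IPA":  (6.00, 210, 0.8),
}

def costs(entry):
    price, cal, alc = entry
    return price / alc, cal / alc  # ($/oz alcohol, kcal/oz alcohol)

def pareto_front(drinks):
    # keep drinks that no other drink beats (or ties) on both dollars and calories per ounce of alcohol
    front = []
    for name, entry in drinks.items():
        p, c = costs(entry)
        dominated = any(
            costs(other)[0] <= p and costs(other)[1] <= c and costs(other) != (p, c)
            for other_name, other in drinks.items() if other_name != name
        )
        if not dominated:
            front.append(name)
    return front

print(pareto_front(drinks))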
From Jones, Game Theory: Mathematical Models of Conflict (link goes to Google Books), in the preface:
"Some teachers may be displeased with me for including fairly detailed solutions to the problems, but I remain unrepentant [...] In my view any author of a mathematical textbook should be required by
the editor to produce detailed solutions for all the problems set, and these should be included where space permits."
By the way, Jones was writing this in 1979; presumably if space does not permit, in the present day solutions can be posted on the author's web site. (This will pose a problem if websites move,
though; perhaps an arXiv-like electronic repository of solutions would be a good idea?) A reviewer at Amazon points out that the inclusion of solutions to problems might be an issue for those
choosing to assign the textbook in a course where homework is collected and graded. Jones has a PhD from Cambridge and as far I can tell was at Imperial College, London at the time of writing; the
willingness to include solutions may have something to do with the difference between the British and American educational systems.
I've seen frustration about the lack of provided solutions in textbooks on the part of my more conscientious students. (This isn't with regard to this text - I'm not currently teaching game theory -
but with regard to other texts I've used in other courses.) They want to do as many problems as they can, which is good. This practice of leaving out the solutions is perhaps aimed at the median
student - in my experience the median student does all of the homework problems but would never consider trying anything that's not explicitly assigned. (And although I don't know for sure, the
student who goes out of their way to get a bootleg solutions manual is probably not the conscientious student I'm referring to.)
From tomorrow's New York Times: an article on the growing prevalence of Asian-Hispanic fusion food in California. It's part of a series on the Census. Orange County, California is 17.87% Asian and
34.13% Hispanic -- so the majority of the population, 52.00%, is either Asian or Hispanic. Not surprisingly there's a guy there with a food truck named Dos Chinos, which serves such food as
sriracha-tapatito-tamarind cheesecake.
This is accompanied by a little map showing the sum of Asian and Hispanic population in any given county. (Well, it might be the sum; to be honest I don't know, as in the Census "Hispanic" vs.
"non-Hispanic" is orthogonal to "race", which takes values White, Black, Asian, American Indian, and Native Hawaiian.) In many places in the southern half of the state it's over 50%.
But wouldn't the relevant statistic for this article be not (0.1787) + (0.3413), but 2*(0.1787)*(0.3413) = 0.1219, the probability that if two random Orange Country residents run into each other, one
of them will be Asian and the other will be Hispanic? Fresno County, for example, is 50.3% Hispanic and 9.6% Asian -- that's 59.9% "Hispanic or Asian" -- but there wouldn't seem to be quite as many
opportunities for such fusion as the probability of a Hispanic-Asian pair in Fresno County is only 2*(50.3%)*(9.6%) = 9.7%.
(Except that 97% of Fresno County's Asians are Hispanic, according to the frustratingly hard-to-navigate American FactFinder. So maybe some "fusion" has already taken place.)
James Tanton, of the St. Mark's Math Institute at St. Mark's School in Southborough, Massachusetts, makes excellent short mathematical videos.
He and his students also folded a very long piece of paper 13 times -- that is, they created 2^13-ply toilet paper. This is a world record. (There's a bit of a question about whether they actually
got 13 folds or just 12 -- the 13th fold has to be held in place. 12 has been done before.) You can read about it in a local newspaper, or see a video on Youtube. They did it in the "Infinite
Corridor" of MIT, which is not infinite but is very long, about 800 feet. On a Sunday, apparently, and on what must be the third or fourth floor. They got access thanks to OrigaMIT, MIT's origami
club. I am only very mildly surprised that such a club exists.
This whole thing may be the only known good use of single-ply toilet paper.
So you all^1 know that if I have a biased coin with probability p of coming up heads, and I flip it n times, then the expected number of heads is np and the variance is npq. That's the binomial
distribution. Alternatively, if I have an urn containing pN white balls and qN black balls, with p + q = 1, and I draw n balls with replacement then the distribution of the number of white balls has
that mean and variance.
Some of you know that if I sample without replacement from that same urn -- that is, if I take balls out and don't put them back -- then the expected number of white balls is np and the variance is
npq(N-n)/(N-1). The distribution of the number of white balls is the hypergeometric distribution.
So it makes sense, I think, to think of (N-n)/(N-1) as a "correction factor" for going from sampling with replacement to sampling without replacement. This is the approach taken in Freedman, Pisani,
and Purves
How do you prove this? On this, FPP are silent. The proof I know -- see, for example, Pitman -- is to write
S[n] = I[1] + ... + I[n]
where I[k] is 1 if the kth draw gives a white ball and 0 otherwise. Then E(I[k]) is just the probability of getting a white ball on the kth draw, and so it's equal to p by symmetry. By linearity of
expectation E(S[n]) = np. To get the variance, it's enough to get E(S[n]^2). And by expanding out that sum of indicators there, you get
S[n]^2 = (I[1]^2 + ... + I[n]^2) + (I[1] I[2] + I[1] I[3] + ... + I[n-1] I[n]).
There are n terms inside the first set of parentheses, and n(n-1) inside the second set, which includes every pair I[j] I[k] where j and k aren't equal. By linearity of expectation and symmetry,
E(S[n]^2) = nE(I[1]) + n(n-1)E(I[1] I[2]).
The first term, we already know, is np. The second term is n(n-1) times the probability that both the first and second draws yield white balls. The first draw yields a white ball with probability p.
For the second draw there are N-1 balls left, of which pN-1 are white, so that draw yields a white ball with probability (pN-1)/(N-1). The probability is the product of these. Do the algebra, let the
dust settle, and you get the formula I claimed.
But this doesn't explain things in terms of the correction factor. It doesn't refer back to the binomial distribution at all! But in the limit where your sample is small compared to your population,
sampling without replacement and sampling with replacement are the same! So can we use this somehow? Let's try to guess the correction factor without writing down any random variables. We'll write
Variance without replacement = f(N,n) npq
where n is the sample size and N is the population size, and think about what we know about f(N,n)
First, f(N,1) = 1. If you have a sample of size 1, sampling with and without replacement are actually the same thing.
Second, f(N,N) = 0. If your sample is the entire population, you always get the same result.
But most important is that if we sample without replacement, and take samples of size n or of size N-n, we should get the same variance! Taking a sample of size N-n is the same as taking a sample of
size n and deciding to take all the other balls instead. So for each sample of size n with w white balls, there's a corresponding sample of size N-n with pN-w white balls. The distributions of
numbers of white balls are mirror images of each other, so they have the same variance. So you get
nf(N,n)pq = (N-n)f(N, N-n)pq.
Of course the pq factors cancel. For ease of notation, let g(x) = f(N,x). Then we need to find some function g such that g(1) = 1, g(N)=0, and ng(n) = (N-n)g(N-n). Letting n = 1 you get g(1) = (N-1)g
(N-1), so g(N-1) = 1/(N-1). The three values of g that we have so far are consistent with the guess that g is linear. So let's assume it is -- why should it be anything more complicated? And that
gives you the formula. This strikes me as the Street-Fighting Mathematics approach to this problem.
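If you don't trust the guess, a quick simulation (not in the original post) of drawing without replacement reproduces the claimed mean and variance:

import random

N, n, p = 50, 12, 0.4          # population size, sample size, fraction of white balls
urn = [1] * int(p * N) + [0] * (N - int(p * N))
q = 1 - p

trials = 100_000
counts = [sum(random.sample(urn, n)) for _ in range(trials)]  # sample() draws without replacement
mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials

print("empirical mean", mean, "vs np           =", n * p)
print("empirical var ", var, "vs npq(N-n)/(N-1) =", n * p * q * (N - n) / (N - 1))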
Question: Is there a way to rigorize this "guess" -- some functional equation I'm not seeing, for example?
1. I use "all" in the mathematician's sense. This means I wish you knew this, or I think you should know it. Some of you probably don't. That's okay. | {"url":"http://godplaysdice.blogspot.com/2011_04_01_archive.html","timestamp":"2014-04-18T18:10:53Z","content_type":null,"content_length":"77205","record_id":"<urn:uuid:d8140470-adda-4629-9bbc-7fc371e28dc4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Non-linear recursive functions
Date: Jan 30, 2013 4:00 PM
Author: Richard Clark
Subject: Re: Non-linear recursive functions
> It's a two dimensional system, so you really should test all the
> points in the plane, but in computational terms it seems we just need
> a lattice of points reaching from say (-3,-3) to (+3,+3).
> I've roughed this out just now: there is a tear drop shape of stable
> positions in the lower right quadrant.
> Points orbit this teardrop shape concentrically, or at least it
> appears that they will do this. Because the shape is very simple there
> is no need to get too much resolution. So for instance incrementing by
> 0.1 gives some neat results without a lot of data points; about 3600
> test points in this scenario. There is an xy chart available in excel
> so you may be able to pull this off. Once you have things setup you
> can modify the function easily, but then when you start getting
> something fractal or so and you want higher res I suspect you will
> want a different platform to do your computations on. There are an
> overwhelming number of choices. I found libGd and have been using it
> for most of my graphics. I use C++, but that's a tough learning curve.
> The recursive stuff is great. Keep going.
> I am open to being wrong. It's pretty easy to get a few invisible bugs
> going. But I'm pretty sure it's as I describe: like a flower petal a
> little ways out from the origin.
> - Tim
The teardrop shape is centred at the stationary point (1,1) and points to the stationary point at (2,2). Points move concentrically around this teardrop shape (with it becoming more pronounced the further we move out) until we get to about (1.999,1.999), when it goes round the teardrop 6 times and then shoots off to the left. Between (1.999,1.999) and (2,2), similar behaviour is displayed, but we can't predict how many times the point will go round before shooting off.
Points coming in from the top right go straight across, dipping towards (2,2) as they pass. If we start low enough they go in a horseshoe below the teardrop.
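For readers without Excel handy, the lattice experiment is only a few lines in any language. The map below is a placeholder (the thread's actual recursion isn't quoted in this excerpt), so it will not reproduce the teardrop; it only shows the mechanics of classifying starting points on a grid from (-3,-3) to (+3,+3).

def f(x, y):
    # stand-in 2D map -- swap in the real recursion from the thread
    return y, x * y - x + 1

def classify(x, y, steps=500, bound=1e6):
    for _ in range(steps):
        x, y = f(x, y)
        if not (abs(x) < bound and abs(y) < bound):  # also catches inf/nan
            return "escapes"
    return "bounded"

for i in range(-30, 31):
    row = "".join("#" if classify(i / 10, j / 10) == "bounded" else "." for j in range(-30, 31))
    print(row)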
It's much easier to do in Excel than in a language like C++ because we can put the formulas in and then just drag them down, instead of setting up a loop
and plotting the results. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8193520","timestamp":"2014-04-19T10:46:42Z","content_type":null,"content_length":"3496","record_id":"<urn:uuid:a069c24b-4092-40fb-80cc-66bab0617f3a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
ZX BASIC:FSqrt.bas
From BorielWiki
ZX BASIC uses the ZX Spectrum ROM routine to calculate many of the floating point math functions. Unfortunately, some of the functions are notoriously slow. Square root being one of them - there
wasn't enough space in the original ROM design to do a good routine, so they cheated. They calculate x^(0.5) instead, and roll straight into the exponent routine. It turns out that though this works,
it's exceptionally slow.
This function uses the Newton Raphson method. It produces exactly the same full floating point result (it even uses the ROM calculator), but it does it about six times faster. If you have a program
that's calculating lots of square roots, this will make a big difference. However, note that integer square roots are faster still, and if you are games programming, accuracy might not be so
critical. See iSqrt.bas for details.
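The assembly below is easier to follow with the same idea written out in a high-level language first. This Python sketch mirrors the approach (take the absolute value, halve the exponent for a first guess, then iterate x <- (x + r/x)/2), though the ROM routine works on the Spectrum's 5-byte floats rather than IEEE doubles:

import math

def fsqrt(r: float) -> float:
    if r == 0.0:
        return 0.0
    r = abs(r)                      # the routine below also strips the sign bit
    m, e = math.frexp(r)            # r = m * 2**e with 0.5 <= m < 1
    x = math.ldexp(m, e // 2)       # crude first guess: halve the exponent
    for _ in range(20):             # a handful of steps reaches full precision
        x = 0.5 * (x + r / x)       # Newton-Raphson step for f(x) = x*x - r
    return x

print(fsqrt(2.0), math.sqrt(2.0))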
' Fast floating Point Square Root Function
' Adapted and modified for Boriel's ZX BASIC
' By Britlion
FUNCTION FASTCALL fSqrt (radicand AS FLOAT) AS FLOAT
; FLOATS arrive in A ED CB
;A is the exponent.
AND A ; Test for zero argument
RET Z ; Return with zero.
;Strictly we should test the number for being negative and quit if it is.
;But let's assume we like imaginary numbers, hmm?
; If you'd rather break it change to a jump to an error below.
;BIT 7,E ; Test the bit.
;JR NZ,REPORT ; back to REPORT_A
; 'Invalid argument'
RES 7,E ; Now it's a positive number, no matter what.
call __FPSTACK_PUSH ; Okay, we put it on the calc stack. Stack contains ABS(x)
; Halve the exponent to achieve a good guess (accurate with .25, 16, 64 etc.)
; Remember, A is the exponent.
XOR $80 ; toggle sign of exponent
SRA A ; shift right, bit 7 unchanged.
INC A ;
JR Z,ASIS ; forward with say .25 -> .5
JP P,ASIS ; leave increment if value > .5
DEC A ; restore to shift only.
ASIS: XOR $80 ; restore sign.
call __FPSTACK_PUSH ; Okay, now we put the guess on the stack
rst 28h ; ROM CALC ;;guess,x
DEFB $C3 ;;st-mem-3
DEFB $02 ;;delete
SQRLOOP: DEFB $31 ;;duplicate
DEFB $E3 ;;get-mem-3
DEFB $C4 ;;st-mem-4
DEFB $05 ;;div
DEFB $E3 ;;get-mem-3
DEFB $0F ;;addition
DEFB $A2 ;;stk-half
DEFB $04 ;;multiply
DEFB $C3 ;;st-mem-3
DEFB $E4 ;;get-mem-4
DEFB $03 ;;subtract
DEFB $2A ;;abs
DEFB $37 ;;greater-0
DEFB $00 ;;jump-true
DEFB SQRLOOP - $ ;;to sqrloop
DEFB $02 ;;delete
DEFB $E3 ;;get-mem-3
DEFB $38 ;;end-calc SQR x.
jp __FPSTACK_POP | {"url":"http://www.boriel.com/wiki/en/index.php/ZX_BASIC:FSqrt.bas","timestamp":"2014-04-16T13:03:25Z","content_type":null,"content_length":"21809","record_id":"<urn:uuid:11b04b36-0157-41e6-ba64-30afbbfed0aa>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
Golden Isles, FL Math Tutor
Find a Golden Isles, FL Math Tutor
...I know how to enable students to master accounting topics. I found economics very fascinating. In the words of my economics professor: "In economics you must think with both parts of your
brain" was a challenge to me.
9 Subjects: including precalculus, English, accounting, reading
...With the experience I have teaching Algebra 2, in high school and also one on one with my students, I am able to see exactly where the student is at. I am also able to show simple ways for
students to understand the material. I have developed techniques and methods that facilitate learning Algebra.
48 Subjects: including precalculus, chemistry, elementary (k-6th), physics
...Regarding Group Rates, I will not endeavor to teach a group unless I am certain that EVERY STUDENT can successfully learn the subject. This is dependent not only on my ability to relate the
information effectively (in which I am most confident), but also on the ability of every student to cooper...
14 Subjects: including prealgebra, algebra 1, algebra 2, study skills
I am experienced as both an elementary teacher and middle school math teacher. For thirty-six years I taught K-6 in self contained rooms and as a math specialist. I also was a curriculum coach
for the same public school district in Pittsburgh, Pennsylvania.
7 Subjects: including algebra 1, geometry, prealgebra, reading
...I work hard to make sure all of my students understand the skill they are learning and feel confident with the skills they are learning.I am a certified Water Safety Instructor. I have taught
swimming lessons to all ages from 6 months to senior citizens for over 10 years. I am also a lifeguard and water fitness instructor.
16 Subjects: including prealgebra, grammar, geometry, algebra 1
| {"url":"http://www.purplemath.com/Golden_Isles_FL_Math_tutors.php","timestamp":"2014-04-17T01:39:12Z","content_type":null,"content_length":"24129","record_id":"<urn:uuid:126e3219-0cad-45cb-b0d1-43816913bcf7>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Degree reduction argument in Guth-Katz's proof of the Erdos distinct distance problem in the plane
In the middle of page 9 of http://arxiv.org/PS_cache/arxiv/pdf/1011/1011.4105v1.pdf,
they say: "Now we select a random subset... choosing lines independently with probability $\frac{100}{Q}$. With positive probability..."
I cannot see why there is positive probability.
Could anyone explain a bit about what is going on there? I feel they are applying the law of large numbers, but I cannot see it clearly: for example, what is the probability space, what are the random variables, and how is the law used?
co.combinatorics erdos
1 Answer
Not a complete answer but a quick explanation of how I read page 9 in that paper.
1. The underlying probabilistic model is that for every line $l\in\mathfrak L'$ you throw a coin that shows head with probability $100/Q$ and tail with probability $(Q-100)/Q$, and
then $\mathfrak L''$ is the set of all lines whose coin showed head. In other words, you choose a random subset $\mathfrak L''$ of $\mathfrak L'$ and the probability for choosing a
particular set $\mathfrak L_0\subseteq \mathfrak L'$ equals $$\mathbf{P}[\mathfrak L''=\mathfrak L_0]=\left(\frac{100}{Q}\right)^{|\mathfrak L_0|}\left(1-\frac{100}{Q}\right)^{|\
mathfrak L'\setminus\mathfrak L_0|}.$$
2. By linearity of expectation the expected cardinality of $\mathfrak L''$ is $\mathbf{E}(|\mathfrak L''|)=\frac{100\alpha N^2}{Q}$, and this implies $$\mathbf{P}\left[|\mathfrak L''|\
leqslant\frac{200\alpha N^2}{Q}\right]\geqslant\frac12.$$
3. We are done when we can show that the probability of the event "Every line in $\mathfrak L'$ intersects $N/20$ lines in $\mathfrak L''$" has probability at least $1/2+\varepsilon$
for some positive $\varepsilon$, since then the event "$|\mathfrak L''|\leqslant\frac{200\alpha N^2}{Q}$ and every line in $\mathfrak L'$ intersects $N/20$ lines in $\mathfrak L''$"
has probability at least $\varepsilon$.
As I understand it, the intuition is that a typical line in $\mathfrak L'$ intersects a quadratic number of lines in $\mathfrak L'$, so it is highly unlikely that less than $N/20$ of these are chosen for $\mathfrak L''$.
At the moment I don't see how to make that rigorous. I haven't read the rest of the paper, yet, but my impression is that it might be convenient (or even necessary) to replace $\
mathfrak L'$ by something slightly smaller, throwing away some rubbish:
• I don't see why $\mathfrak L'$, as it is defined, cannot contain a few exceptional lines that have almost all their intersections with lines outside $\mathfrak L'$, so they
intersect less than $N/20$ lines in $\mathfrak L'$. If this is the case, these lines have no chance to intersect $N/20$ lines in $\mathfrak L''$.
• It looks easier to show that with high probability almost every line in $\mathfrak L'$ intersects $\mathfrak L''$ at least $N/20$ times (instead of "every line in $\mathfrak L'$" ).
Maybe it is sufficient to replace $\mathfrak L'$ by this big subset.
Let's hope for a better answer by someone who understands what's going on.
Thanks Kali, I think you are right; then one can take N large to ensure point 3 by the law of large numbers. I agree that one should work on a smaller set. – user13289 Mar 17 '11 at 15:11
Hmm. I'm not really convinced by my own answer. I have to think about it some more. – Thomas Kalinowski Mar 17 '11 at 21:45
In fact one can do the estimate directly to see that the probability is positive. – user13289 Mar 18 '11 at 1:39
One can do this as follows. Claim: suppose both $L_{1}$ and $L_{2}$ have $O(N^{2})$ lines and each line in $L_{1}$ intersects at least almost $QN$ lines in $L_{2}$. Now choose each line in $L_{2}$ independently with probability $\frac{1}{Q}$ and denote the resulting subset of $L_{2}$ by $L_{3}$. Then one has the analogue of statement 2: with probability bigger than 1/2, $L_{3}$ contains $\frac{O(N^{2})}{Q}$ lines. For statement 3, the probability that a given line in $L_{1}$ intersects more than $\frac{N}{2}$ lines in $L_{3}$ is bigger than $1-e^{-N/100}$. – user13289 Mar 18 '11 at 3:21
This can be done by using the estimates in en.wikipedia.org/wiki/Binomial_distribution. Then the probability that every line in $L_{2}$ intersects at least $\frac{N}{2}$ lines in $L_{3}$ is bigger than $(1-e^{-N/100})^{N^{2}}$, which goes to 1 when $N$ is very large. – user13289 Mar 18 '11 at 3:25
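A quick numerical sanity check of the bound quoted in the last comment (added here, not part of the thread). It assumes a fixed line meets roughly Q*N lines of the sampled family, each kept independently with probability 1/Q, so the number kept is Binomial(Q*N, 1/Q) with mean N:

from math import comb, exp

def binom_tail_below(m, p, k):
    """P[Bin(m, p) < k], computed directly."""
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k))

N, Q = 200, 20
m = Q * N                      # ~ number of lines met by a fixed line
bad = binom_tail_below(m, 1 / Q, N // 2)
print("P[fewer than N/2 kept] =", bad, "  e^(-N/100) =", exp(-N / 100))
print("union bound over N^2 lines:", N * N * bad)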
| {"url":"http://mathoverflow.net/questions/58718/degree-reduction-argument-in-guth-katzsproof-of-erdos-distinct-distance-problem?answertab=votes","timestamp":"2014-04-18T20:59:26Z","content_type":null,"content_length":"59456","record_id":"<urn:uuid:df823725-155d-40bc-9a81-bc1b49f02170>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Analytical Evaluation of the Performance of Proportional Fair Scheduling in OFDMA-Based Wireless Systems
Journal of Electrical and Computer Engineering
Volume 2012 (2012), Article ID 680318, 12 pages
Research Article
Analytical Evaluation of the Performance of Proportional Fair Scheduling in OFDMA-Based Wireless Systems
Faculty of Engineering and Applied Science, Memorial University, St. John’s, NL, Canada A1B 3X5
Received 2 March 2012; Accepted 7 May 2012
Academic Editor: Yi Su
Copyright © 2012 Mohamed H. Ahmed et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
This paper provides an analytical evaluation of the performance of proportional fair (PF) scheduling in Orthogonal Frequency-Division Multiple Access (OFDMA) wireless systems. OFDMA represents a
promising multiple access scheme for transmission over wireless channels, as it combines the orthogonal frequency division multiplexing (OFDM) modulation and subcarrier allocation. On the other hand,
the PF scheduling is an efficient resource allocation scheme with good fairness characteristics. Consequently, OFDMA with PF scheduling represents an attractive solution to deliver high data rate
services to multiple users simultaneously with a high degree of fairness. We investigate a two-dimensional (time slot and frequency subcarrier) PF scheduling algorithm for OFDMA systems and evaluate
its performance analytically and by simulations. We derive approximate closed-form expressions for the average throughput, throughput fairness index, and packet delay. Computer simulations are used
for verification. The analytical results agree well with the results from simulations, which show the good accuracy of the analytical expressions.
1. Introduction
OFDMA is a promising solution for the high data-rate coverage required in multiuser broadband wireless communications. Current and evolving standards for broadband wireless systems, such as IEEE
802.16e, have proposed OFDMA as the multiple access technique for the air interface. OFDMA is a multiple access technique which is based on OFDM. In OFDM systems, a single user gets access to the
whole available spectrum at any time instant, and, as a result, multiple users share resources using time scheduling. On the other hand, in OFDMA systems users share the available spectrum using
subcarrier allocation. Hence, OFDMA requires scheduling in both time and frequency domains (time slots and frequency subcarriers). This additional degree of freedom makes the scheduling problem in
OFDMA systems more challenging, but also more effective.
Scheduling plays a key role in the OFDMA systems resource management [1]. Efficient scheduling implies effective utilization of the available radio resources, high throughput, low packet delay, and
fair treatment of all users in the system. Various scheduling techniques have been proposed for OFDMA systems [1–4]. For example, a maximum carrier-to-interference ratio-based scheduling algorithm
is adopted in [1] to provide a more fair treatment among users, while in [2] the resource allocation problem is studied with and without service request constraints. Two-dimensional matrix-based
scheduling algorithms are proposed in [2] using the raster scanning approach to achieve high system throughput with relatively lower complexity.
The PF algorithm is an appealing scheduling scheme to meet the quality of service requirements in OFDMA systems [5–8], as it can improve the fairness among users without sacrificing the efficiency in
terms of average (or aggregate) throughput. With this algorithm, the level of satisfaction and starvation of all users in the system is sensed over time, and resources are assigned to users based on
that. Moreover, the PF algorithm is flexible and can scale between fairness and efficiency. In [8], we propose an iterative two-dimensional (time symbols and frequency subbands) PF scheduling for
OFDMA systems. However, the performance of PF scheduling for OFDMA systems is not determined analytically and it is usually determined by computer simulations.
An analytical method, which is based on the Gaussian approximation of the instantaneous data rate in a Rayleigh fading environment, is used to analyze the performance of PF scheduling in [9].
However, this method is developed for single-carrier systems and limited to the case of users with full buffers. We adopt the methodology in [9] to develop an analytical solution for the PF
scheduling in OFDMA systems for bursty traffic conditions and full buffers scenario, as well. In this paper, we provide approximate closed-form expressions for the average throughput and throughput
fairness index of our PF scheduling scheme proposed for OFDMA systems in [8]. In addition, simulation results are provided in the paper to check the accuracy of the analytical method.
The rest of this paper is organized as follows: Section 2 describes the OFDMA system model. The PF scheduling algorithm is provided in Section 3. The closed-form analytical derivations of the
throughput, fairness index, and delay are presented in Section 4. Then, Section 5 provides numerical results from the analytical solution, as well as simulation outcomes. Finally, conclusions are
provided in Section 6.
2. System Model
As shown in Figure 1, the OFDMA system resources have two dimensions: frequency and time. In the frequency domain, the signal bandwidth is divided into a number of subbands, each of which contains highly correlated orthogonal subcarriers. A total of S subcarriers are grouped into M subbands, each with S/M subcarriers. In the time domain, data is organized in frames, which are further divided into time symbols. The minimum allocable resource unit in the system is defined by the intersection between a subband in the frequency domain and a time symbol in the time domain.
We consider a single-cell scenario, with N users with bursty traffic demands. The signals are affected by path loss, lognormal shadowing, and Rayleigh fading. The smallest data entity which the base
station can handle is a fixed-size data packet. We use the Poisson traffic model. The cell shape is circular and the base station is located at the center. Users are uniformly distributed over the
cell area. We consider the downlink only. However, the analysis can be easily extended to the uplink case. Moreover, adaptive coding and modulation (ACM) is used to enhance the resource utilization.
The suitable modulation level and coding rate are decided depending on the channel state information (CSI) for each subband. Table 1 shows the ACM schemes used in this paper, along with the
corresponding signal-to-noise ratios (SNRs).
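Table 1 itself is not reproduced in this excerpt, but the selection rule just described amounts to a threshold lookup driven by the worst subcarrier in the subband. The thresholds and rates below are placeholders, not the paper's values:

ACM_TABLE = [  # (minimum SNR in dB, information bits per symbol) -- hypothetical values
    (6.0, 0.5),   # e.g. QPSK with a low coding rate
    (9.0, 1.0),
    (12.0, 2.0),
    (18.0, 3.0),
    (24.0, 4.0),
]

def acm_rate(subband_snrs_db):
    worst = min(subband_snrs_db)          # worst-case subcarrier governs the whole subband
    rate = 0.0                            # below the lowest threshold: no transmission
    for threshold, bits in ACM_TABLE:
        if worst >= threshold:
            rate = bits
    return rate

print(acm_rate([13.2, 12.7, 14.1, 12.9]))  # -> 2.0 with the placeholder table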
The frequency subcarriers are correlated in the frequency domain. The fading affecting the frequency subcarriers has cross correlation because of the coherence bandwidth of the wireless channel [10].
A frequency selective Rayleigh fading channel is modeled based on [10–12]. The frequency selective Rayleigh subcarriers are generated with correlation between them in the frequency domain, where the
complex valued correlation is formulated as a function of frequency separation between the subcarriers. In order to minimize the bit error rate and improve the OFDMA system reliability, we consider
the worst case subcarrier fading in each subband for the SNR and link budget calculations. Although the worst case subcarrier fading is considered in a subband while selecting an ACM scheme, the
overall SNR calculation does not significantly change because the fading difference between subcarriers within a subband is insignificant because the fading coefficients are highly correlated.
3. PF Scheduling Algorithm for OFDMA Systems
Closed-form expressions are subsequently derived for the throughput and fairness index for the PF scheduling algorithm that we proposed in [8]. The algorithm is briefly explained, followed by its
analytical performance analysis.
According to the PF scheduling algorithm that we develop in [8] for OFDMA systems, the user with the index given by (1), $i^{*} = \arg\max_{i} r_{i,j}(n)/R_{i}(n)$, is ranked first among the N users on subband j. Here, $r_{i,j}(n)$ is the instantaneous data rate of user i on subband j at time frame n, and $R_{i}(n)$ is the time-average data rate of user i at time frame n. The time-average data rate is updated at the end of a time frame for each user i on all the available subbands as in (2), $R_{i}(n+1) = (1 - 1/t_{c}) R_{i}(n) + (1/t_{c}) \sum_{j \in \Omega_{i}(n)} r_{i,j}(n)$, where $\Omega_{i}(n)$ represents the set of subbands assigned to user i during time frame n, and $t_{c}$ is the averaging window expressed in time frames, which controls the amount of historical information taken into account when sharing the resources among multiple users and can be chosen to achieve a desirable throughput-fairness tradeoff. User i is scheduled on time frame n if $\Omega_{i}(n)$ is nonempty and is not scheduled if $\Omega_{i}(n)$ is empty.
Since the packet arrival is assumed to be bursty, the best user (chosen by (1)) might have an empty buffer. In this case, the subband assigned to the best user should be given to the second best user if this user has a nonempty buffer. If not, the subband is assigned to the third best user, and so on, where the ranking of users is based on the same criterion used in (1). As such, we modify (2) into equation (3), where a selector indicator equals 1 if user i is ranked kth on subband j and frame n and equals 0 otherwise, and α is the probability that the buffer of user i is not empty. We assume that α is the same for all users. The terms in the right-hand side of (3) represent the potential achievable throughput for a user. The first term reflects the average throughput achieved by the round-robin (RR) algorithm, while the remaining N terms represent the additional average throughput provided by our algorithm when compared with RR. The first term (out of the remaining N terms) represents the additional average throughput when user i is ranked first and assigned subband j. The second term (out of the remaining terms) reflects the additional average throughput when user i is ranked second and assigned subband j because the user ranked first has an empty buffer, and so on.
The PF scheduling algorithm consists of two steps [8]. In the first step, all users in the system are ranked. A resource matrix that contains the ranking of all users on all subbands is generated
based on (1). The instantaneous data rate, $r_{i,j}(n)$, represents the efficiency factor, whereas the historical average rate $R_{i}(n)$, combined with the averaging window $t_{c}$, represents the fairness factor. As such, the ranking of the users
reflects both the channel gain and shortage of service. In the second step, scheduling is performed based on the ranking and demands of the users on one hand and the resource accessibility on the
other hand. The algorithm iteratively serves the user with the highest rank among all users on all subbands.
A user will be excluded from the waiting users’ list if all waiting packets are served. This algorithm allows subband sharing in time domain, where different time symbols in the subband can be
utilized by different users. A subband will be eliminated from the resource matrix if the remaining resources cannot support at least one packet for any requesting user within this time frame. The
algorithm tracks the satisfaction levels of all users at the end of each time frame by updating the historical data rate, $R_{i}(n)$, using (2).
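As a rough illustration of the two-step procedure, the sketch below implements a simplified greedy pass per subband with the generic metric $r_{i,j}/R_{i}$ and an exponential window $t_{c}$; it is not the paper's exact iterative matrix algorithm and ignores packet buffers:

import random

def pf_schedule(rates, R, t_c=100):
    """rates[i][j]: instantaneous rate of user i on subband j this frame.
    R[i]: historical average rate of user i. Returns subband -> user assignment."""
    n_users, n_subbands = len(rates), len(rates[0])
    assignment = {}
    served = [0.0] * n_users
    for j in range(n_subbands):
        # Step 1: rank users on subband j by the PF metric r_ij / R_i
        best = max(range(n_users), key=lambda i: rates[i][j] / R[i])
        assignment[j] = best
        served[best] += rates[best][j]
    # Step 2 (end of frame): update the historical averages
    for i in range(n_users):
        R[i] = (1 - 1 / t_c) * R[i] + (1 / t_c) * served[i]
    return assignment

R = [1.0] * 4
rates = [[random.uniform(0.1, 2.0) for _ in range(8)] for _ in range(4)]
print(pf_schedule(rates, R))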
4. Performance Analysis
4.1. Average Throughput
It is shown that assuming a linear relationship between the instantaneous data rate, $r_{i,j}(n)$, and the SNR is unrealistic in a Rayleigh fading environment [9, 13]. Actually, it is demonstrated that it is more realistic to assume that $r_{i,j}(n)$ follows a Gaussian distribution with mean and variance given, respectively, as in (4) [9], where $E(\cdot)$ denotes the expectation operator. According to the PF algorithm presented in (1) and (2), one can express the average achievable throughput of user $i$ on all the available subbands in the time frame n as follows:
We can rewrite (5) as follows:
where is the probability that user is ranked kth on subband and time frame . Under the assumption of stationary throughput [9], , and independent subbands, one can further express (6) as follows:
By applying the Bayes’ theorem, (7) can be rewritten as follows: where denotes the probability density function (pdf) of . By assuming independent and based on the PF selection criterion presented in
(1), we can determine the conditional ranking probabilities as follows:
where is the cumulative distribution function (cdf) of , while and are the indexes of the users ranked the first and the second (on subband ), respectively. By using (9) and the Gaussian pdf of , and
under the assumptions that and is an ergodic process (such that its moving average equals the statistical average), now (9) can be re-written as follows:
Hence, (8) can be expressed as follows: By assuming a Gaussian distribution of the instantaneous traffic rate, (11) becomes
Now, assume , so, can be re-written as [8] where (·) represents the standard normal cdf with zero-mean and unit-variance. Furthermore, we assume a proportional relationship between the mean and
standard deviation of all users in the system [8]; hence, the previous expression can be approximated as
After some mathematical manipulations, one can further express (12) as
It is straightforward to show that
Then, one can easily find that
and, finally, through the mathematical induction, we can write
Thus, (15) can be expressed as follows:
The probability of the nonempty buffer for any user, , in terms of average throughput and traffic rate, is given as follows: where is the average arrival traffic rate per user. By substituting (20)
into (19), becomes
As represents the throughput of user i in the system, the average throughput of the entire system is
4.2. Fairness Index
Jain’s fairness index is a well-known quantitative metric that is widely used in wireless communications to measure fairness, and it is defined as follows [14]: $FI = \left(\sum_{i=1}^{N} x_i\right)^2 / \left(N \sum_{i=1}^{N} x_i^2\right)$, where $x_i$ is the amount of resources accessed by user $i$ among $N$ competing users. Based on the result for the average throughput for user $i$, as given in (21), it is straightforward to express the Jain’s fairness index of the users’ throughput as the same ratio evaluated with $x_i$ equal to the average throughput of user $i$ from (21). For nonbursty traffic (full-buffer scenario), the analysis is the same as for bursty traffic given above, except that α (the probability of having non-empty buffer) is equal to 1.
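Jain's index is straightforward to evaluate from per-user throughputs; a minimal sketch: it equals 1 when all users get the same throughput and approaches 1/N when a single user takes everything.

def jain_index(x):
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

print(jain_index([5.0, 5.0, 5.0, 5.0]))   # 1.0
print(jain_index([20.0, 0.1, 0.1, 0.1]))  # close to 1/4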
4.3. Average Packet Delay
In order to calculate the packet delay, we model the system by using the M/G/1 queuing model. Hence, the average packet delay is given by where is the throughput variance. In order to determine , we
calculate using (3) as follows:
By assuming stationary throughput per user, we can use . Therefore, (26) can be re-written as follows:
In order to determine , we need to find , which can be expressed as follows: and then can be re-written as
The first term in the right-hand side of (29) can be further written as follows:
Using (9) and the assumption of stationary first-order ergodic [9], (30) becomes which can be simplified to
Then, by simply expressing , (32) can be re-written as follows:
Thus, can be expressed as follows:
Next, we determine the second term in the right-hand side of (29), which can be re-written as follows:
From (29), (34) and (35), can be expressed as follows:
Then, we simplify the second term in the right-hand side of (27) as follows:
Substituting (36) and (37) in (27), it can be easily shown that the throughput variance is expressed as:
By substituting (21) and (38) in (25), we can calculate the average packet delay.
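The paper's equation (25) is not reproduced above; purely as a generic illustration of the M/G/1 step, the standard Pollaczek-Khinchine mean delay can be evaluated as follows (the arrival rate and service-time moments below are made up):

def mg1_delay(lam, mean_s, var_s):
    rho = lam * mean_s                       # utilization; must be < 1 for stability
    es2 = var_s + mean_s ** 2                # second moment of the service time
    wait = lam * es2 / (2 * (1 - rho))       # mean time spent waiting in the queue
    return wait + mean_s                     # mean total packet delay

print(mg1_delay(lam=300.0, mean_s=0.002, var_s=1e-6))  # illustrative numbers only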
5. Numerical and Simulation Results
The accuracy of the analytical closed-form expressions for the average throughput, fairness index, and packet delay (derived in Section 4) is examined by comparing the analytical results with
simulation results. Computer simulations of one cell with N users are conducted independently of the analytical expressions derived in the previous section to estimate the average throughput,
fairness index, and packet delay. We set the signal bandwidth to 20 MHz, the carrier frequency to 2 GHz, the noise power to −130 dBW, and the averaging window $t_c$ to 5000 frames (except in Figures 2, 3, and 10). In addition,
we consider a path loss exponent of 4, the standard deviation of the lognormal shadowing equal to 10dB, the cell radius set to 1500m, the number of users, N, in the cell equal to 32, the frame
duration of 2ms, and the packet size of 180 bits. The number of subbands, M, is 32 and the number of subcarriers, S, is 256. We use Poisson traffic with an arrival rate of λ, which is kept as a
variable to control the traffic load given by λN.
We first analyze the effect of the averaging window ($t_c$) and the impact of using OFDMA instead of OFDM. In OFDM, all subcarriers are given to the user selected by the PF scheduler. As shown in Figure 2 (for finite $t_c$), the larger the $t_c$, the higher the throughput. When $t_c$ increases, PF needs more time to compensate disadvantaged users (with low SNR), which leads to a higher throughput for the advantaged users (with good SNR). As a result, the average throughput increases. On the other hand, when $t_c \to \infty$, PF loses its fairness and becomes an opportunistic scheduling algorithm which favors advantaged users, and it is known that opportunistic scheduling algorithms achieve the highest average throughput (but at the expense of the fairness). Also, it is evident from Figure 2 that PF with OFDMA has higher throughput than
that of PF with OFDM, as the former efficiently utilizes the resources in the frequency domain, and can handle efficiently the bursty traffic because of the subband sharing.
The Jain’s fairness index of PF with OFDMA and PF with OFDM is depicted in Figure 3. Both algorithms show approximately the same values of Jain’s fairness index with a slight improvement for PF with
OFDMA. Also, we can notice that as $t_c$ increases, the fairness index decreases, as the algorithm becomes less fair (as discussed above). Furthermore, the lowest Jain’s fairness index is associated with $t_c \to \infty$ because this is the case when PF becomes completely opportunistic, as discussed above.
In Figures 4 and 5, the throughput and the Jain’s fairness index of the system are, respectively, shown versus the total traffic load in the cell. Results obtained from both analytical expressions in
(20) and (21) and simulations are presented. It is noteworthy the good agreement between these results, which validate our analytical solution. From Figure 4, one can observe that (as expected) the
average throughput increases sharply at low traffic load, and then it saturates at high traffic load. On the other hand, as shown in Figure 5, the fairness index decreases with the traffic load
increase, and it saturates at high traffic load. This is because as the traffic load increases, fewer resources become available and it becomes more difficult to satisfy the demand of all users.
The performance of the PF scheduling algorithm that we propose in [8] and the agreement between analytical and simulation results are also investigated for different numbers of users, N, where the traffic load expected from each user is assumed to be 10 Mbps and the averaging window for the simulation is set to 5000. Figures 6 and 7 show the average throughput and Jain's fairness index versus the number of users, respectively. Again, there is a good match between analytical and simulation results. From Figure 6, one can see the increase in the average throughput when the number of users increases, for both the analytical and simulation bars. This can be easily explained as follows: as the number of users increases, the traffic load in the system increases. Also, as the number of users increases, the chance of scheduling users on subbands with preferable channel gains increases, so the scheduling algorithm exploits the multiuser diversity. From Figure 7, we notice a slight decrease of the fairness index when the number of users increases. This decrease is expected, as the competition for resources becomes harder when the number of users grows.
Figure 8 shows the throughput performance for different numbers of subbands (M). The available frequency bandwidth is divided into different numbers of subbands to study the behavior of the system. It is evident that the analytical results and the simulation results agree very well. We also notice that the throughput reaches its maximum when the number of subbands equals 64. When the number of subbands is small, the number of subcarriers per subband is larger. Hence, applying adaptive coding and modulation to all the subcarriers of a subband, based on the subcarriers with the worst channel conditions, wastes the resources of many subcarriers with favorable channel conditions. On the other hand, when the number of subbands is large, few subcarriers are grouped to create a subband, which degrades the throughput because of the increasing amount of unused fractions of subbands at the end of time frames. In other words, when the number of subbands increases, the number of subbands that are not fully utilized at the end of time frames increases, which degrades the throughput performance.
Figure 9 shows the Jain's fairness index for different numbers of subbands. We notice that the number of subbands does not affect the fairness of the system, as all users suffer from the same degradation of subband utilization. Thus, the chance of accessing the resources is affected equally for all users in the system, which keeps the fairness performance the same regardless of the number of subbands.
Figure 10 shows the packet delay versus traffic load for the proposed scheduling algorithm, for averaging windows of 5000, 3000, and 1000. It is evident that as the traffic load increases, the competition between users becomes harder, which causes more packets to wait longer in the users' queues. Also, we notice that when the averaging window increases, the packet delay increases. This can be explained as follows. When the averaging window increases, the scheduler tries to maximize the system throughput by treating users greedily, allocating most of the resources to the few users who have favorable channel conditions. That behavior blocks more packets of the requesting users, which increases the average packet delay in the system.
Figure 11 shows the packet delay versus traffic load for the proposed scheduling algorithm (PF with OFDMA), analytically and by simulation, and the packet delay for PF with OFDM, where the observation window equals 5000. The analytical curve agrees very well with the simulation curve. Also, we notice a slight improvement of the proposed scheduling algorithm over PF with OFDM. At a high traffic load (650 Mbps), the mean packet delay of our proposed scheduling algorithm equals 3.75 seconds, while the mean packet delay of PF with OFDM equals 3.45 seconds.
It is noteworthy that there is a small difference between the analytical and simulation results. This difference can be explained by the approximations that were introduced while deriving the analytical model. Such approximations simplify the model at the cost of minor deviations in the results.
6. Conclusion
In this work, PF scheduling is investigated for OFDMA wireless systems. The main contribution of this work is the analytical evaluation of the performance of the PF scheduling algorithm in OFDMA systems. We derive approximate closed-form expressions for the average throughput, Jain's fairness index, and packet delay as the performance metrics. The algorithm performance is investigated for a broad range of traffic loads and numbers of subbands. We compare the performance of the proposed algorithm (PF with OFDMA) with that of PF with OFDM. In addition, we verify the correctness and accuracy of the analytical solution through simulations. Analytical and simulation results are in good agreement, which validates our analytical performance analysis. In future work, we plan to extend the analysis to the case of different probabilities of a non-empty buffer for different users. We will also consider other fading distributions, such as the Rician distribution.
The authors are grateful to the anonymous reviewers and the editor for their constructive comments that improved the quality of the paper. This work has been supported by the NSERC Discovery Grant
1. L. C. Wang and W. J. Lin, “Throughput and fairness enhancement for OFDMA broadband wireless access systems using the maximum C/I scheduling,” in Proceedings of the IEEE 60th Vehicular Technology Conference (VTC '04), pp. 4696–4700, September 2004.
2. Y. Ben-Shimol, I. Kitroser, and Y. Dinitz, “Two-dimensional mapping for wireless OFDMA systems,” IEEE Transactions on Broadcasting, vol. 52, no. 3, pp. 388–396, 2006.
3. C. Y. Wong, R. S. Cheng, K. B. Letaief, and R. D. Murch, “Multiuser OFDM with adaptive subcarrier, bit, and power allocation,” IEEE Journal on Selected Areas in Communications, vol. 17, no. 10, pp. 1747–1758, 1999.
4. I. C. Wong and B. L. Evans, “Optimal resource allocation in the OFDMA downlink with imperfect channel knowledge,” IEEE Transactions on Communications, vol. 57, no. 1, pp. 232–241, 2009.
5. H. J. Zhu and R. H. M. Hafez, “Scheduling schemes for multimedia service in wireless OFDM systems,” IEEE Wireless Communications, vol. 14, no. 5, pp. 99–105, 2007.
6. N. Ruangchaijatupon and Y. Ji, “Simple proportional fairness scheduling for OFDMA frame-based wireless systems,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '08), pp. 1593–1597, USA, April 2008.
7. K. W. Choi, W. S. Jeon, and D. G. Jeong, “Resource allocation in OFDMA wireless communications systems supporting multimedia services,” IEEE/ACM Transactions on Networking, vol. 17, no. 3, pp. 926–935, 2009.
8. R. Almatarneh, M. Ahmed, and O. Dobre, “Frequency-time scheduling algorithm for OFDMA systems,” in Proceedings of the Canadian Conference on Electrical and Computer Engineering (CCECE '09), pp. 766–771, May 2009.
9. E. Liu and K. K. Leung, “Proportional fair scheduling: analytical insight under Rayleigh fading environment,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '08), pp. 1883–1888, April 2008.
10. B. Sklar, “Rayleigh fading channels in mobile digital communication systems Part I: characterization,” IEEE Communications Magazine, vol. 35, no. 7, pp. 90–100, 1997.
11. B. Sklar, “Rayleigh fading channels in mobile digital communication systems Part II: mitigation,” IEEE Communications Magazine, vol. 35, no. 9, pp. 148–155, 1997.
12. L. C. Tran, T. A. Wysocki, A. Mertins, and J. Seberry, “A generalized algorithm for the generation of correlated Rayleigh fading envelopes in wireless channels,” EURASIP Journal on Wireless Communications and Networking, vol. 2005, no. 5, pp. 801–815, 2005.
13. P. J. Smith and M. Shafi, “On a Gaussian approximation to the capacity of wireless MIMO systems,” in Proceedings of the International Conference on Communications (ICC '02), pp. 406–410, May 2002.
14. R. Jain, D. Chiu, and W. Hawe, “A quantitative measure of fairness and discrimination for resource allocation in shared computer systems,” DEC Report DEC-TR-301, Digital Equipment Corporation, Littleton, Mass, USA, 1984. | {"url":"http://www.hindawi.com/journals/jece/2012/680318/","timestamp":"2014-04-18T18:23:50Z","content_type":null,"content_length":"511881","record_id":"<urn:uuid:d7fbac42-eba2-41a5-9600-074b39f6d9dd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Enumeration of benzenoid systems and other polyhexes, Topics
Cited by 5 (0 self)
Fibonacenes (zig-zag unbranched catacondensed benzenoid hydrocarbons) are a class of polycyclic conjugated systems whose molecular graphs possess remarkable properties, often related with the Fibonacci numbers. This article is a review of the chemical graph theory of fibonacenes, with emphasis on their Kekulé–structure–related and Clar–structure–related properties.
Cited by 1 (0 self)
In this paper, a fast and complete method to constructively enumerate fusenes and benzenoids is given. It is fast enough to construct several million non isomorphic structures per second. The central idea is to represent fusenes as labelled inner duals and generate them in a two step approach using the canonical construction path method and the homomorphism principle. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1449866","timestamp":"2014-04-16T07:37:04Z","content_type":null,"content_length":"18293","record_id":"<urn:uuid:4a3db478-132c-487c-a041-08b150ba9d8f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
IV diode characteristic, find constant A, urgent
1. The problem statement, all variables and given/known data
This is a lab task. I have found the IV characteristic of the diode; now they ask me to suggest a procedure to find the constant A in the equation
I = A T^3 exp(-E / kT),
where T is the temperature and I is the saturation current.
2. Relevant equations
I know how to find the ideality factor from the IV graph, but can I find 'A' from the IV graph?
If not, then what other test equipment and procedures do I need to find 'A'?
Is 'A' related to the intrinsic carrier concentration in any way?
3. The attempt at a solution
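One possible procedure (a sketch using made-up numbers): taking logs of I = A T^3 exp(-E/kT) gives ln(I/T^3) = ln A - E/(kT), so measuring the saturation current at several temperatures and fitting a straight line of ln(I/T^3) against 1/T would give ln A as the intercept and -E/k as the slope. For example:

    import numpy as np

    # hypothetical (temperature, saturation current) measurements
    T = np.array([280.0, 300.0, 320.0, 340.0])       # K
    I = np.array([2.1e-9, 1.5e-8, 8.9e-8, 4.4e-7])   # A  (made-up values)

    slope, intercept = np.polyfit(1.0 / T, np.log(I / T**3), 1)
    A = np.exp(intercept)          # the prefactor A
    E = -slope * 1.381e-23         # energy in joules, via Boltzmann's constant
    print(A, E)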
It's very urgent, so I'll really appreciate a prompt reply.
Thanks :)
{"url":"http://www.physicsforums.com/showthread.php?t=223880","timestamp":"2014-04-17T04:04:58Z","content_type":null,"content_length":"20611","record_id":"<urn:uuid:50b1c155-b05f-4ada-af9b-15798bc84c70>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
show that set is measurable
November 10th 2013, 04:48 PM #1
Junior Member
May 2010
show that set is measurable
Hi all,
Given a set E; for each open, bounded interval (a,b):
b-a = m*((a,b)∩E) + m*((a,b)~E)
implies E is (Lebesgue) measurable.
(we're working with the real line here, so E is a subset of R)
I'm having trouble with this problem. Hint given is to show that A = { E: b-a = m*((a,b)∩E) + m*((a,b)~E) } is a sigma algebra.
I understand the hint logic - if A is a sigma algebra then the sets in A are measurable sets so E must be measurable.
I know that all open, bounded intervals are measurable.
I know the three requirements for a collection of sets to be a sigma algebra.
I don't know how to connect the dots though.
Where do I start? Thanks
Last edited by director; November 10th 2013 at 05:17 PM.
Re: show that set is measurable
Here is the definition of measurability of a set E given in my text:
A set E is said to be measurable provided for any set A, m*(A) = m*(A∩E) + m*(A∩(complement of E))
It looks like (A∩(complement of E)) = (A~E) or (A\E) (just different notation)
So, the equation in the original problem looks very close to what I have in the definition of measurable set. Except I can't say it holds for any set A, only if A=(a,b).
If I were to restrict the measure space (correct term?) to one that contains just the open sets, then E would be measurable?
The complement of a measurable open set is a measurable closed set... What if I add these in?
Last edited by director; November 11th 2013 at 04:38 PM.
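One possible way to connect the dots (a sketch only, and it actually bypasses the sigma-algebra hint by using the definition of $m^*$ as an infimum over open interval covers): by countable subadditivity, $m^*(A) \le m^*(A\cap E) + m^*(A\setminus E)$ always holds, so only the reverse inequality needs checking. Given any test set $A$ with $m^*(A)<\infty$ and any $\epsilon>0$, choose open intervals $(a_k,b_k)$ covering $A$ with $\sum_k (b_k-a_k) \le m^*(A)+\epsilon$. Then
$m^*(A\cap E) + m^*(A\setminus E) \le \sum_k \left[ m^*((a_k,b_k)\cap E) + m^*((a_k,b_k)\setminus E) \right] = \sum_k (b_k-a_k) \le m^*(A)+\epsilon,$
using subadditivity in the first step and the interval hypothesis in the second. Letting $\epsilon \to 0$ gives the Caratheodory condition, so $E$ is measurable.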
November 11th 2013, 04:31 PM #2
Junior Member
May 2010 | {"url":"http://mathhelpforum.com/differential-geometry/224065-show-set-measurable.html","timestamp":"2014-04-16T05:12:55Z","content_type":null,"content_length":"33264","record_id":"<urn:uuid:a5094aa4-8081-4c95-8081-c7f610dc2f59>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
vectors help.
April 25th 2010, 03:26 AM #1
Super Member
Sep 2008
vectors help.
Given that;
$XY = \begin{pmatrix} 2\\ 4 \\ 6 \end{pmatrix}$
$ZX = \begin{pmatrix} -10 \\ -8 \\ 6 \end{pmatrix}$
find YZ.
I am not sure how to work this out, and my textbook does not show me an example, can someone please show me?
Make a drawing that shows the points X, Y, and Z. Then try to traverse that triangle from Y to Z along edges that you know how to express in terms of the given vectors:
$\vec{YZ} = \vec{YX}+\vec{XZ}=-\vec{XY}-\vec{ZX}=-\begin{pmatrix}2\\4\\6\end{pmatrix}-\begin{pmatrix}-10\\-8\\6\end{pmatrix}=\ldots$
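Carrying out the arithmetic gives $\vec{YZ} = \begin{pmatrix} 8 \\ 4 \\ -12 \end{pmatrix}$.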
April 25th 2010, 07:03 AM #2 | {"url":"http://mathhelpforum.com/pre-calculus/141227-vectors-help.html","timestamp":"2014-04-19T00:37:39Z","content_type":null,"content_length":"34135","record_id":"<urn:uuid:a0227b03-41b1-4b6e-8425-244784d6d4a9>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
weakening rule
The weakening rule
In logic, the weakening rule states that premises may be added to the hypotheses of a valid deduction while remaining valid. Along with the contraction rule and the exchange rule, it is one of the
most commonly adopted structural rules.
The weakening rule is not used in all logical frameworks; for instance, in linear logic it is discarded.
Exactly how this looks depends on the logic used.
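For instance, in a two-sided sequent calculus presentation the rule is typically displayed as the pair of inferences
$$\frac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta} \qquad \frac{\Gamma \vdash \Delta}{\Gamma \vdash \Delta, A}$$
(left and right weakening, respectively); in a natural deduction setting it appears instead as the admissibility of adding unused hypotheses.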
Revised on January 20, 2014 01:45:29 by
Urs Schreiber | {"url":"http://ncatlab.org/nlab/show/weakening+rule","timestamp":"2014-04-17T03:52:05Z","content_type":null,"content_length":"12888","record_id":"<urn:uuid:635ee29e-2106-4dbc-b7b6-dd2008de8c52>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 110
, 1989
Cited by 105 (2 self)
Mountain View, California 9403. In standard fractal terrain models based on fractional Brownian motion the statistical character of the surface is, by design, the same everywhere. A new approach to the synthesis of fractal terrain height fields is presented which, in contrast to previous techniques, features locally independent control of the frequencies composing the surface, and thus local control of fractal dimension and other statistical characteristics. The new technique, termed noise synthesis, is intermediate in difficulty of implementation, between simple stochastic subdivision and Fourier filtering or generalized stochastic subdivision, and does not suffer the drawbacks of creases or periodicity. Varying the local crossover scale of fractal character or the fractal dimension with altitude or other functions yields more realistic first approximations to eroded landscapes. A simple physical erosion model is then suggested which simulates hydraulic and thermal ...
- Psychological Bulletin , 2003
Cited by 79 (0 self)
Do people behave differently when they are lying compared with when they are telling the truth? The combined results of 1,338 estimates of 158 cues to deception are reported. Results show that in
some ways, liars are less forthcoming than truth tellers, and they tell less compelling tales. They also make a more negative impression and are more tense. Their stories include fewer ordinary
imperfections and unusual contents. However, many behaviors showed no discernible links, or only weak links, to deceit. Cues to deception were more pronounced when people were motivated to succeed,
especially when the motivations were identity relevant rather than monetary or material. Cues to deception were also stronger when lies were about transgressions. Do people behave in discernibly
different ways when they are lying compared with when they are telling the truth? Practitioners and laypersons have been interested in this question for centuries (Trovillo, 1939). The scientific
search for behavioral cues to deception is also longstanding and has become especially vigorous in the past few decades. In 1981, Zuckerman, DePaulo, and Rosenthal published the first
- Psychological Review , 2007
Cited by 57 (6 self)
The authors present a computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language. The model uses simple
convolution and superposition mechanisms (cf. B. B. Murdock, 1982) to learn distributed holographic representations for words. The structure of the resulting lexicon can account for empirical data
from classic experiments studying semantic typicality, categorization, priming, and semantic constraint in sentence completions. Furthermore, order information can be retrieved from the holographic
representations, allowing the model to account for limited word transitions without the need for built-in transition rules. The model demonstrates that a broad range of psychological data can be
accounted for directly from the structure of lexical representations learned in this way, without the need for complexity to be built into either the processing mechanisms or the representations. The
holographic representations are an appropriate knowledge representation to be used by higher order models of language comprehension, relieving the complexity required at the higher level.
- Behavioral and Brain Sciences , 2006
Cited by 28 (2 self)
Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can
also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his
recent book on the foundations of language, Jackendoff formulated four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem
of 2, the problem of variables and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural
‘blackboard ’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily
bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate
sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard
architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial
structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception. 2 Content
- Ann. of Math , 1997
Cited by 15 (5 self)
We study the level-spacings distribution for eigenvalues of large N × N matrices from the Classical Compact Groups in the scaling limit when the mean distance between nearest eigenvalues equals 1. Defining by j_N(s) the number of nearest neighbor spacings greater than s > 0 (smaller than s > 0), we prove a functional limit theorem for the process (j_N(s) − E j_N(s))/N^{1/2}, giving weak convergence of this distribution to some Gaussian random process on [0, ∞). The limiting Gaussian random process is universal for all Classical Compact Groups. It is Hölder continuous with any exponent less than 1/2. Numerical results suggest it not to be a standard Brownian bridge. Our methods can also be applied to study the n-level spacings distribution. AMS Subject classification: Probability theory and stochastic processes. 1 Introduction and Formulation of Main Results. The idea that statistical behavior of eigenvalues of large random matrices would give an information about
, 1993
Cited by 14 (0 self)
There are five fundamental concerns in the synthesis of realistic imagery of fractal landscapes: 1) convincing geometric models of terrain; 2) efficient algorithms for rendering those
potentially-large terrain models; 3) atmospheric effects, or aerial perspective, to provide a sense of scale; 4) surface textures as models of natural phenomena such as clouds, water, rock strata,
and so forth, to enhance visual detail in the image beyond what can be modelled geometrically; and 5) a global context in which to situate the scenes. Results in these five areas are presented, and
some aspects of the development of computer graphics as a new process and medium for the fine arts are discussed. Heterogeneous terrain models are introduced, and preliminary experiments in
simulating fluvial erosion are presented to provide fractal drainage network features. For imaging detailed terrain models we describe grid tracing, a time- and memory-efficient algorithm for ray
tracing height fields. To obtain aerial perspective we develop geometric models of aerosol density distributions with efficient integration schemes for determining scattering and extinction, and an
efficient Rayleigh scattering approximation. We also describe physically-based models of the rainbow and mirage. Proceduralism is an underlying theme of this work; this is the practice of abstracting
models of complex form and behaviors into relatively terse algorithms, which are evaluated in a lazy fashion. Procedural textures are developed as models of natural phenomena such as mountains and
clouds, culminating a procedural model of an Earth-like planet which in the future may be explored interactively in a virtual reality setting.
- Physical Environments, DIS2000, 17-19 August 2000, ACM Publ , 2000
Cited by 10 (4 self)
This paper introduces a Dimension Space describing the entities making up richly interactive systems. The Dimension Space is intended to help designers understand both the physical and virtual
entities from which their systems are built, and the tradeoffs involved in both the design of the entities themselves and of the combination of these entities in a physical space. Entities are
described from the point of view of a person carrying out a task at a particular time, in terms of their attention received, role, manifestation, input and output capacity and informational density.
The Dimension Space is applied to two new systems developed at Grenoble, exposing design tradeoffs and design rules for richly interactive systems. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=45459","timestamp":"2014-04-17T01:49:26Z","content_type":null,"content_length":"38012","record_id":"<urn:uuid:56ec8a86-eaff-4ff7-b443-811036d9aa60>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of impulse
In classical mechanics, an impulse is defined as the integral of a force with respect to time:
$\mathbf{I} = \int \mathbf{F}\, dt$
I is impulse (sometimes marked J),
F is the force, and
dt is an infinitesimal amount of time.
A simple derivation using Newton's second law yields:
$\mathbf{I} = \int \frac{d\mathbf{p}}{dt}\, dt$
$\mathbf{I} = \int d\mathbf{p}$
$\mathbf{I} = \Delta \mathbf{p}$
p is momentum
This is often called the impulse-momentum theorem.
As a result, an impulse may also be regarded as the change in momentum of an object to which a force is applied. The impulse may be expressed in a simpler form when both the force and the mass are constant:
$\mathbf{I} = \mathbf{F}\,\Delta t = m\,\Delta \mathbf{v} = \Delta \mathbf{p}$
F is the constant total net force applied,
$\Delta t$ is the time interval over which the force is applied,
m is the constant mass of the object,
Δv is the change in velocity produced by the force in the considered time interval, and
mΔv = Δ(mv) is the change in linear momentum.
However, it is often the case that one or both of these two quantities vary.
In the technical sense, impulse is a physical quantity, not an event or force. However, the term "impulse" is also used to refer to a fast-acting force. This type of impulse is often idealized so
that the change in momentum produced by the force happens with no change in time. This sort of change is a step change, and is not physically possible. However, this is a useful model for certain
purposes, such as computing the effects of ideal collisions, especially in game physics engines.
Impulse has the same units and dimensions as momentum (kg m/s = N·s).
Using basic math, Impulse can be calculated using the equation:
$\mathbf{F}\,t = \Delta p$
$\Delta p$ can be calculated, if initial and final velocities are known, by using "mv(f) - mv(i)" or otherwise known as "mv - mu"
F is the constant total net force applied,
$t$ is the time interval over which the force is applied,
m is the constant mass of the object,
v is the final velocity of the object at the end of the time interval, and
u is the initial velocity of the object when the time interval begins.
Hence: $\mathbf{F}\,t = mv - mu$
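As a quick numerical illustration (a sketch; the force profile and mass below are made up for the example), integrating a time-varying force reproduces the momentum change predicted by the impulse-momentum theorem:

    import numpy as np

    m = 2.0                          # kg
    t = np.linspace(0.0, 1.0, 1001)  # s
    F = 10.0 * np.sin(np.pi * t)     # N, a half-sine force pulse

    impulse = np.trapz(F, t)         # I = integral of F dt
    dv = impulse / m                 # change in velocity implied by I = m * dv
    print(impulse, dv)               # about 6.37 N*s and 3.18 m/s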
See also
• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers. 6th ed., Brooks/Cole. ISBN 0-534-40842-7.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics. 5th ed., W. H. Freeman. ISBN 0-7167-0809-4.
External links and references | {"url":"http://www.reference.com/browse/impulse","timestamp":"2014-04-20T11:56:15Z","content_type":null,"content_length":"83228","record_id":"<urn:uuid:351fdfc0-c716-46af-a7a6-3e850efdb040>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: Unbalanced repeated measures analysis question
RE: st: Unbalanced repeated measures analysis question
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Unbalanced repeated measures analysis question
Date Thu, 22 Jul 2010 17:50:20 +0100
HLM in this context usually means hierarchical linear model[l]ing.
Ploutz-Snyder, Robert
You could define your model the way you suggested, yes, however mixed
models can be specified a number of different ways depending on your
research goals and how you want to consider the nesting of your repeated
measures factors (i.e. random terms).
There are a number of excellent books on this type of analysis, going by
names including mixed-effects modeling, mixed modeling, higher level
modeling (HLM), multi-level modeling (MLM) and probably a few other
terms... If you are interested in a more Applied book that uses Stata in
particular, Rabe-Hesketh and Skrondal put together a nice book
called Multilevel and Longitudinal Modeling Using Stata. I think you
might do well to take a course in MLM if you can to at least wrap your
brain around the theory. But if you want to jump right in then a book
like this one could get you going in the right direction.
Karin Jensen
Thanks to Robert and David for your helpful comments. Sorry to sound
stupid here but mixed models are entirely new to me. I have been
reading up on them.
I have the variables outlined below:
SubjectID MeasurerID MeasurerType Result GoldStandard
where MeasurerID is always a certain MeasurerType (1-3)
SubjectID and MeasurerID should be random effects and MeasurerType
fixed? How would you specify that in the xtmixed syntax? I am
confused about having two grouping variables for the random effects.
{"url":"http://www.stata.com/statalist/archive/2010-07/msg01176.html","timestamp":"2014-04-19T10:21:06Z","content_type":null,"content_length":"9862","record_id":"<urn:uuid:124e65bd-b97f-499d-aa28-ac2f386d111e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Baseball is a game that generates an incredible amount of statistics. Fans use them to measure the performance of their favorite teams and players. Managers use them to evaluate players and to
make situational decisions. Since so much emphasis is placed on statistical analysis of the game, it might be helpful if you learn how these numbers are arrived at. On this page, you will find
formulas for computing the most basic baseball statistics. For those up to the challenge, I have also included sample problems and a quiz.
Batting Stats
Batting Average (BA) - Probably the most referred to stat for batters, the batting average expresses in decimal form the comparison of Hits to At Bats. The formula is as follows:
BA = Hits/At Bats
For example, in 1954, a young Henry Aaron produced 131 hits in 468 official At Bats. Plugging those numbers into our formula, we come up with
BA = 131/468
Dividing 131 by 468, and computing the quotient to four decimal places, we arrive at .2799. Rounding off to three digits, we find that Mr. Aaron batted .280 in 1954. (As an average, that figure
implies that he had 28 hits for every 100 at bats, or 280 for every thousand.)
Example 1. In 1959, Hammering Hank had 629 official AB's with 223 Hits. What was his batting average for 1959?
Example 2. In 1941, Ted Williams, The Splendid Splinter, hit safely 185 times out of 456 official times at bat. Figure his batting average, and commit it to memory.
Total Bases (TB) - Before we move on to Slugging Percentage and OPS (On Base Plus Slugging), let's have a quick look at Total Bases.
The formula for computing Total Bases is TB = a(1) + b(2) + c(3) + d(4), where "a" represents the number of singles, "b" the number of doubles, "c" the number of triples and "d" the number of
home runs. A batter's stat line doesn't usually include singles. Just remember that you can determine how many singles a player has hit by subtracting his or her total of extra-base hits from
their total number of hits.
For example, we know that in 1954 Hank Aaron banged out 131 hits. Of those 131, 27 were doubles, 6 were triples, and 13 were home runs. Adding 27 + 6 + 13, we come up with 46 extra-base hits.
Subtracting 46 from 131, we find that Mr. Aaron stroked 85 singles that year. Using the same variables as we used for computing Total Bases, we can make a formula for arriving at a player's
singles total: Singles = Total Hits - b - c - d.
Example 3. Of the 185 hits Ted Williams had in 1939 44 were doubles, 11 were triples, and 31 were home runs. How many total bases did he accumulate?
Slugging Percentage (SLG) - A common criticism of the Batting Average is the fact that it doesn't differentiate between singles and home runs and therefore isn't a true measure of a batter's
performance. Slugging Percentage (which normally isn't presented as a percentage) may give us a clearer picture of a batter's effectiveness. Slugging Percentage is equal to Total Bases divided by
At Bats. The formula is SLG = TB/AB.
In 1957, St. Louis Cardinals great Stan Musial batted .357 with 176 hits (38 doubles, 3 triples, and 29 home runs) in 502 At Bats. Let's figure out his SLG. First, let's compute his total bases.
Subtracting his 70 extra-base hits from his hit total we arrive at 106 singles. Plugging the numbers into the formula (see above), we arrive at this:
TB = 1(106) + 2(38) + 3(3) + 29(4) = 106 + 76 + 9 + 116 = 307
Now, we're ready to compute his SLG.
SLG = TB/AB = 307/502 = .6115 = .612
Example 4. In 1920, Babe Ruth had a pretty good year batting .376 with 54 home runs. That year marked the first time he or anyone had hit 30, 40 or 50 home runs in a season. In fact, he hit more
home runs that year than 14 of the other 15 teams. His output was as follows: 458 At Bats, 172 hits, 36 doubles, 9 triples, and 54 homers. What was the Babe's slugging percentage?
On Base Percentage (OBP) - Introduced by Sports Illustrated in 1956, the OBP, which indicates how often a player does not make an out, is thought by many to be an even better gauge of a batter's
performance. In the formula below H represents hits, BB bases on balls, HBP number of times hit by a pitch, and PA, plate appearances.
OBP = [H + BB + HBP]/PA
To determine the total amount of Plate Appearances you must add AB's to those appearances that don't normally count as an official time at bat, times when a player walks, is hit by a pitch, or
sacrifices. Plug the numbers into this formula: PA = AB + BB + HBP + SF.
In 1949, Ted Williams posted these numbers: 566 At Bats, 194 Hits, 162 walks, 0 sacrifices, and 2 hit-by-pitches. First, lets compute Plate Appearances.
PA = AB + BB + HBP + SF = 566 + 162 + 2 + 0 = 730
Now, we're ready to figure out Ted's OBP.
OBP = [H + BB + HBP]/PA = [194 + 162 + 2]/730 = 358/730 = .4904 = .490
To express as a percentage, move the decimal two places to the right. In 1949, Ted Williams reached base 49% of the time.
Example 5. What was Stan "the Man" Musial's OBP for the 1949 season. Here are his numbers: 612 At Bats, 207 hits, 107 walks, 2 HBP's, and 0 sacrifices.
Before moving on, I'd like to mention a stat known as OBP + SLG. It is what it says - compute it by adding OBP and SLG. Be sure to keep your decimal points lined up.
Runs Created (RC) - A Stat To Play With. Discovered by stats maven Bill James, Runs Created is a remarkable predictor of how many runs a team will score in a season. Since more runs scored
usually translates into more wins, this formula can be a valuable tool. By now, you should be able to compute this on your own. Here's the formula:
RC = [(H + BB) x (Total bases)]/[AB + BB]
Example 6. Here are Babe Ruth's 1920 stats again: 458 At Bats, 172 hits, 150 base on balls, 36 doubles, 9 triples, and 54 homers. Compute his RC.
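These formulas are easy to transcribe into a short script for checking your work (a sketch; the functions just restate the formulas above, and the sample line is Musial's 1957 season already worked out earlier):

    def total_bases(singles, doubles, triples, homers):
        return singles + 2 * doubles + 3 * triples + 4 * homers

    def slugging(hits, doubles, triples, homers, at_bats):
        singles = hits - doubles - triples - homers
        return total_bases(singles, doubles, triples, homers) / at_bats

    def on_base_pct(hits, walks, hbp, at_bats, sacrifices=0):
        plate_appearances = at_bats + walks + hbp + sacrifices
        return (hits + walks + hbp) / plate_appearances

    def runs_created(hits, walks, tb, at_bats):
        return (hits + walks) * tb / (at_bats + walks)

    # Stan Musial, 1957: 502 AB, 176 H, 38 2B, 3 3B, 29 HR
    print(round(slugging(176, 38, 3, 29, 502), 3))  # 0.612, matching the text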
Final Exam
These are Henry Aaron's numbers for the 1959 season: 154 Games, 629 At Bats, 223 Hits, 46 Doubles, 7 Triples, 39 Home Runs, 51 Bases on Balls, 9 Sacrifices, 4 HBP's. Compute the following:
Batting Average, Total Bases, Slugging Percentage and OBP. Compare his numbers with those of his fellow Hall of Famers.
Pitching Stats
Pitching stats are relatively easy to figure out. I will list here some basic formulas:
Won Lost Percentage (WLP) = Total Wins/Total Decisions
Earned Run Average (ERA) = (Earned Runs Given Up/Innings Pitched) X 9
WHIP = (Hits Allowed + Walks Allowed) / Total Innings Pitched
Strike outs per one hundred innings - (Total K's / Total Innings Pitched) x 100
Strike out to Walk ratio - K 's/BB's | {"url":"http://www.sports.aceswebworld.com/baseballmath.html","timestamp":"2014-04-20T10:47:44Z","content_type":null,"content_length":"17094","record_id":"<urn:uuid:481ad6b8-5e6d-498f-9d17-24aa9104cbcf>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
In Mathematica, Pictures Are Worth a Thousand Words
June 8, 2007 — Christopher Carlson, Senior User Interface Developer, User Interfaces
One of the challenges of developing Mathematica is resisting the urge to spend all my time playing with the graphics toys I create. A lot of what I do results in features so fun to explore that they
jeopardize the further development of Mathematica. I’d like to point out a few of them in this blog, starting with a simple but profound change in the behavior of Mathematica graphics: direct
graphics output.
In previous versions of Mathematica, the result of a Plot or other graphics command was the abbreviated form - Graphics - that represented the symbolic output. The actual graphical image itself was
spit out like a watermelon seed as a side-effect of the evaluation and was not associated with the symbolic output.
In Mathematica 6, the output and the image are one and the same, behavior we call “direct output” to contrast it with the “side-effect output” of previous versions. This simple change in behavior
underlies much of the interesting new functionality in Version 6.
An immediate consequence of direct output is that it is now easy to make lists of graphics:
(In previous versions, the output would have been:
“{- Graphics -, - Graphics -, - Graphics -}“.)
Tables of graphics are straightforward, too:
In fact, graphics and text are now so tightly integrated that graphics can appear anywhere in a typeset expression:
That’s all fine and good, but where did the symbolic output of graphics evaluations go? Answer: the image is the symbolic output. How can that be possible?
All Mathematica expressions have both input and output forms. The input form is the text you type when you enter an expression, for example:
The output form is the way the expression is presented in output–in this case, as a two-dimensional typeset fraction and radical:
As far as Mathematica is concerned, those are equivalent representations of the same expression. They can be used interchangeably.
In Mathematica 6, we made graphics follow the same paradigm. When you evaluate a graphics expression, the image you see is the output form of the corresponding Graphics or Graphics3D expression. For
the first time, the image and the textual expression are equivalent, and any output can be reused as input–that’s where the fun begins.
Let’s say you want to know the options of a Plot output. Type “// Options” after the image and evaluate.
{AspectRatio -> 1/GoldenRatio, Axes -> True, AxesOrigin -> {0, 0}, PlotRange -> {{0, 2 Pi}, {-1., 1.}}, PlotRangeClipping -> True, PlotRangePadding -> {Scaled[0.02], Scaled[0.02]}}
Want to add a label to your plot? Just Append the PlotLabel option to the image.
Replacement operations are especially fun. Replacing Line with Polygon in this plot output gives a filled plot.
Similarly, replacing Line with Point shows the vertices of the plot curve–revealing Mathematica‘s adaptive plot refinement strategy, which places points more densely in the regions of a plot that
have high curvature.
Of course, 3D graphics work the same way. Here’s a ParametricPlot3D of a cone. Adding black edges to the polygons that make up the cone’s surface reveals how Mathematica subdivides the polygonal mesh
to yield a smooth surface.
Notice that all of the symbolic information necessary to describe an image lurks not far behind the image itself. That makes it possible in Mathematica 6 to define functions that grab an image’s
symbolic description, rearrange or otherwise modify the description, and instantly update the image. Hook that function up to a button and put the button into a notebook, and you have the beginnings
of a palette that operates on graphics. It’s all incredibly easy in Mathematica 6. | {"url":"http://blog.wolfram.com/2007/06/08/in-mathematica-pictures-are-worth-a-thousand-words/","timestamp":"2014-04-19T09:25:11Z","content_type":null,"content_length":"55115","record_id":"<urn:uuid:35f9bc08-0d0e-4991-8b8d-2b17c5b0a6d5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
July 2009
Since all the checkers are on the 1-point, the players have only one legal move per dice throw: if it is a double they bear off four checkers; otherwise they bear off two. We start with the simpler
version of a single player by defining f(n) to be the probability of the last throw being a double when we start with n checkers.
Clearly: f(1)=f(2)=1/6; f(3)=f(4)=11/36 and f(n)=1/6*f(n-4)+5/6*f(n-2).
The solution for this recursion formula is f(2n-1)=f(2n)=(2+5*(-1/6)^n)/7.
The recursion formula for two players in the game is a little more complicated:
f(n,m)=1/6*f(m,n-4)+5/6*f(m,n-2) (with similar boundary conditions)
and its solution is f(15,15)=~0.3417.
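A short memoized implementation reproduces this value (a sketch; the boundary handling below — any roll bears off the last one or two checkers, while three or four remaining need a double to end the game immediately — is inferred from the single-player case above):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def f(n, m):
        """P(the game's last throw is a double) when the player to move
        has n checkers on the 1-point and the opponent has m."""
        if n <= 2:        # any throw bears off the last checkers...
            return 1 / 6  # ...and it is a double with probability 1/6
        if n <= 4:        # only a double ends the game on this roll
            return 1 / 6 + 5 / 6 * f(m, n - 2)
        return 1 / 6 * f(m, n - 4) + 5 / 6 * f(m, n - 2)

    print(f(15, 15))      # ~0.3417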
Note that for the full game, the answer would be even larger, since there are, for example, situations where one may bear off four checkers with a 6-6 throw and none with a 3-4 throw.
If you have any problems you think we might enjoy, please send them in. All replies should be sent to: webmster@us.ibm.com | {"url":"http://domino.research.ibm.com/Comm/wwwr_ponder.nsf/Solutions/July2009.html","timestamp":"2014-04-21T12:09:07Z","content_type":null,"content_length":"11964","record_id":"<urn:uuid:278921d6-14dd-4b21-acfc-edf01c3e1686>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
Denver Algebra 2 Tutor
Find a Denver Algebra 2 Tutor
...I have been tutoring high school students one-on-one in algebra for several years and you can't possibly do algebra if you don't know anything about prealgebra. I have tutored high school as
well as college/university students in Calculus, and of course one must know something about Precalculus ...
15 Subjects: including algebra 2, French, calculus, algebra 1
...While studying as an undergraduate I became a tutor for the mathematics department. This allowed me to bring my love of math to others. Helping others with math became as rewarding as my own
6 Subjects: including algebra 2, calculus, geometry, algebra 1
...I enjoyed helping other students to understand the subject material they were struggling with. Through helping others, my excitement for learning was enhanced, which inspired me to pursue
tutoring opportunities. I now have experience in tutoring algebra, geometry, pre-calculus, trigonometry, ca...
13 Subjects: including algebra 2, calculus, physics, geometry
...I'm Kate and I'm a young professional in the realm of information science. I have several years of experience with running a private tutoring business and a Master's degree in Applied
Biostatistics and Epidemiology. I also have experience in medical school and with many kinds of research.
10 Subjects: including algebra 2, chemistry, algebra 1, biology
...Additionally, Notre Dame requires each student to obtain a broad liberal arts foundation regardless of major, which enables me to think critically about a variety of topics and present topics
from different angles to students when necessary. Also, due to my math and science background, I will wo...
15 Subjects: including algebra 2, calculus, algebra 1, SAT math
Nearby Cities With algebra 2 Tutor
Arvada, CO algebra 2 Tutors
Aurora, CO algebra 2 Tutors
Centennial, CO algebra 2 Tutors
Cherry Hills Village, CO algebra 2 Tutors
Edgewater, CO algebra 2 Tutors
Englewood, CO algebra 2 Tutors
Glendale, CO algebra 2 Tutors
Greenwood Village, CO algebra 2 Tutors
Lakewood, CO algebra 2 Tutors
Littleton, CO algebra 2 Tutors
Northglenn, CO algebra 2 Tutors
Thornton, CO algebra 2 Tutors
Western Area, CO algebra 2 Tutors
Westminster, CO algebra 2 Tutors
Wheat Ridge algebra 2 Tutors | {"url":"http://www.purplemath.com/Denver_Algebra_2_tutors.php","timestamp":"2014-04-18T11:45:51Z","content_type":null,"content_length":"23817","record_id":"<urn:uuid:a977c277-1f1d-4bf6-8194-5f3463bfebd1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
Scope of functions available to OpenMx
Short and sweet: how can OpenMx access user-written functions and functions from other packages? For example, I would like to access alpha and sumx in the following code:
alpha <- .3
sumx <- function (x) {
  sum(x)  # the function body appears to be truncated in the original post; a simple sum is assumed here
}
x <- c(1, 2, 3)
testModel <- mxModel("testModel", mxAlgebra(expression="alpha * sumx(x)", name="test"))
Gory and detailed: I can certainly do this using mxMatrix and mxAlgebra statements, but the problem that I am dealing with is much more complex. It involves analyzing symptom count data in twins where there are lots of 0 counts, fewer counts of 1, etc. I assume that the count data follow Poisson processes for twin 1 and twin 2 (with respective parameters lambda1 and lambda2) and that lambda1 and lambda2 are random variables distributed as a bivariate gamma that allows for a correlation between lambda1 and lambda2.
To accomplish this, I need to integrate the bivariate gamma over lambda1 and lambda2. The easiest way to do this is to access numerical quadrature routines that are external to the OpenMx functions.
Any help is greatly appreciated.
Journal of Statistical Theory and Practice
What Is Imprecision?
Uncertainty is usually modelled by a probability distribution, and treated using techniques from probability theory. Such an uncertainty model will often be inadequate in cases where insufficient
information is available to identify a unique probability distribution. In that case, imprecise probabilities aim to represent and manipulate the knowledge that is actually available about the system.
Similar concerns arise when dealing with utility. In making decisions, each reward is assigned a single real number, called utility, and rewards are accordingly ranked. However, in many practical
cases, a complete ranking over all rewards is unrealistic. Imprecise utility aims to represent and reason with such incomplete preferences over rewards.
The main benefits of using imprecise probabilities and imprecise utilities, compared to classical statistical methods, are
• more reliable inference
• no pressing need for sensitivity analysis, as this is built into the model itself
• indecision can be explicitly modelled
• effect of modelling assumptions and information on inference is more apparent and credible
• information from different sources can be coherently combined
The term imprecision actually covers a very wide range of extensions of the classical theory of probability. To name just a few, they include
• lower and upper previsions (Walley, 1991)
• belief functions (Dempster, 1967; Shafer, 1976), theory of hints (Kohlas and Monney, 1995), transferable belief model (Smets, 1992)
• possibility measures (Dubois, 1985, 1988)
• non-additive measures (Denneberg, 1994)
• credal sets and sets of probabilities and utilities (Levi, 1980)
• risk measures (Artzner et al., 1999)
• 2- and n-monotone set functions, Choquet capacities (Choquet, 1953)
• comparative probability orderings (Keynes, 1921; De Finetti, 1931; Fine, 1973; also see overview by Fishburn, 1986)
• robust Bayes methods (Berger, 1984)
• sets of desirable gambles (Walley, 1991)
• p-boxes (Ferson et al., 2003)
• lower and upper envelopes/collectives (Papamarcou and Fine, 1991)
• interval probability (Weichselberger 2000, 2001)
• capacities (Huber, 1965; Huber and Strassen, 1973)
• ambiguity (Ellsberg, 1961)
• logical/fiducial probabilities (Hampel, 1993; Weichselberger, 2005)
• linear partial information (Kofler and Menges, 1976)
• multiple priors (Gilboa, 1989)
• partial identification of probability distributions (Manski, 2003)
Recent Advances in Statistics Using Imprecision
Currently, a popular approach to statistics using imprecision is the use of Walley's generalised Bayes rule, which is close in nature to the robust Bayesian approach, where a set of priors is used and each prior in the set is updated to produce a set of posteriors. A particularly successful model, albeit not without its critics, is the imprecise Dirichlet model, which has been applied, for example, in game theory, classification, Markov decision processes, aggregation, etc. Other statistical methods with promising potential for application include the bounded derivative model and nonparametric predictive inference.
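As a concrete illustration of the flavour of such models, the imprecise Dirichlet model replaces a single posterior probability for each category by a lower and an upper value; a minimal sketch (the categorical setup and the choice of the hyperparameter s are illustrative) is:

    def idm_bounds(counts, s=2):
        """Imprecise Dirichlet model: lower/upper posterior predictive
        probabilities for each category, given observed counts and the
        prior-strength hyperparameter s (s = 1 and s = 2 are common)."""
        total = sum(counts)
        lower = [n / (total + s) for n in counts]
        upper = [(n + s) / (total + s) for n in counts]
        return lower, upper

    # e.g. 10 successes and 6 failures observed
    print(idm_bounds([10, 6]))   # lower ~ [0.56, 0.33], upper ~ [0.67, 0.44]

The gap between the bounds shrinks as more data accumulate, which is one way the "reliable inference" benefit listed above shows up in practice.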
In classical statistics, limit theorems play a crucial role. Recently, generalised versions of such theorems have been presented, opening new possibilities from a frequentist perspective.
Imprecise probabilities and utilities also lead the way to generalised decision support methods. For example, generalisations of maximising expected utility lead to interesting research challenges,
the solutions to some of which have been presented and addressed. A common theme in all of these generalised methods, is that they focus on realistic reflection of available information and
preferences, and as such support informative decisions. This also poses challenges for elicitation and data collection.
Aim Of The Special Issue
With this special issue on imprecision we hope to promote new and recent techniques that employ imprecise methods in a useful way, and advance them to a wider audience. We especially hope to
demonstrate the benefits of imprecise models over traditional statistical methods. In particular we are looking for (but not exclusively):
• Applications enhanced by use of imprecision (less information, fewer assumptions).
• Theoretical and methodological developments inspired by practical problems, and illustrating their use in such problems.
• Studies, with examples of a practical nature, to emphasise advantages and disadvantages of imprecise methods compared to classical (both frequentist and Bayes) inferential methods, and also to show
similarities and differences to robust statistical and nonparametric methods (see above list).
• Decision support using imprecision in probabilities and/or utilities, with applications or illustrations from a practical perspective.
• J. Abellán and S. Moral. Upper entropy of credal sets. Applications to credal classification. International Journal of Approximate Reasoning, 39:235-255, 2005.
• Philippe Artzner, Freddy Delbaen, Jean-Marc Eber, and David Heath. Coherent measures of risk. Mathematical Finance, 9(3):203-228, 1999.
• Thomas Augustin and Frank P. A. Coolen. Nonparametric predictive inference and interval probability. Journal of Statistical Planning and Inference, 124:251-272, 2004.
• James O. Berger. The robust Bayesian viewpoint. In J. B. Kadane, editor, Robustness of Bayesian Analyses, pages 63-144. Elsevier Science, Amsterdam, 1984.
• Jean-Marc Bernard. An introduction to the imprecise Dirichlet model for multinomial data. International Journal of Approximate Reasoning, 39(2-3):123-150, 2005.
• G. Choquet. Theory of capacities. Annales de l'Institut Fourier, 5:131-295, 1953-54.
• Frank P. A. Coolen and Pauline Coolen-Schrijner. Nonparametric predictive comparison of proportions. Journal of Statistical Planning and Inference, 137:23-33, 2007.
• Pauline Coolen-Schrijner and Frank P. A. Coolen. Adaptive age replacement strategies based on nonparametric predictive inference. Journal of the Operational Research Society, 55:1281-1297, 2004.
• Gert de Cooman and Marco Zaffalon. Updating beliefs with incomplete observations. Artificial Intelligence, 159(1-2):75-125, 2004.
• Gert de Cooman and Matthias C. M. Troffaes. Dynamic programming for deterministic discrete-time systems with uncertain gain. International Journal of Approximate Reasoning, 39(2-3):257-278, June 2005.
• A. P. Dempster. Upper and lower probabilities induced by a multivalued mapping. Ann. Math. Statist., 38:325-339, 1967.
• Dieter Denneberg. Non-additive Measure and Integral. Kluwer, Dordrecht, 1994.
• Didier Dubois and Henri Prade. Possibility Theory - An Approach to Computerized Processing of Uncertainty. Plenum Press, New York, 1988.
• Daniel Ellsberg. Risk, ambiguity, and the Savage axioms. The Quarterly Journal of Economics, 75(4):643-669, 1961.
• Scott Ferson, Vladik Kreinovich, Lev Ginzburg, David S. Myers, and Kari Sentz. Constructing probability boxes and Dempster-Shafer structures. Technical Report SAND2002-4015, Sandia National
Laboratories, January 2003.
• Terrence L. Fine. Lower probability models for uncertainty and nondeterministic processes. Journal of Statistical Planning and Inference, 20:389-411, 1988.
• P. C. Fishburn. The axioms of subjective probability. Statistical Science, 1:335-358, 1986.
• Itzhak Gilboa and David Schmeidler. Maxmin expected utility with non-unique prior. Journal of Mathematical Economics, 18(2):141-153, 1989.
• F. Hampel. Some thoughts about the foundations of statistics. In S. Morgenthaler, E. Ronchetti, and W. A. Stahel, editors, New Directions in Statistical Data Analysis and Robustness, pages
125-137. Birkhäuser, Basel, 1993.
• Peter J. Huber. A robust version of the probability ratio test. The Annals of Mathematical Statistics, 36(6):1753-1758, 1965.
• Peter J. Huber. The use of Choquet capacities in statistics. Bulletin of the International Statistical Institute, XLV, Book 4, 1973.
• Peter J. Huber and Volker Strassen. Minimax tests and the Neyman-Pearson lemma for capacities. The Annals of Statistics, 1(2):251-263, 1973.
• E. Kofler and G. Menges. Entscheidungen bei unvollständiger Information, volume 136 of Lecture Notes in Economics and Mathematical Systems. Springer, Berlin, 1976.
• E. Kofler, Z. W. Kmietowicz, and A. D. Pearman. Decision making with linear partial information (L.P.I.). The Journal of the Operational Research Society, 35(12):1079-1090, 1984.
• J. Kohlas and P.-A. Monney. Mathematical Theory of Hints (An Approach to the Dempster-Shafer Theory of Evidence), volume 425 of Lecture Notes in Economics and Mathematical Systems. Springer,
• Isaac Levi. On indeterminate probabilities. Journal of Philosophy, 71:391-418, 1974.
• Isaac Levi. The Enterprise of Knowledge. An Essay on Knowledge, Credal Probability, and Chance. MIT Press, Cambridge, 1980.
• Charles Manski. Partial Identification of Probability Distributions. Springer Series in Statistics. Springer, New York, 2003.
• Adrian Papamarcou and Terrence L. Fine. Unstable collectives and envelopes of probability measures. The Annals of Probability, 19(2):893-906, 1991.
• R. Pelessoni and P. Vicig. Coherent risk measures and upper previsions. In G. de Cooman, T. L. Fine, and T. Seidenfeld, editors, ISIPTA '01 - Proceedings of the Second International Symposium on
Imprecise Probabilities and Their Applications, pages 307-315, Maastricht, 2001. Shaker Publishing.
• Glenn Shafer. A Mathematical Theory of Evidence. Princeton University Press, 1976.
• P. Smets. Resolving misunderstandings about belief functions. International Journal of Approximate Reasoning, 6:321-344, 1992.
• Matthias C. M. Troffaes. Learning and optimal control of imprecise Markov decision processes by dynamic programming using the imprecise Dirichlet model. In Miguel López-Díaz, María Á. Gil, Przemyslaw Grzegorzewski, Olgierd Hryniewicz, and Jonathan Lawry, editors, Soft Methodology and Random Information Systems, pages 141-148, Berlin, 2004. Springer.
• Matthias C. M. Troffaes. Decision making under uncertainty using imprecise probabilities. International Journal of Approximate Reasoning, 45:17-29, 2007.
• Lev V. Utkin and Thomas Augustin. Decision making under incomplete data using the imprecise Dirichlet model. International Journal of Approximate Reasoning, 44:322-338, 2007.
• Peter Walley. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London, 1991.
• Peter Walley. Inferences from multinomial data: Learning about a bag of marbles. Journal of the Royal Statistical Society, 58(1):3-34, 1996.
• Peter Walley. A bounded derivative model for prior ignorance about a real-valued parameter. Scandinavian Journal of Statistics, 24(4):463-483, 1997.
• K. Weichselberger. The theory of interval probability as a unifying concept for uncertainty. International Journal of Approximate Reasoning, 24:149-170, 2000.
• K. Weichselberger. Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitsrechnung I - Intervallwahrscheinlichkeit als umfassendes Konzept. Physica, Heidelberg, 2001. In cooperation with
T. Augustin and A. Wallner.
• K. Weichselberger. The logical concept of probability and statistical inference. In Fabio G. Cozman, Robert Nau, and Teddy Seidenfeld, editors, ISIPTA '05: Proceedings of the Fourth
International Symposium on Imprecise Probabilities and Their Applications, pages 396-405, Pittsburgh, USA, July 2005.
• Marco Zaffalon. The naive credal classifier. Journal of Statistical Planning and Inference, 105(1):5-21, June 2002.
• Marco Zaffalon, Keith Wesnes, and Orlando Petrini. Reliable diagnoses of dementia by the naive credal classifier inferred from incomplete cognitive data. Artificial Intelligence in Medicine, 29
(1-2):61-79, 2003. | {"url":"http://www.maths.dur.ac.uk/users/matthias.troffaes/jstpip/improb.html","timestamp":"2014-04-19T06:52:02Z","content_type":null,"content_length":"16138","record_id":"<urn:uuid:c07e5082-b10f-43a1-a7cc-0e668f2398fa>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
Christmas Ornaments
Given a collection of Christmas ornaments in the shape of arbitrary
polyhedra (such as, but not limited to, cubes, tetrahedra, icosahedra,
etc), suppose each face of each polyhedron is colored depending on
the number of edges of the face. For example, every four-sided face
(such as a face of a cube) has color number 4, and every triangular
face has color number 3, and in general every n-sided face has color
number n. Can there be a Christmas Ornament with all sides painted
a different color?
If we assume that two faces share no more than one common edge, and
that there are a finite number of faces, then the answer is easily
seen to be no. Simply list, in order, the numbers of neighbors for
the n faces, so we have a list of n strictly increasing positive
integers (because no two faces have the same number of neighbors).
Thus at least one of the n faces must border on at least n other
faces, which is impossible because there are only n-1 other faces.
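The counting argument is easy to check mechanically on ordinary ornaments; the
short sketch below (Python, with face lists chosen for illustration) tallies the
edge count of each face and reports whether any two coincide.

def all_faces_differ(edge_counts):
    # True only if every face of the polyhedron has a distinct number of edges.
    return len(set(edge_counts)) == len(edge_counts)

# Edge counts per face for some familiar solids.
for name, faces in [("tetrahedron", [3] * 4), ("cube", [4] * 6), ("icosahedron", [3] * 20)]:
    print(name, all_faces_differ(faces))

# With n normal faces the counts would have to be n distinct integers, each at
# least 3, so some face would need at least n + 2 neighbors, yet only n - 1
# other faces exist.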
On the other hand, if we remove the restriction to finite-faced
polyhedra, the answer is yes, because we can define a fractal
structure that has no two surfaces with the same number of "sides".
To illustrate this, consider a regular tetrahedron, each of whose
four triangular faces has 3 sides. Now draw a triangle on one face,
a square on another, a pentagon on another, and a 9-sided polygon
on the remaining face. If we exclude these polygonal regions from
their respective faces, then the original faces of the tetrahedron
have 6, 7, 8, and 12 edges, and the excluded regions have 3, 4, 5,
and 9 edges.
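The edge bookkeeping in that example can be verified mechanically; the sketch
below (Python, illustrative only) computes the counts when a k-sided polygon is
drawn on, and excluded from, a triangular face: the excluded region keeps its k
edges while the surrounding region ends up with 3 + k.

def split_triangular_face(k):
    # Excluding a k-gon from a 3-sided face leaves an outer region with
    # 3 + k edges and an inner (excluded) region with k edges.
    return 3 + k, k

for k in (3, 4, 5, 9):
    print(k, split_triangular_face(k))
# Reproduces the 6, 7, 8, 12 outer counts and the 3, 4, 5, 9 excluded counts.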
Needless to say, these "excluded regions" aren't really distinct
faces, because they reside entirely within the planes of the original
faces. However, we can fix this by building little pyramids on each
of those drawn polygons. This creates a large number of new
triangular faces, each of which obviously has three sides. Now we
can repeat the trick, drawing a 10-sided polygon on one of those
little triangular faces, an 11-sided polygon on another, a 15-sided
polygon on another, and so on. Then we build little pyramids on
these polygonal bases and repeat the process.
Thus, defining a solid as the limit of this construction process
carried on indefinitely, we have a fractal ornament with infinitely
many faces, no two of which have the same number of edges. Notice
that, even though each face shares only one edge with each of its
neighbors, this construction evades the earlier proof simply because
it's not a finite construction, i.e., there is no finite integer n
equal to the number of faces.
So, in order for the answer to be no, we need to restrict our
polyhedra to those with a finite number of faces. The question is
whether there exists a finite polyhedron such that no two faces
have the same number of edges - where "edge" is defined as a maximal
*contiguous* line segment contained in two specific faces. Thus,
two faces can share more than one edge. (Note that a contiguous
line segment counts as more than one edge for a given face if it
intersects more than one other face.) I'll also assume the surface
has genus 2 for the moment.
Let's say a face is "normal" if it shares exactly one edge with
each of its neighboring faces. We've seen that if all the faces
of a solid are normal, it's impossible for all of them to have a
different number of edges. Now let's consider a solid with one or
more non-normal face(s) (as suggested by David Einstein), such as
two edges adjoining
the same face of neighbor
___ ____
\ \ / /
\ \/ /
This implies the existence of two faces X and Y that are adjacent
over (at least) two separate segments of their common line, as
illustrated below:
Face X
c ___/ \
/ d \
A_______B/ \C__________D
\_______ j__/
l g\ / k
h i
Face Y
Faces X and Y share the common edges AB and CD, but they do not
connect to each other on the segment BC. Instead, within the region
bounded by the cycle of vertices BcdefCkjihglB we have a set of
other surfaces. Let's call a region like this an "anomaly" (borrowed
from crystallography).
Suppose all the surfaces enclosed by X and Y in this anomaly are
normal. We can then apply our original argument (strengthened by the
fact that no face can have fewer than 3 edges) to show that at least
one of the n faces in this anomaly must have at least n+2 neighbors,
which is impossible because even counting X and Y there are only n+1
available neighbors.
Therefore, we conclude that not all the faces in this anomaly can be
normal, i.e., there must be two faces that share more than one common
edge. In other words, there must be a (strictly smaller) anomaly
within the anomaly. We can now repeat the above argument to prove
that THIS smaller anomaly must also contain an anomaly, and so on
ad infinitum.
This shows again that if we allow the polyhedron to have infinitely
many faces the proof doesn't work - fortunately - because the
theorem is false in that case. On the other hand, if we consider
only polyhedra with a FINITE number of faces, then we must at some
stage reach an anomaly that is filled with only normal faces, and
so those faces cannot all have a different number of edges.
What about the restriction to surfaces of genus 2? I imposed that
because of the possibility that for a surface of higher genus an
anomaly might "enclose itself", and thereby allow a finite solution,
if the anomaly contains a "wormhole". There may be some way to close
this loophole (literally!) for surfaces of higher genus, but I don't
see it at the moment.
"God hath given you one face,
and you make yourself another."
| {"url":"http://mathpages.com/home/kmath450.htm","timestamp":"2014-04-21T09:36:51Z","content_type":null,"content_length":"6688","record_id":"<urn:uuid:0594c53f-2f38-4e11-86a3-da29351e2b57>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick question about derivatives involving Newton's dot notation and the chain rule: I was just wondering if, say you have a velocity function in terms of t, would that be considered x-dot? So for
example, if they give x=2t and that's in terms of velocity, it's x-dot? And then the derivative of that for acceleration would be x-double dot? Furthermore, say you have a velocity position function
y=x^2. Would the chain rule for that be y = 2x*(x-dot)? If you're confused by when I say x-dot, I'm talking about this: http://web.mst.edu/~reflori/be150/Dyn%20Lecture%20Videos/
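For what it is worth, a short worked version of the notation in the question (standard calculus, not part of the original thread): if $x(t) = 2t$ is a position, then $\dot{x} = dx/dt = 2$ is the velocity and $\ddot{x} = 0$ is the acceleration; and if $y = x^2$, the chain rule in dot notation reads $\dot{y} = 2x\,\dot{x}$, which here gives $\dot{y} = 2(2t)(2) = 8t$.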
| {"url":"http://openstudy.com/updates/50eccf25e4b0d4a537ccf7ec","timestamp":"2014-04-17T18:37:19Z","content_type":null,"content_length":"28216","record_id":"<urn:uuid:94836ec4-de12-4c14-8c23-96282e3f8357>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Computerising Mathematical Text with MathLang
by Fairouz Kamareddine , Joe B. Wells , CHRISTOPH ZENGLER
author = {Fairouz Kamareddine and Joe B. Wells and CHRISTOPH ZENGLER},
title = {Computerising Mathematical Text with MathLang},
year = {}
Mathematical texts can be computerised in many ways that capture differing amounts of the mathematical meaning. At one end, there is document imaging, which captures the arrangement of black marks on
paper, while at the other end there are proof assistants (e.g., Mizar, Isabelle, Coq, etc.), which capture the full mathematical meaning and have proofs expressed in a formal foundation of
mathematics. In between, there are computer typesetting systems (e.g., LATEX and Presentation MathML) and semantically oriented systems (e.g., Content MathML, OpenMath, OMDoc, etc.). The MathLang
project was initiated in 2000 by Fairouz Kamareddine and Joe Wells with the aim of developing an approach for computerising mathematical texts which is flexible enough to connect the different
approaches to computerisation, which allows various degrees of formalisation, and which is compatible with different logical frameworks (e.g., set theory, category theory, type theory, etc.) and
proof systems. The approach is embodied in a computer representation, which we call MathLang, and associated software tools, which are being developed by ongoing work. Four Ph.D. students (Manuel
Maarek (2002/2007), Krzysztof Retel (since 2004), Robert Lamar (since 2006)), and Christoph Zengler (since 2008) and over a dozen master’s degree and undergraduate
| {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.150.354","timestamp":"2014-04-19T00:41:27Z","content_type":null,"content_length":"25614","record_id":"<urn:uuid:af6bf34e-bb97-4f9c-b651-7873739eee5b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
geometry in daily life project
Best Results From Wikipedia Yahoo Answers Youtube
From Wikipedia
Algebra Project
The Algebra Project is a national U.S. mathematics literacy effort aimed at helping low-income students and students of color successfully achieve mathematical skills that are a prerequisite for a
college preparatory mathematics sequence in high school. Partially, the Project's mission is to ensure "full citizenship in today's technological society." Founded by Civil Rights activist and Math
educator Robert Parris Moses in the 1980s, the Algebra Project has developed curricular materials, trained teachers and teacher-trainers, and provided ongoing professional development support and
community involvement activities to schools seeking to achieve a systemic change in mathematics education.
The Algebra Project reaches approximately 10,000 students and approximately 300 teachers per year in 28 local sites across 10 states.
The Algebra Project focuses on the Southern U.S., where the Southern Initiative of the Algebra Project is directed by David J. Dennis, Sr., and on the Young Peoples' Project (YPP), which recruits,
trains and deploys high school and college age "Math Literacy Workers" to work with their younger peers in a variety of math learning opportunities and engage "the demand side" of mathematics
education reform. The YPP is directed by Omowale Moses.
Increased student performance in mathematics, as well as greater numbers of students enrolling in college preparatory mathematics classes, is a well documented outcome of the project's work.
The Algebra Project was born out of one parent's concern with the mathematics education of his children in the public schools of Cambridge, Massachusetts. In 1982, Bob Moses was invited by Mary Lou
Mehrling, his daughter's eighth grade teacher, to help several students with the study of algebra. Moses, who had taught secondary school mathematics in New York City and Tanzania, decided that an
appropriate goal for those students was to have enough skills in algebra to qualify for honors math and science courses in high school. His success in producing the first students from the Open
Program of the Martin Luther King School who passed the city-wide algebra examination and qualified for ninth grade honors geometry was a testament to his skill as a teacher. It also highlighted a
serious problem: Most students in the Open Program were expected not to do well in mathematics.
Moses approached the problem at the Open Program in a similar manner to problems he and others had faced in the early sixties in helping the black community of Mississippi seek political power
through the vote. While on the surface the problem of the acquisition of political power looked like a simple issue of enticing people to vote, the problem would involve answering an interrelated set
of questions. "What is the vote for?" "Why do we want it in the first place?" What must we do right now to ensure that when we have the vote, it will work for us to benefit our communities? Answers
to these questions eventually resulted in an important context in which to ask people to vote. This context was the Mississippi Freedom Democratic Party, a community based political party.
Similarly, the everyday issues of students failing at mathematics in the Open Program would require a more complex set of issues and community of individuals. Moses, the parent-as-organizer in the
program, instinctively used the lesson he had learned in Mississippi transforming the everyday issues into a broader political question for the Open Program community to consider: What is algebra
for? Why do we want children to study it? What do we need to include in the mathematics education of every middle school student, to provide each of them with access to the college preparatory
mathematics curriculum in high school? Why is it important to gain such access? Within these questions, a context for understanding the problems of mathematics education emerged, and a possible
solution and effort at community organizing represented by the Algebra Project began to take shape.
The answers to the questions, "What is algebra for?" and "Why do we want children to study it?", play an important role in the Algebra Project. The project assumes that there is a new standard in
assessing mathematics education, a standard of mathematical literacy. In this not so far future, a broad range of mathematical skills will join traditional skills in reading and writing in the
definition of literacy. These mathematical skills will not only be important in gaining access to college and math and science related careers, but will also be necessary for full participation in
the economic life of this society. In this context, the Algebra Project has as a goal that schools embrace a standard of mathematics education that requires that children be mathematically literate.
This will require a community of educators including parents, teachers and school administrators who understand the paramount importance of mathematics education in providing access to the economic
life of this society. An answer to the question "What do we need to include in the mathematics education of every middle school student?" also frames the Algebra Project.
Student strike
From March 1, 2006 to March 4, 2006, Baltimore City Public School System students led by the Baltimore City Algebra Project and coming from high schools across Baltimore City held a three-day student
strike to oppose an imminent plan to "consolidate" many area high schools into fewer buildings. The school system claims these buildings are underutilized, but the students and other advocates
counter that the only reason there is extra space in these buildings is because class sizes often are about 40 students per class. Mayor O'Malley apparently gave an ear to the students' demands in
this latest round of strike actions, fearing it could affect his status with the general public in a gubernatorial election year.
The Young People's Project
Founded in 1996, the Young People’s Project (YPP) is an outgrowth of the Algebra Project. YPP has established sites in Jackson, MS, Chicago, IL, and the Greater Boston area of Massachusetts, and is
developing programs in Miami, FL, Petersburg, VA, Los Angeles, CA, Ann Arbor, MI, and Mansfield, OH. Through Math Literacy Worker trainings and development, workshops and community events, YPP
promotes math literacy as a tool for young people to demand of themselves, their
From Yahoo Answers
Question:me in class 10th and need to make a model on the application of 3d geometry in daily life...........please help....please..suggest some sites
Answers:3D geometry explains different object with three-dimensional shapes, that cannot be sketched on papers. Spheres, Cones are the example of 3D. A ball is used in daily life. Motor car tyres are
cylindrical and are also in daily use. You look at your TV which is a 3D object of daily use. A die is in the shape of a cube. A portable DVD player is in the shape of a rectangular prism.
Question:I am not asking you to do my homework for me, but I would highly appreciate it if you could help me out on this project. I have to take pictures of our geometry class's lesson concepts in
the real world. Could you help me find examples of the following: -points -planes -line segments -midpoint -segment bisector -acute angle -obtuse angle -right angle -acute triangle -obtuse triangle
-right triangle -scalene triangle -isosceles triangle -equilateral triangle
Answers:There are many examples of these items in your daily life. Look around your neighborhood, go to a supermarket. Look at billboards, at street signs (the yield is a good example of a
equilateral triangle, you may find obtuse angles in all sides of a stop sign). Look at your school supplies too (geometry rulers make excellent isosceles and scalene triangles). Most rooftops are
angled. Check patterns on your clothing. All of these are good suggestions.
Question:does anyone know a geometry vocab. word that starts with F, J, N, Q, U,W, X, Y, or Z. I have to be able to show a picture of it in real life. Due Friday. P.S it's an ABC book of geometry.
Answers:F: Figure, face. N: Nonagon Q: Quadrilateral, Quadratic equation U: Undecagon, unit, union. W: wedge Hope that helped=)
Question:I have a project on chemical changes in our every day life. can some one specifically tell me some chemical changes and why they are chemical changes? example: a nail rusts it is a chemical
reaction because rusting is a chemical change/reaction and it is a new substance and it is irreversible. Oh, and is breaking down food with saliva a chemical change? If so, why?
Answers:Burning a log of wood Mixing an acid with a base, producing water and a salt. Photosynthesis - a process in which carbon dioxide and water are changed into sugars by plants. Cracking heavy
hydrocarbons to create lighter hydrocarbons (part of the process of refining oil). Cooking examples: popcorn, cake, pancakes, and eggs Oxidation examples: rust or tarnishing Combustion Mixing
chemicals Rotting of fruit
From Youtube
Chemistry In Daily Life - Part 1 :Please do visit our websites; www.graveyardwarriors.webs.com http This is the project done by the SSLC students (Batch - 2009-2010) of St.Michael s' AIHSS,Kannur.
There where five members in this group and the leader among them was Adlin Antony.
Living My Life Faster - 8 years of JK's Daily Photo Project :UPDATE: Seems this video is the bait in a "likejacking" scheme on facebook: www.readwriteweb.com I DID NOT set up those pages, nor condone
them in any way. Oct 01 1998 - 2006 The video was done in After Effects 7 Pro. That's how the eyes were aligned. And thanks to YouTube's old FLV compression, this is half as fast (15 fps vs. 30 fps)
as it should be. So, you're missing out on half the images. Better image quality in the video this way If you want the total experience, visit the project site: www.c71123.com music derived from:
Jankenpopp jankenpopp.free.fr license Creative Commons Attribution-ShareAlike 2.0 creativecommons.org | {"url":"http://www.edurite.com/kbase/geometry-in-daily-life-project","timestamp":"2014-04-20T10:48:18Z","content_type":null,"content_length":"77982","record_id":"<urn:uuid:65960062-9812-48aa-9beb-63f802cd5f89>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
2097152.00 Byte to mb
You asked:
2097152.00 Byte to mb
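For reference, the conversion itself is simple arithmetic; a minimal sketch (Python, values taken from the query):

bytes_value = 2097152.00
print(bytes_value / 1024**2)   # binary megabytes (MiB): 2.0
print(bytes_value / 1000**2)   # decimal megabytes (MB): about 2.097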
| {"url":"http://www.evi.com/q/2097152.00_byte_to_mb","timestamp":"2014-04-20T14:13:21Z","content_type":null,"content_length":"51223","record_id":"<urn:uuid:68e65a0f-f3c1-4db4-b411-e8a24ab3e0c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Bibliography and References – Statistical Posts
The following are the reference sources from the posts on the homogeneity of variances being presented in the coming weeks.
1. Alexander, R. A. and Govern, D. M. (1994) A new and simpler approximation for ANOVA under variance heterogeneity. Journal of Educational Statistics, 19, 91–101.
2. Andrews, D. F. (1971) A note on the selection of data transformation. Biometrika, 58, 249–254.
3. Andrews, D. F., Gnanadesikan, R. and Warner, J. L. (1971) Transformations of multivariate data. Biometrics, 27, 825–840.
4. Anscombe, F. J. (1948) The transformation of poisson, binomial and negative binomial data. Biometrika, 35, 246–254.
5. Atkinson, A. C. (1985) Plots, Transformations and regression. Oxford University Press, Oxford.
6. Bartlett, M. S. (1937) Properties of sufficiency and statistical tests. Proceedings of the Royal Society of London, A, 160, 268–282.
7. Bartlett, M. S. and Kendall, D. G. (1946) The statistical analysis of variance-heterogeneity and the logarithmic transformation. J. Roy. Statist. Soc., Suppl. 8, 128–138.
8. Beauchamp, J. J. and Robson, D. S. (1986) Transformation considerations in discriminant analysis. Communication in Statistics–Simulation and Computation, 15(1), 147–179.
9. Bickel, P. J. (1965) On some robust estimates of location. Ann. Math. Statist., 36,847–858.
10. Bickel, P. J., and Doksum, K. A. (1981) An analysis of transformations revisited. J. American Statistical. Assoc., 76, 296–311.
11. Bishop, T. A. and Dudewicz, E. J. (1978) Exact analysis of variance with unequal variances: Test procedures and tables. Technometrics, 20, 419–430.
12. Boneau, C. A. (1960) The effects of violations of assumptions underlying the t-test. Psychological Bulletin, 57, 49–64.
13. Boos, D. D. and Brownie, C. (1989) Bootstrap methods for testing homogeneity of variances. Technometrics, 31(1), 69–82.
14. Box, G. E. P. (1953) Non-normality and tests on variances. Biometrika, 40, 318–335.
15. Box, G. E. P. (1954) Some theorems on quadratic forms applied in the study of analysis of variance problems, I. Effect of inequality of variance in one-way model. Ann. Math. Statist., 25, 290–302.
16. Box, G. P. E. and Andersen, S. L. (1955) Permutation theory in the derivation of robust criteria and the study of departures from assumptions. J. Roy. Statist. Soc., Ser. B, 17, 1–34.
17. Box, G. E. P., and Cox, D. R. (1964) An analysis of transformations (with discussion) J. Roy. Statist. Soc., 26, 211–246.
18. Brown, M. B. and Forsythe, A. B. (1974) Robust tests for the equality of variances. J. American Statistical. Assoc., 69, 364–367.
19. Brown, M. B. and Forsythe, A. B. (1974a) The small sample behaviour of some statistics which test the equality of several means. Technometrics, 16, 129–132.
20. Bryk, A. S. and Raudenbush, S. J.(1987) Heterogeneity of variance in experimental studies: A challenge to conventional interpretations. Psychological Bulletin, 104, 396–404.
21. Camdeviren, H. and Mendes, M. (2005) A simulation study for type III error rates of some variance homogeneity tests. Pak. J. Statist., 21(2), 223–234.
22. Carroll, R. J. (1980) A robust method for testing transformation to achieve approximate normality. J. Roy. Statist. Soc., Series B, 42, 71–78.
23. Carroll, R. J. (1982a) Tests for regression parameters in power transformation models Scand. J. Statist., 9, 217–222.
24. Carroll, R. J., and Ruppert, D. (1984) Power transformations when fitting theoretical models to data. J. American Statistical. Assoc., 79, 321–328.
25. Carroll, R. J. and Ruppert, D. (1987) Diagnostics and robust estimation when transforming the regression model and the response. Technometrics, 28, 287–299.
26. Cassella, G & Berger, R (2002), Statistical Interference, 2^nd Ed. Duxbury Press
27. Chang, H. S. (1977a) A computer program for Box-Cox transformation and estimation technique. Econometrica, 45, 1741.
28. Chen, H. (1995) Tests following transformations. Ann. Statist., 23, 1587–1593.
29. Chen, H. and Loh, W. Y. (1992) Bounds on ARE’s of tests following Box-Cox transformations. Ann. Statist., 20, 1485–1500.
30. Chen, S.-Y. and Chen, H. J. (1998) Single-stage analysis of variance under heteroscedasticity. Communication in Statistics–Simulation and Computation, 27, 641–666.
31. Cohen, A., and Sackrowitz, H. B. (1987) An approach to inference following model selection with applications to transformation-based and adaptive inference. J. Amer. Statist. Assoc., 82,
32. Cochran, W. G. (1941) The distribution of the largest of a set of estimated variance as a fraction of their total. Ann. Eug., 11, 47–52.
33. Cochran, W. G. and Cox, G. M. (1957) Experimental design. New York: John Willey and Sons Inc.
34. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
35. Conover, W. J., Johnson, M. E. and Johnsons, M. M. (1981) A comparative study of tests for homogeneity of variances with applications to the outer continental shelf bidding data. Technometrics,
23, 351–361.
36. Crowder, M. J. (2000) Tests for a family of survival models based on extremes. In Recent Advances in Reliability Theory, N. Limnios and M. Nikulin, Eds., pp. 307–321. Birkhauser, Boston.
37. Dunn, J. E. and Tubbs, J. D. (1980) VARSTAB: A procedure for determining homoscedastic transformations of multivariate normal populations. Communications in Statistics–Simulation and Computation
B, 9(6), 589–598.
38. Draper, N. R. and Cox, D. R. (1969) On distributions and their transformations to normality. J. Roy. Statist. Soc., Series B, 31, 472–476.
39. Draper, N. R. and Hunter, W. G. (1969) Transformations: Some examples revisited. Technometrics, 11, 23–40.
40. Efron, B. (1982) Transformation theory: how normal is a family of distributions? Ann. Statist., 10, 323-339.
41. Ehri, L. C., Nunes, S. R., Stahl, S. A. & Willows, D. M. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel’s meta-analysis. Review of
Educational Research, 71(3), 393-447.
42. Fellers, R. R. (1972) The effects of non-normality and sample size on the robustness of tests of homogeneity of variance. Paper presented at the meeting of the Northeast Educational Research
43. Fisher, R. A. and Mather, K. (1943) The inheritance of style length “Lythrum Salicano”. Ann. Eug., 12, 1–23.
44. Fleming, T. R., O’Fallon, J. R., O’Brien, P. C. and Harrington, D. P. (1980) Modified Kolmogorov-Smirnov test procedures with application to arbitrary right censored data. Biometrics, 36, 607–626.
45. Freiman, J. A., Chalmers, T. C., Smith, H.. & Kuebler, R. R. (1978). The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial.
The New England Journal of Medicine, 299(13), 690-694.
46. Games, P., Winkler, H. and Probert, D. (1972) Robust tests for homogeneity of variance. Educational and Psychological Measurement. 32, 887–909.
47. Gartside, P. S. (1972) A study of methods for comparing several variances. J. American Statistical. Assoc., 67, 342–346.
48. Glass, G. V., Peckham, P. D. and Sanders, J. R. (1972) Consequences of failure to meet assumptions underlying analysis of variance and covariance. Review of Educational Research, 42, 237–288.
49. Goodman, S. N. & Berlin, J. (1994). The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results. Annals of Internal Medicine, 121, 201-6.
50. Graybill, F. A. (1976) The Theory and Applications of the Linear Model. Duxbury Press, London.
51. Gupta, A. K. and Rathie, A. K. (1983) Distribution of the likelihood ratio criterion for the problem of k samples. Metron, 40, 147–156.
52. Gupta, A. K., Harrar, S. and Pardo, L. (2004) On testing homogeneity of variances for non-normal models using entropy. Department of Mathematics and Statistics, Bowling Green State University,
Technical Report, No. 04-11.
53. Hair, Joseph F., Jr; Anderson, Rolph E.; Tatham, Ronald L.; and Black, William C. 1998. Multivariate Data Analysis, Fifth Edition. Englewood Cliffs, New Jersey: Prentice Hall.
54. Han, A. K. (1987) A non-parametric analysis of transformation. Journal of Econometrics, 35, 191–209.
55. Hartley, H. O. (1940) Testing the homogeneity of a set of variances. Biometrika, 31, 249–255.
56. Hartung, J. & Argac, D. (2001), Testing for homogeneity in combining two-armed trials with normally distributed responses. Sankhya; the Indian Journal of Statistics Vol. B. Pp 298-310
57. Hartley H. O. (1950) The maximum F -ratio as a short cut test for heterogeneity of variances. Biometrika, 37, 308–312.
58. Hayes, J. P. & Steidl, R. J. (1997). Statistical power analysis and amphibian population trends. Conservation Biology, 11, 273-275.
59. Hedges, L. V. & Pigott, T. D. (2001). The power of statistical tests in meta-analysis. Psychological Methods, 6(3), 203-217.
60. Hedges, L. V. & Pigott, T. D. (2004). The power of statistical tests for moderators in meta-analysis. Psychological Methods, 9(4), 424-445.
61. Hernandez, F., and Johnson, R. A. (1980) The large-sample behaviour of transformations to normality. J. American Statistical. Assoc., 75, 855–861.
62. Hinkley, D. V. (1975) On Power transformations to symmetry. Biometrika, 62, 101–111.
63. Hinkley, D. V. (1985) Transformation diagnostics for linear models. Biometrika, 72, 487–496.
64. Hotelling, H. (1953) New Light on the correlation coefficient and its transform. J. Royal Statistical Soc., Series B, 15, 193–232.
65. Hoyle, M. H. (1973) Transformations: An introduction and a bibliography. The International Statistical Review, 41, 203–223.
66. Hsiung, T. C. and Olejnik, S. (1996) Type I error rates and statistical power for James second-order test and the univariate F test in two-way ANOVA models under heteroscedasticity and/or
non-normality. Journal of Experimental Education, 65, 57–71.
67. Huang, C L., Moon, L. C. and Chang, H. S. (1978) A computer program using the Box-Cox transformation technique for the specification of functional form. The American Statistician, 32, 144.
68. Huber, P. J. (1972) Robust statistics: A review. Ann. Math. Statist., 43, 1041–1067.
69. JMP Software (2007) JMP Statistics and Graphics Guide, SAS Institute
70. John, J. A., and Draper, N. R. (1980) An alternative family of power transformations. Applied Statistics, 29, 190–197.
71. Kendall, M. G. and Stuart, A. (1966) Advanced Theory of Statistics, 2, Griffin, London.
72. Keselman, H. J., Carrier, K. C. and Lix, L. M. (1995) Robust and powerful orthogonal analysis. Psychometrika, 60, 395–418.
73. Keselman, H. J., Wilcox, R. R., Othman, A. R. and Fradette, K. (2002) Trimming, transforming statistics and bootstrapping: Circumventing the biasing effects of heteroscedasticity and
non-normality. Journal of Modern Applied Statistical Methods, 1(2), 288–309.
74. Lawless, J. F. (2000) Introduction to two classics in reliability theory. Technometrics, 42, 5–6.
75. Lawless, J. F. (2003) Statistical Models and Methods for Lifetime Data. Second Edition, Wiley Series in Probability and Statistics.
76. Layard, M. W. J. (1973) Robust large sample tests for homogeneity of variances. J. American Statistical. Assoc., 68, 195–198.
77. Levene, H. (1960) Robust tests for equality of variances. Contributions to probability and statistics. edited by I. Olkin. Stanford University Press. Palo Alto. CA., 278–292.
78. Levy, K. J. (1975a) An empirical comparison of several multiple range tests for variances. J. American Statistical. Assoc., 70, 180–183.
79. Levy K. J. (1975b) An empirical comparison of Z-variance and Box-Scheff´e tests for homogeneity of variance. Psychometrika, 40, 519–524.
80. Lim, T. S. and Loh, W. Y. (1996) A comparison of tests of equality of variances. Computational Statistics and Data Analysis, 22, 287–301.
81. Loh, W. Y. (1987) Some modifications of Levene’s test of variance homogeneity. Journal of Computational Statistics and Simulation, 28, 213–226.
82. Luh, W. M. and Guo, J. H. (2000) Approximate transformation trimmed mean methods to the test of simple linear regression slope equality. Journal of Applied Statistics, 27, 843–858.
83. Manly, B. F. (1976) Exponential data transformation. The Statistician, 25, 37–42.
84. Mendes, M. (2003) The comparison of Levene, Bartlett, Neymann-Pearson and Bartlett 2 tests in terms of actual type I error rates. Journal of Agriculture Sciences, 9(2), 143–146.
85. Mehrotra, D. V. (1997), Improving the Brown-Forsythe solution to the generalised Behrens-Fisher problem. Communications in Statistics, Series B, 26, 1139-1145.
86. Miller, R. G. Jr. (1968) Jackknifing variances. Ann. Math. Statist., 39, 567–582.
87. Miller, R. G. Jr. (1986) Beyond ANOVA, Basics of Applied Statistics. Wiley, New York.
88. Murphy, K. R. & Myors, B. (2004). Statistical power analysis (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.
89. Nelson, L. S. (2000) Comparing two variances from normal populations. J. Qual. Technol., 32, 79–80.
90. O’Brien, R. G. (1978) Robust techniques for testing heterogeneity of variance effects in factorial designs. Psychometrika, 43, 327–342.
91. Oshima, T. C. and Algina, J. (1992a) Type I error rates James’s second-order test and Wilcox’s Hmtest under heteroscedasticity and non-normality. British Journal of Mathematical and Statistical
Psychology, 45, 255–263.
92. Oshima, T. C and Algina, J. (1992b) A SAS program for testing the hypothesis of equal means under heteroscedasticity: James’s second-order test. Educational and Psychological Measurement, 52,
93. Oshima, T. C., Algina, R. A. and Lin, W. Y. (1994) Type I error rates for Welch’s test and James’s second-order test under non-normality and inequality of variance when there are two groups.
Journal of Educational and Behavioural Statistics, 19, 275–291.
94. Peechawanich, V. (1992), Probability theory and applications. Prakypueg, Bangkok
95. Phil, E. (1999), Checking Assumptions, Education 230 B/C Linear Statistical Models
96. Proschan, F. (1963) Theoretical explanation of observed decreasing failure rate. Technometrics, 5, 375–383.
97. Rayner, J.C.W. (1997) The Asymptotically Optimal Tests, The Statistician, Vol. 46, No. 3, (1997), pp. 337-346
98. Rayner, J.C.W. and Best, D.J. (1989) Smooth Tests of Goodness of Fit, New York, Oxford University Press.
99. Reed, J. M. & Blaustein, A. R. (1995). Assessment of “nondeclining” amphibian populations using power analysis. Conservation Biology, 9, 1299-1300.
100. Rogan, J. C., and Keselman, H. J. (1977) Is the ANOVA F -test robust to variance heterogeneity when sample sizes are equal? An investigation via a co-efficient of variation. American Educational
Research Journal, 14, 493–498.
101. Satterthwaite, F. (1941), Synthesis of variance. Psychometrika, Vol 6, Pp 309-316.
102. Scheff´e, H. (1959) The Analysis of Variance. Wiley, New York.
103. Schneider, P. J. and Penfield, D. A. (1997) Alexander and Govern's approximation: Providing an alternative to ANOVA under variance heterogeneity. Journal of Experimental Education, 65, 271–286.
104. Serfling, R. J. (2002) Approximate Theorem of Mathematical Statistics. John Wiley & Sons, Inc.
105. Sharma, S. C. (1991) A new jacknife test for homogeneity of variances. Communications in Statistics-Simulation and Computation, 20(2-3), 479–495.
106. Snee, R. D. (1986) An alternative approach to fitting models when re-expression of the response is useful. J. Qual. Technol. 18, 211–225.
107. Sokal, R. R. and Rholf, F. J. (1995) Biometry, New York: W.H. Freeman and Company.
108. Solomon, P. J. (1985) Transformation for components of variance and covariance. Biometrika, 72, 233–239.
109. Speed, T. P. (1987) Rejoinder: What is an Analysis of Variance? The Annals of Statistics, Vol. 15, No. 3 (Sep., 1987), pp. 937-941
110. Stigler, S. M. (1973) The asymptotic distribution of the trimmed mean. Ann. Statist., 1, 472–477.
111. Tang, J. and Gupta, A. K. (1987) On testing homogeneity of variances for Gaussian models. J. Statist. Comput. Simul., 27, 155–173.
112. Taylor, M. J. G. (1985a) Power transformation to symmetry. Biometrika, 72, 145-152.
113. Taylor, M. J. G. (1985b) Measure of location of skew distributions obtained through Box-Cox transformations. J. American Statistical. Assoc., 80, 427–432.
114. Taylor, D. J. & Muller, K. E. (1995). Computing confidence bounds for power and sample size of the general linear model. The American Statistician, 49(1), 43-47.
115. Thoeni, H. (1967) Transformation of variables used in the analysis of experimental and observational data: a review. Technical Report No. 7, Iowa State University, Ames.
116. Thomas, L. (1997). Retrospective power analysis. Conservation Biology, 11(1), 276-280.
117. Tippett, L. H. C. (1934) Statistical methods in textile research. Part 2. Uses of the binomial and poisson distributions. Shirley Inst. Mem., 13, 35–72.
118. Tukey, J. W. (1957) The comparative anatomy of transformations. Ann. Math. Statist., 28, 602–632.
119. Weerahandi, S. (1995) ANOVA under unequal error variances. Biometrics, 51, 589–599.
120. Welch, B. L. (1951) On the comparison of several mean values: An alternative approach. Biometrika, 38, 330–336.
121. Wei-ming, L. (1999) Developing trimmed mean test statistics for two-way fixed effects ANOVA models under variance heterogeneity and non-normality. The Journal of Experimental Education, 67(3),
122. Wilcox, R. R., Charlin, V. L. and Thomson, K. L. (1986) New Monte Carlo results on the robustness of the ANOVA F , W and F ∗ statistics. J. Statist. Comput. Simul., 15, 33–43.
123. Wilcox, R. R. (1988) A new alternative to the ANOVA F and new results on James's second-order method. British Journal of Mathematical and Statistical Psychology, 41, 109–117.
124. Wilcox, R. R. (1989) Adjusting for unequal variances when comparing means in one-way and two-way effects ANOVA models. Journal of Educational Statistics, 14, 269–278.
125. Wilcox, R. R. (1995) ANOVA: A paradigm for low power and misleading measures of effect size. Review of Educational Research, 65, 51–77.
126. Wilcox, R. R. (1997) A bootstrap modification of the Alexander-Govern ANOVA method, plus comments on comparing trimmed means. Educational and Psychological Measurement, 57(4), 655-665.
127. Wilcox, R. R. (2002) Comparing variances of two independent groups. British Journal of Mathematical and Statistical Psychology, 55, 169–175.
128. Wilk, M. B., Gnanadesikan, R. and Huyett, M. J. (1962) Estimation of parameters of the gamma distribution using order statistics. Biometrika, 49, 525–545.
129. Winer, B. J., Brown, D. R. and Michels, K. M. (1991) Statistical Principles in Experimental Design. New York: McGraw-Hill Book Company.
130. Zar, J. H. (1999) Biostatistical Analysis. New Jersey: Prentice–Hall Inc. Simon and Schuster/ A Viacom Company.
131. Zumbo, B. D. & Hubley, A. M. (1998). A note on misconceptions concerning prospective and retrospective power. The Statistician, 47(2), 385-388.
| {"url":"http://gse-compliance.blogspot.com/2009/12/bibliography-and-references-statistical.html","timestamp":"2014-04-17T06:49:37Z","content_type":null,"content_length":"151290","record_id":"<urn:uuid:454212e7-589a-4a13-846d-dbdcee00450d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/WARC/CC-MAIN-20140423032007-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Sound travels about 343 meters per second. The function d(t) = 343t gives the distance d(t) in meters that sound travels in t seconds. How far does sound travel in 8 seconds? A.)343 meters B.)686
meters C.)2,744 meters D.)3,430 meters For this problem would i just have to multiply 343 by 8 ?
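Worked through (plain substitution, not part of the original thread): $d(8) = 343 \times 8 = 2744$ meters, so multiplying 343 by 8 does evaluate the function at $t = 8$ and matches option C.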
| {"url":"http://openstudy.com/updates/50a16166e4b0e22d17ef0741","timestamp":"2014-04-20T01:02:11Z","content_type":null,"content_length":"39539","record_id":"<urn:uuid:2c619d74-f2f7-4f57-9af3-a1cf853d08a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Appendix

1. The Effect of the Localization and Overlay Precision on the Correlation
Consider a one-dimensional trajectory $X_A$ and a one-dimensional trajectory $X_B$. The Pearson correlation $R$ between both trajectories is given by:

$$R = \frac{\mathrm{cov}(X_A, X_B)}{\sqrt{\mathrm{var}(X_A)\,\mathrm{var}(X_B)}} \quad (8)$$

The numerator is called the covariance and is defined as:

$$\mathrm{cov}(X_A, X_B) = E[(X_A - E[X_A])(X_B - E[X_B])] \quad (9)$$

where $E[X]$ is the expected value of $X$. The denominator in Equation (8) is the square root of the product of two variances, defined by:

$$\mathrm{var}(X_A) = E[(X_A - E[X_A])^2], \qquad \mathrm{var}(X_B) = E[(X_B - E[X_B])^2] \quad (10)$$
x A = X A + δ A x B = X B + δ B ,
with δ[A] and δ[B] deviations caused by the finite localization and overlay precision. The part coming from the localization precision follows a distribution around zero with standard deviation σ[A]
and σ[B], respectively. The deviations caused by the overlay process are not strictly defined, besides that their difference is following a distribution around zero with standard deviation σ[o],
which is called the overlay precision. For mathematical convenience, it is therefore assumed that δ[A] and δ[B] are distributed around zero with a standard deviation σ A ′ = σ A 2 + σ o 2 / 2 and σ B
′ = σ B 2 + σ o 2 / 2, respectively. Combining Equations (9) and (11), the covariance between x[A] and x[B] is given by:
cov ( x A , x B ) = cov ( X A , X B ) .
The variance of x[A] and x[B] follow from Equations (10) and (11):
var ( x A ) = var ( X A ) + σ A ′ 2 var ( x B ) = var ( X B ) + σ B ′ 2
The Pearson correlation ρ between the observed trajectories x[A] and x[B] is thus given by:
ρ = cov ( X A , X B ) var ( X A ) var ( X B ) + σ B ′ 2 var ( X A ) + σ A ′ 2 var ( X B ) + σ B ′ 2 σ A ′ 2 .
Consider now the special situation $\sigma_A' = \sigma_B' = \sigma$; in this case the correlation becomes:

$$\rho = \frac{\mathrm{cov}(X_A, X_B)}{\sqrt{\mathrm{var}(X_A)\,\mathrm{var}(X_B) + \sigma^2\,\mathrm{var}(X_A) + \sigma^2\,\mathrm{var}(X_B) + \sigma^4}} \quad (15)$$

Both correlations will be equal if the following condition for $\sigma$ is fulfilled:

$$\sigma_B'^2\,\mathrm{var}(X_A) + \sigma_A'^2\,\mathrm{var}(X_B) + \sigma_B'^2\,\sigma_A'^2 = \sigma^2\,\mathrm{var}(X_A) + \sigma^2\,\mathrm{var}(X_B) + \sigma^4 \quad (16)$$

This is a quadratic equation in $\sigma^2$, with solution:

$$\sigma^2 = -\frac{\mathrm{var}(X_A) + \mathrm{var}(X_B)}{2} + \frac{1}{2}\sqrt{\left(\mathrm{var}(X_A) + \mathrm{var}(X_B)\right)^2 + 4\left\{\sigma_B'^2\,\mathrm{var}(X_A) + \sigma_A'^2\,\mathrm{var}(X_B) + \sigma_A'^2\,\sigma_B'^2\right\}} \quad (17)$$

Using Equation (13) and considering the definitions of $\sigma_A'$ and $\sigma_B'$, this can be rewritten as:

$$\sigma^2 = -\frac{\mathrm{var}(x_A) + \mathrm{var}(x_B) - \sigma_A^2 - \sigma_B^2 - \sigma_o^2}{2} + \frac{1}{2}\sqrt{\left(\mathrm{var}(x_A) + \mathrm{var}(x_B)\right)^2 + \left(\sigma_A^2 - \sigma_B^2\right)^2 - 2\left(\mathrm{var}(x_A) - \mathrm{var}(x_B)\right)\left(\sigma_A^2 - \sigma_B^2\right)} \quad (18)$$

This expression is more useful than Equation (17), since the variances $\mathrm{var}(X_A)$ and $\mathrm{var}(X_B)$ cannot be determined experimentally.
In reality, the complete trajectories $x_A$ and $x_B$ are not known; only discrete positions $x_A(t_i)$ and $x_B(t_i)$ at different time points $t_i$ ($i = 1, 2, \ldots, l$) are measured, from which the sample variances can be determined:

$$\mathrm{var}(x_A) = \frac{1}{l-1}\sum_{i=1}^{l}\left(x_A(t_i) - \langle x_A\rangle\right)^2, \qquad \mathrm{var}(x_B) = \frac{1}{l-1}\sum_{i=1}^{l}\left(x_B(t_i) - \langle x_B\rangle\right)^2 \quad (19)$$

with $\langle x_A\rangle$ and $\langle x_B\rangle$ the average positions of the observed trajectories $x_A$ and $x_B$, respectively:

$$\langle x_A\rangle = \frac{1}{l}\sum_{i=1}^{l} x_A(t_i), \qquad \langle x_B\rangle = \frac{1}{l}\sum_{i=1}^{l} x_B(t_i) \quad (20)$$
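A minimal numerical sketch of Equations (18)-(20) is given below (Python rather than the Matlab used later in the paper; variable names and the test values are illustrative only): given two observed one-dimensional position arrays and the known precisions, it returns the single equivalent precision $\sigma$.

import numpy as np

def equivalent_sigma(xa, xb, sigma_a, sigma_b, sigma_o):
    # Equal-precision sigma of Eq. (18), using the sample variances of Eq. (19).
    var_a, var_b = np.var(xa, ddof=1), np.var(xb, ddof=1)
    first = -(var_a + var_b - sigma_a**2 - sigma_b**2 - sigma_o**2) / 2
    radicand = ((var_a + var_b)**2 + (sigma_a**2 - sigma_b**2)**2
                - 2 * (var_a - var_b) * (sigma_a**2 - sigma_b**2))
    return np.sqrt(first + np.sqrt(radicand) / 2)

# Illustrative trajectories (positions in micrometres) and precisions.
rng = np.random.default_rng(0)
xa = np.cumsum(rng.normal(0.0, 0.45, 20))
xb = xa + rng.normal(0.0, 0.05, 20)
print(equivalent_sigma(xa, xb, sigma_a=0.05, sigma_b=0.05, sigma_o=0.03))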
2. Correlation between Trajectories of Interacting Objects

Consider a one-dimensional trajectory $X_A$ of one object and a one-dimensional trajectory $X_B$ of another object. Assume that both objects are interacting, resulting in identical trajectories, aside from a constant displacement $d$:

$$X_A = x, \qquad X_B = x + d \quad (21)$$

The observed trajectories $x_A$ and $x_B$ deviate from the real trajectories because of experimental uncertainty:

$$x_A = x + \delta_A, \qquad x_B = x + d + \delta_B \quad (22)$$

with $\delta_A$ and $\delta_B$ deviations caused by the finite localization and overlay precision. As explained above, both can be assumed to be distributed around zero with equal standard deviation $\sigma$ defined in Equation (17). According to Equation (15), the Pearson correlation between the observed trajectories $x_A$ and $x_B$ is thus given by:

$$\rho = \frac{\mathrm{cov}(x, x+d)}{\sqrt{\mathrm{var}(x)\,\mathrm{var}(x+d) + \sigma^2\,\mathrm{var}(x) + \sigma^2\,\mathrm{var}(x+d) + \sigma^4}} \quad (23)$$

According to Equations (9) and (10), the covariance $\mathrm{cov}(x, x+d)$ and the variance $\mathrm{var}(x+d)$ are equal to:

$$\mathrm{cov}(x, x+d) = \mathrm{var}(x), \qquad \mathrm{var}(x+d) = \mathrm{var}(x) \quad (24)$$

This allows Equation (23) to be rewritten as:

$$\rho = \frac{1}{\sqrt{1 + 2\dfrac{\sigma^2}{\mathrm{var}(x)} + \left(\dfrac{\sigma^2}{\mathrm{var}(x)}\right)^2}} \quad (25)$$
The correlation between observed trajectories of interacting objects is thus completely determined by the ratio $\sigma^2/\mathrm{var}(x)$. Assume for instance that the interacting objects are undergoing Brownian motion with diffusion coefficient $D$. If the trajectories are observed during a time $t$, the variance is given by [34]:

$$\mathrm{var}(x) = \frac{1}{3} D t \quad (26)$$

The mean step in the trajectory over a time interval $\tau < t$ is known to be [37]:

$$S = \sqrt{2 D \tau} \quad (27)$$

Combining Equations (26) and (27) immediately results in:

$$\mathrm{var}(x) = \frac{t}{6\tau} S^2 \quad (28)$$
Another example is linear motion with velocity $v$. If the trajectories are observed during a time $t$, the variance is given by:

$$\mathrm{var}(x) = \frac{1}{12} v^2 t^2 \quad (29)$$

The (mean) step in the trajectory over a time interval $\tau < t$ is:

$$S = v\tau \quad (30)$$

Combining Equations (29) and (30) immediately results in:

$$\mathrm{var}(x) = \frac{t^2}{12\tau^2} S^2 \quad (31)$$
Linear and Brownian motion thus give rise to the following relationship between trajectory variance and mean step:

$$\mathrm{var}(x) = f S^2 \quad (32)$$

where $f$ is a factor that depends on the ratio $t/\tau$ between the observation time and the time interval for the step. Inserting this expression in Equation (25) gives:

$$\rho = \frac{1}{\sqrt{1 + \dfrac{2}{f}\left(\dfrac{\sigma}{S}\right)^2 + \dfrac{1}{f^2}\left(\dfrac{\sigma}{S}\right)^4}} \quad (33)$$

In other words, for a certain ratio $t/\tau$, the observed correlation between two interacting objects undergoing Brownian or linear motion is completely determined by the following ratio, termed the relative localization error:

$$r = \frac{\sigma}{S} \quad (34)$$
However, the mean step S cannot be determined experimentally. In reality, the actual trajectory x is not known, only discrete positions x[A](t[i]) and x[B](t[i]) different time points t[i] (i = 1, 2,
..., l) are measured. In this case the time interval is given by τ = t[i] − t[i][−1] (i = 2, ..., l) and the total observation time by t = lτ, from which immediately follows that the ratio t/τ = l.
From the trajectories the sample mean steps can be determined as:
S A B - 1 2 l ∑ i = 2 l { | x A ( t i ) - x A ( t i - 1 ) | + | x B ( t i ) - x B ( t i - 1 ) | } .
These are estimations of the mean steps defined in Equations (27) and (30). All observed trajectories with length l of interacting objects that are undergoing Brownian or linear motion will thus have
the same expectation value for the correlation if they have the same relative localization error r. This result is valid for all types of motion that fulfill the condition in Equation (32), i.e., the
variance of the trajectories of the interacting objects should be linearly related to the square of the mean step.
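Purely as an illustration of how these two quantities could be computed in practice, the short sketch below (written in Python rather than the Matlab used for the simulations further on, and therefore only an assumption about implementation details) evaluates the sample mean step of Equation (34) and the expected correlation of Equation (33); the factor f = l/6 is the Brownian-motion case of Equation (32) with t/τ = l.

import numpy as np

def sample_mean_step(x_a, x_b):
    # Sample mean step S_AB of two observed 1D trajectories (Equation (34)).
    l = len(x_a)
    steps = np.abs(np.diff(x_a)) + np.abs(np.diff(x_b))
    return steps.sum() / (2 * l)

def expected_correlation(r, f):
    # Expected Pearson correlation for interacting objects (Equation (33)),
    # with r = sigma / S the relative localization error and f the motion factor.
    return 1.0 / np.sqrt(1.0 + (2.0 / f) * r**2 + (1.0 / f**2) * r**4)

# Brownian motion observed over l = 20 positions, so f = l / 6.
for r in (0.1, 0.5, 1.0):
    print(r, round(expected_correlation(r, 20 / 6), 3))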
In case of low localization precision or low mobility, the relative localization error r will be large (see Equation (34)). Simulations were performed to investigate the influence of a large relative
localization error r = 0.5 on the performance of the scanning window method.
Two sets of 1000 pairs of two-dimensional Brownian motion trajectories with length l = 20 and time interval τ = 0.1 s between successive positions were simulated in the Matlab programming environment
(The Mathworks, Natick, MA, USA). The Brownian motion step in each dimension was simulated with the Matlab function randn, assuming a standard deviation equal to the mean step $S = \sqrt{2D\tau}$. The
diffusion coefficient was taken D = 1 μm^2/s, resulting in S = 0.447 μm. In both sets, the two trajectories of each simulated pair start at the same position. The subsequent positions are also
identical in the first set (i.e., interaction), while they are independent from each other in the second set (i.e., no interaction). A normally distributed value was added to each coordinate of each
trajectory separately, again using the Matlab function randn. The standard deviation of this normal distribution is the localization precision σ, which is equal for both trajectories. The value of
the localization precision was chosen σ = 223.6 nm, in order to obtain a relative localization error r = 0.5. The overlay was taken to be perfect, i.e., σ[o] = 0. The scanning window method is
applied to each pair of simulated trajectories.
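A minimal re-implementation of this simulation is sketched below. It is written in Python rather than the original Matlab, and the plain full-trajectory Pearson correlation is used as a stand-in for the scanning window test, whose window size and threshold are not reproduced here; it only mimics the trajectory generation and noise model just described.

import numpy as np

rng = np.random.default_rng(0)
l, tau, D = 20, 0.1, 1.0       # trajectory length, time interval (s), diffusion coefficient (um^2/s)
S = np.sqrt(2 * D * tau)       # mean step per dimension, 0.447 um
sigma = 0.2236                 # localization precision (um), giving r = sigma / S = 0.5

def brownian(n):
    # One 2D Brownian trajectory: cumulative sum of Gaussian steps with std S.
    return np.cumsum(rng.normal(0.0, S, size=(n, 2)), axis=0)

def observed_correlation(interacting):
    base = brownian(l)
    x_a = base + rng.normal(0.0, sigma, size=base.shape)
    x_b = (base if interacting else brownian(l)) + rng.normal(0.0, sigma, size=base.shape)
    # Average the per-dimension Pearson correlations of the noisy trajectories.
    return np.mean([np.corrcoef(x_a[:, d], x_b[:, d])[0, 1] for d in range(2)])

for label, flag in (("interaction", True), ("no interaction", False)):
    print(label, round(np.mean([observed_correlation(flag) for _ in range(1000)]), 3))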
The results in the situation of complete interaction are shown in Figure A1a, where for each position along the trajectories the percentage of trajectories where the scanning window method has
detected interaction is shown. The scanning window method finds interaction 99% of the time in the middle of the trajectories. Towards the trajectory extremities, the method performs worse, reaching
80% at the trajectory start and end point. This can be explained by the smaller number of windows that correspond to the trajectory extremities.
As shown in Figure A1a, these trajectories were also analysed with an earlier reported object based colocalization method that makes use of a maximum distance $d_{\max} = 1.65\sqrt{2}\,\sigma$ to decide whether or
not there is interaction at a particular position [26]. At almost all positions, the colocalization method finds interaction 81% of the time.
Similarly, it was tested if the scanning window method can correctly detect the absence of interaction, the results of which are shown in Figure A1b. The scanning window method finds that less than
1% of the trajectories are interacting at the trajectory start and end points (i.e., false positives). However, away from the trajectory extremities, the scanning window method performs worse, going
up to 11% in the middle of the trajectories. The object based method with maximum distance $d_{\max} = 1.65\sqrt{2}\,\sigma$ finds that 81% of the trajectories are interacting at the first position, since the
trajectories were simulated to start in the same position. From position 2, this percentage drops and becomes smaller than 10% from position 5.
For large relative localization errors, the performance of the scanning window method can thus be affected, leading to a somewhat higher probability to detect false positives. In this situation, the
results of the scanning window method should thus be interpreted with care.
Dual colour SPT measurements were performed on a mixture of yellow-green and dark red fluorescently labelled 0.1 μm diameter beads (FluoSpheres, Molecular Probes, Gent, Belgium). Afterwards, the
scanning window method was used to search for interaction between the bead trajectories. This provides a negative control, since no interaction is expected between the yellow-green and dark red beads.
The microscope sample was prepared by diluting the bead mixture in water and applying 5 μL between a microscope slide and a cover slip with a double-sided adhesive spacer of 120 μm thickness
(Secure-Seal Spacer, Molecular Probes, Bleiswijk, The Netherlands) in between. The dual colour SPT experiments were carried out on a custom-built laser widefield epi-fluorescence microscope set-up
that is described elsewhere in detail [33]. Ten movies of 6 seconds were recorded at a speed of 35 frames per second and with an image acquisition time of 6 ms. The camera frame rate was obtained by
selecting a subregion on the CCD chip of 256 by 512 pixels. After recording the dual colour SPT movies, the images in the two different colours were aligned using an affine transformation, calculated
from an image of immobilized beads (TetraSpeck, Molecular Probes, Gent, Belgium) that are fluorescent in both colours. The individual beads were identified in each image of all movies by image
processing performed in Matlab, as explained in detail elsewhere [33]. After determining the bead positions by an intensity weighted centre algorithm, the bead trajectories were reconstructed by a
nearest neighbour algorithm. The average localization precision for the intensity weighted centres was calculated as explained in detail elsewhere [34].
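The localization and linking steps themselves are only cited here ([33,34]); the fragment below (Python, written only to illustrate the kind of processing meant by an intensity weighted centre and nearest neighbour linking, not the authors' actual code) shows minimal versions of both.

import numpy as np

def intensity_weighted_centre(patch):
    # Centroid of a small image patch around a detected bead, weighted by pixel intensity.
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return np.array([(ys * patch).sum() / total, (xs * patch).sum() / total])

def link_nearest_neighbour(prev_positions, new_positions):
    # Connect each position of the previous frame to its nearest position in the new frame.
    # A full tracker would also enforce a maximum jump and handle appearing/disappearing beads.
    return [int(np.argmin(np.linalg.norm(new_positions - p, axis=1))) for p in prev_positions]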
The scanning window method was applied to all possible pairs of trajectories. Note that no restriction was imposed on the distance between the trajectories, which is possible with the scanning window
method because correlation is translation independent. Using the scanning window method, a pair of trajectories was considered to interact when interaction was found in at least one window. The
average percentage of trajectory pairs in a dual colour SPT experiment that are found to interact by the scanning window is shown in Figure A2a, together with the results obtained with the full
trajectory method [28]. The scanning window method identified only 0.5% of the trajectory pairs as interacting, and the full trajectory method performs even slightly better with only 0.1% of
trajectory pairs detected as interacting. The scanning window method thus correctly predicts the absence of interaction.
Validation simulations for interaction and no interaction in case of a large relative localization error. The percentage of 1000 pairs of simulated Brownian motion trajectories where the scanning
window method has found interaction is shown for each position along the trajectories (black line), in case of (a) interaction, and (b) no interaction. All simulated trajectories have a length l =
20, a diffusion coefficient D = 1 μm^2/s, and a time interval τ = 0.1 s between successive positions. The localization precision was chosen σ = 223.6 nm, corresponding to a relative localization
error of r = 0.5. The same trajectories were also analysed with an object based colocalization method with $d_{\max} = 1.65\sqrt{2}\,\sigma$ as maximum distance (purple line).
The scanning window method applied to dual colour SPT measurements on a mixture of yellow-green and dark red 0.1 μm diameter beads diffusing in water as an experimental negative control. (a) The
average percentage of trajectory pairs in a dual colour SPT experiment that are found to interact by the scanning window and the full trajectory method. Both values were calculated from 10 dual
colour SPT experiments and the error bars correspond to the standard deviation. All possible trajectory pairs were analysed, i.e., there was no restriction on the distance between two trajectories.
Using the scanning window method, a pair of trajectories was considered to interact when interaction was found in at least one window; (b) An overlay image and the corresponding trajectories of one
dual colour SPT measurement are shown (see Supplementary Movie 3). The yellow-green beads are represented by green trajectories and the dark red beads are represented by red trajectories. The
scanning window method did not find interactions between the trajectory pairs for this particular movie. | {"url":"http://www.mdpi.com/1422-0067/14/8/16485/xml","timestamp":"2014-04-19T12:15:10Z","content_type":null,"content_length":"187425","record_id":"<urn:uuid:0e272c0b-73db-479d-ab80-8957dcf5d7a0>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
Neural Networks and Clojure
Neural Networks (ANNs) attempt to "learn" by modelling the behaviour of neurons. Although neural networks sound cool, there is no magic behind them!
Invented in 1957 by Frank Rosenblatt, the single layer perceptron network is the simplest type of neural network. The single layer perceptron network is able to act as a binary classifier for any linearly separable data set.
The SLP is nothing more than a collection of weights and an output value. The Clojure code below allows you to create a network (initially with zero weights) and get a result from the network given
some weights and an input. Not very interesting.
(defn create-network
  [in]
  (repeat in 0))
(defn run-network
[input weights]
(if (pos? (reduce + (map * input weights))) 1 0))
The clever bit is adapting the weights so that the neural network learns. This process is known as training and is based on a set of data with known expectations. The learning algorithm for SLPs is
shown below. Given an error (either 1 or -1 in this case), adjust the weights based on the size of the inputs. The learning rate
decides how much to vary the weights; too high and the algorithm won't converge, too low and it'll take forever to converge.
(def learning-rate 0.05)
(defn- update-weights
  [weights inputs error]
  (map
   (fn [weight input] (+ weight (* learning-rate error input)))
   weights inputs))
Finally, we can put this all together with a simple training function. Given a series of samples and the expected values, repeatedly update the weights until the training set is empty.
(defn train
  ([samples expecteds] (train samples expecteds (create-network (count (first samples)))))
  ([samples expecteds weights]
     (if (empty? samples)
       weights
       (let [sample (first samples)
             expected (first expecteds)
             actual (run-network sample weights)
             error (- expected actual)]
         (recur (rest samples) (rest expecteds) (update-weights weights sample error))))))
So we have our network now. How can we use it? Firstly, let's define a couple of data sets both linearly separable and not.
jiggle adds some random noise to each sample. Note the cool # syntax for a short function definition (I hadn't seen it before).
(defn jiggle [data]
(map (fn [x] (+ x (- (rand 0.05) 0.025))) data))
(def linearly-separable-test-data
  [(concat
    (take 100 (repeatedly #(jiggle [0 1 0])))
    (take 100 (repeatedly #(jiggle [1 0 0]))))
   (concat
    (repeat 100 0)
    (repeat 100 1))])
(def xor-test-data
  [(concat
    (take 100 (repeatedly #(jiggle [0 1])))
    (take 100 (repeatedly #(jiggle [1 0])))
    (take 100 (repeatedly #(jiggle [0 0])))
    (take 100 (repeatedly #(jiggle [1 1]))))
   (concat
    (repeat 100 1)
    (repeat 100 1)
    (repeat 100 0)
    (repeat 100 0))])
If we run these in the REPL we can see that the results are perfect for the linearly separable data.
> (apply train linearly-separable-test-data)
(0.04982859491606148 -0.0011851610388172009 -4.431771581539448E-4)
> (run-network [0 1 0] (apply train linearly-separable-test-data))
0
> (run-network [1 0 0] (apply train linearly-separable-test-data))
1
However, for the non-linearly separable they are completely wrong:
> (apply train xor-test-data)
(-0.02626745010362212 -0.028550312499346104)
> (run-network [1 1] (apply train xor-test-data))
0
> (run-network [0 1] (apply train xor-test-data))
0
> (run-network [1 0] (apply train xor-test-data))
0
> (run-network [0 0] (apply train xor-test-data))
0
The neural network algorithm shown here is really just a gradient descent optimization that only works for linearly separable data. Instead of calculating the solution in an iterative manner, we could have just arrived at an optimal solution in one go.
More complicated networks, such as the multi-layer perceptron network have more classification power and can work for non linearly separable data. I'll look at them next time! | {"url":"http://www.fatvat.co.uk/2009/05/neural-networks-and-clojure.html","timestamp":"2014-04-18T10:34:34Z","content_type":null,"content_length":"81402","record_id":"<urn:uuid:ee7b2a8c-3674-4c8f-85e5-c5f65ed08503>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
La Mesa, CA
Find a La Mesa, CA ACT Tutor
...As a Kansas-credentialed teacher, I have taught Trigonometry in the classroom and to tutors for pay. I'm interested in helping your student do well on the SAT and ACT tests. As a result of my
test scores, I received scholarships to two universities.
9 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...I received 5's in both of the AP calculus tests, and am a UCSD biology student and so use Calculus on a regular basis in my classes. I have gone through math courses up until the higher levels
of calculus. I still use geometry in many of my science courses as part of my biology major at UCSD. I took honors precalculus in high school, and passed with an A.
42 Subjects: including ACT Math, reading, English, writing
...I specialize in tutoring mathematics. I have tutored all math subjects from basic arithmetic past advanced calculus and differential equations since 2009. I love helping students find their
own learning style, and giving them the tools to learn the subject on their own without me!
37 Subjects: including ACT Math, calculus, geometry, statistics
...There are few jobs more rewarding than tutoring, and I have been lucky enough to tutor a diverse group of students throughout my career. I specialize in tutoring English at all levels,
particularly in writing skills and reading comprehension. I also specialize in tutoring for GRE, SAT, and ACT ...
29 Subjects: including ACT Math, chemistry, English, algebra 1
...Math is just one of my interests and accomplishments. I am well versed in Spanish as well, including grammar and literature. I studied art and history in Spain at the University of Barcelona
for a year.I have taken and passed the CBEST exam.
11 Subjects: including ACT Math, Spanish, algebra 2, SAT math
| {"url":"http://www.purplemath.com/la_mesa_ca_act_tutors.php","timestamp":"2014-04-17T15:55:38Z","content_type":null,"content_length":"23666","record_id":"<urn:uuid:e7a854cd-c013-4430-801e-841ce10c9f59>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
pgfplots - Create normal/logarithmic plots in two and three dimensions for LaTeX/TeX/ConTeXt.
pgfplotstable - Loads, rounds, formats and postprocesses numerical tables.

PGFPlots draws high--quality function plots in normal or logarithmic scaling with a user-friendly interface directly in TeX. The user supplies axis labels, legend entries and the plot coordinates for one or more plots and PGFPlots applies axis scaling, computes any logarithms and axis ticks and draws the plots. It supports line plots, scatter plots, piecewise constant plots, bar plots, area plots, mesh-- and surface plots, patch plots, contour plots, quiver plots, histogram plots, polar axes, ternary diagrams, smith charts and some more. Pgfplots is based on Till Tantau's package PGF/TikZ (pgf).

Pgfplotstable displays numerical tables rounded to desired precision in various display formats, for example scientific format, fixed point format or integer, using TeX's math facilities for pretty printing. Furthermore, it provides methods for table postprocessing.

Please take a look at doc/latex/pgfplots/pgfplots.pdf and doc/latex/pgfplots/pgfplotstable.pdf.

REMARK
The virus killer Avira Antivir reports the virus HTML/Malicious PDF.Gen in pgfplots.pdf. This is a FALSE ALARM caused by the pgfplots library for interactive, clickable plots (based on javascript). It is virus-free.

Copyright 2007-2012 by Christian Feuersaenger.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

HISTORY:

1.10:
- new feature: fill between plots (library fillbetween)
- new feature: concatenate intersection segments (library fillbetween)
- fixed bug: xelatex failed to run contour external
- fixed incompatibility with \label and \usepackage{mcaption}
- fixed bug: histograms produced wrong point meta
- fixed bug: histograms reported the wrong 'plot name' and confused shifts of bar plots

1.9:
- new feature: asymmetric error bars
- new feature: activated math parser for axis limit arguments, arguments in axis cs, and domain argument in log plots
- new feature: stacked bar plots place their 'nodes near coords' correctly in the middle and print the increment (compat=1.9)
- new feature: stacked bar plots suppress empty increments (compat=1.9).
- new feature: 'scatter/position=relative|absolute' allow to position 'nodes near coords' absolutely. use-case: bar plots + nodes near coords which are at, say, y=0 rather than their y value
- new feature: integration of smooth shadings & auto-CMYK conversion \usepackage[cmyk]{xcolor} or \selectcolormodel{cmyk} will reconfigure pgfplots to use CMYK (document-wide)
- new feature (advanced audience only): programmatic access to data coordinates during the visualization phase -> allows much more customization for error bars, stacked plots, nodes near coords.
- wrote beginner tutorials
- fixed bug: error bars and point meta did not work together
- fixed bug: stacked plots did not respect 'visualization depends on'
- fixed bug: luatex 0.76 is not backwards compatible; added version switch
- fixed bug: ternary library precision has been improved
- fixed bug: problem with axis limits very close to 0
- fixed bug: colormap specification limit case produced out of bounds exception

1.8:
- new feature: tight bounding box even if the axis is no box and bb excludes clip path
- new feature: mesh/color input=explicit
- new feature: shader=interp now has drivers for both dvipdfmx and xetex
- new feature: support for more color spaces in colormap definitions
- new feature: shader=interp and device-level gray colorspaces
- new feature: 'contour/contour dir=[xyz]' to draw contours in different directions
- new feature: statistics library with boxplot handler (both boxplot prepared and automatic computation)
- fixed bug: 3d centered axis lines and label placement (requires compat=1.8 or higher)
- fixed bug: axis lines and placement of labels, tick scale labels, and reversed axes (requires compat=1.8 or higher)
- fixed bug: filtering out coords from a mesh plot failed
- fixed bug: every legend image post was not respected inside of \ref{plotlabel}
- fixed bug: high-order patches computed the shader=flat mean in a wrong way.
- fixed bug: remember picture inside of pgfplots axes failed (due to cell picture)
- fixed bug: now, the tick scale label will be omitted if there are no ticks
- fixed bug: axis box path was not closed
- fixed bug: the bounding box was non-empty even if the axis was hidden.
- fixed bug: auto-alignment of nodes near coords failed for xbar plots
- fixed bug: providing bar width / bar shift in terms of axis units did not work with [xy]bar and nodes near coords
- fixed bug: transformation 'data cs=cart' -> polar is more robust now
- fixed bug: code did not compile against pgf 2.00
- fixed bug: patch plot lib and shader=interp,patch type=biquadratic
- fixed bug: context path searching issue (pgfplots.lua)
- fixed bug: shader=interp and dvips driver
- fixed bug: error bars with explicit relative input failed

1.7:
- added feature: 'bar shift' and 'bar width' can now be expressed in terms of axis units (compat=1.7 or higher)
- fixed incompatibility regression pgfplots 1.6.1 pgf 2.10: layers
- fixed incompatibility pgfplots and imakeidx
- added feature: 'enlargelimits={abs=1cm}', i.e. enlarge by dimension rather than unit
- patchplots lib: added patch type=bicubic
- patchplots lib: added support for global paths (fillable)
- patchplots lib: added patch type sampling feature
- patchplots lib: improved usability (documentation and improvements)
- fixed path issues in context: moved lua input file to tex/generic
- fixed bug: \ref{legendimage} inside of legend text was wrong.

1.6.1:
- fixed incompatibility lualatex, shader=interp, and german package (introduced in 1.6)

1.6:
- added support for layered graphics (main use case: multiple axes and layers)
- added support for second colormap in mesh plots (mesh/interior colormap name)
- added support for scopes inside of axes
- contour plots: added ability to provide list of discrete labels (mesh/levels)
- empty lines are interpreted as interruptions in data plots (was undocumented since 1.4)
- added more scaling options to 'scale mode=scale uniformly' (affects axis equal in 3d and \addplot3 graphics)
- fixed wrong implementation of 'axis equal' and 'unit vector ratio' in 3d (backwards compatible for 2d, but not for 3d - the 3d implementation was plain wrong)
- fixed incompatibility of lualatex and shader=interp
- fixed bugs/added features around \addplot3 graphics
- fixed bug: colorbar did not support ymode=log
- fixed a couple of minor bugs
- fixed bounding box computation for clip=false,axis lines=none

1.5.1:
- more operations for FPU library (==, !=,<=,>=,?)
- fixed bug in usage of decorations in \addplot
- bugfix for contour prepared format=matlab
- added 'const plot mark mid' and 'jump mark mid' plot handlers
- nodes on a plot (\addplot ... node[pos=] {};)
- 'trim axis group left' and 'trim axis group right'
- bugfixes for polar axes and log+stacked plots
- added style 'log ticks with fixed point'
- introduced patched tikz paths to simplify circles and ellipses within an axis
- patchplots lib: patch type=polygon
- some more bugfixes

1.5:
- Contour plots,
- Histograms,
- Quiver plots,
- patch plots (library)
  - Triangle Meshes
  - Bilinear Elements
  - Quadratic Triangles
  - Biquadratic Quadrilaterals
  - Coons Patches
- Discrete colorbars,
- Table sorting,
- Linear regression,
- Ternary diagrams,
  - Tieline Plots
- Smith Charts
- Polar axes,
- Empty lines in input files result in interrupted plots,
- PDF user defined coordinate mouse popups
- CMYK colormaps and shadings,
- new markers and cycle lists
- access to axis limits,
- \addplot3 graphics: pgfplots draws an appropriate axis for a three-dimensional(!) external png graphics
- 3D axes: support to provide explicit unit vectors:
  - explicit unit vectors
  - explicit unit vectors which are uniformly rescaled to match width/height
- 3D axes: improved support for unit vector ratios
- improvements of the groupplot styles
- preliminary support for (2d) bar plots in 3d axes
- new shader 'faceted interp'
- table package:
  - 'every nth row' style
  - 'comment chars' key to define comment characters in input files
  - 'skip first n' style
- lots of smaller bugfixes (see ChangeLog for details)

1.4.1:
- improved compatibility to gnuplot 4.4

1.4:
Version 1.4 contains several new features, mostly work on details. It fixes many bugs and provides the following improvements:
- detached legends
- detached colorbars
- ybar (and similar plots) can now be mixed with other plot types like line plots.
- improved legend formatting
- added 'restrict x to domain*' which cups coordinates outside of a specified domain (same for y and z)
- Added support for linear regression
- Inline tables,
- Lots of bug fixes
The next version will make a greater step when it is stable.

1.3.1:
Version 1.3.1 is a bugfix release containing
- improved parametric plots with gnuplot
- improved normalsize, small and footnotesize scale styles and added tiny
- a lot of bugfixes

1.3:
- improvements for two dimensional visualization, among them
  - axis equal,
  - color bars,
  - nodes near coords,
  - jumps in plots,
  - improved description positioning,
  - reverseable axis directions,
  - simpler alignment of adjacent axes,
  - units and a simplified user interface,
- new three dimensional line, scatter, mesh and surface plots,
- a copy of the automatic pdf externalization library,
- an improved manual enhanced with a lot of pdf cross references.

1.2.2:
- fixed a problem with the samples key,
- provides some smaller fixes and some manual improvements.
- added plot graphics.

1.2:
- completely rewritten math expression parser with extended data range,
- colormaps for scatter plots
- fine tuning for plot parameters.
- table package has been extended and is now a fully featured table typesetting, computing and postprocessing tool. | {"url":"http://ctan.math.utah.edu/ctan/tex-archive/graphics/pgf/contrib/pgfplots/README","timestamp":"2014-04-19T09:56:53Z","content_type":null,"content_length":"10843","record_id":"<urn:uuid:faaa019a-549c-4846-9519-f6f8c11127f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Berwyn Heights, MD SAT Math Tutor
Find a Berwyn Heights, MD SAT Math Tutor
...Writing: During my previous job I was required to write multitudes of technical reports. I am very skilled with technical writing, and I can certainly help with analytic writing. I achieved a
750 on the SAT math section.
32 Subjects: including SAT math, English, reading, calculus
I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a
single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi...
15 Subjects: including SAT math, calculus, physics, GRE
...At Williams College I studied Statistics, and the course of study included probability. I have tutored both statistics and math students in the field of probability. I have studied
econometrics at Williams College as part of my Economics major.
21 Subjects: including SAT math, statistics, geometry, algebra 1
I have received my BA from The George Washington University a few years ago and now am attending George Mason University pursuing a chemistry degree to finish medical school pre-requisites. I was
on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help student...
17 Subjects: including SAT math, chemistry, physics, calculus
...As an undergraduate, I tutored peers in Spanish including grammar, writing, and speaking skills. I studied abroad for 4 months in Madrid, Spain. While there I volunteered with Helenski Espana,
a human rights group.
17 Subjects: including SAT math, Spanish, writing, physics
| {"url":"http://www.purplemath.com/Berwyn_Heights_MD_SAT_math_tutors.php","timestamp":"2014-04-21T04:51:58Z","content_type":null,"content_length":"24341","record_id":"<urn:uuid:e988471d-3471-4c31-b796-1cf1f0f0a580>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
MathFiction: The Girl with the Celestial Limb (Pauline Melville)
Although recognized as mathematically talented in school, Jane Cole hid from all things intellectual after having a frightening epiphany regarding infinity. Math, however, seemingly exacts its
revenge on her when her leg turns into a "dazzlingly intricate, three-dimensional network of geometric shapes" in this short story.
Aside from the obviously fantastical aspect of her leg becoming a mathematical object, the characters who come to Jane's aid in the story also are obviously somewhat unreal. The first is a stranger
who offers a business card reading "J.F. WIDDERSHINS / GEOMETER AND MAKER OF DIVIDERS AND POLYHEDRAL SUNDIALS". He somehow recognizes her as a person who should know mathematics (though she has
become used to the catch phrase "dunno really" since intentionally suppressing her intellect):
(quoted from The Girl with the Celestial Limb)
`Well, well,' said Widdershins, kneeling to inspect the mathematical limb more closely. `No wonder the shoe and sock were painful -- all those acute angles. Has the pain abated somewhat?'
Jane nodded. Widdershins regarded her quizzically: `You seem surprised. Why is that?' he asked.
Jane searched for the vocabulary she had jettisoned. It failed to arrive: `Dunno really,' she said.
`Come, come.' His tone was mildly reproving. `You are one of us. I think you have always known that there is nothing real except mathematics. May I help myself to one of your chips?'
Jane still held the warm, greasy packet of fish and chips to her chest. She opened it and he took one, pulling up one of the armchairs to sit beside her in the evening light:
`I am a pure mathematician. A classicist. The pure mathematician is the only one in direct contact with relaity. The reality of pure mathematics lies outside us. The area of a circle is πr^2 not
because our minds are shaped one way or another but because it is so, because mathematical reality is built that way....'
Widdershins calls for assistance from an acquaintance named Hoodlum who describes himself as a "Rock and roller...anarcho-syndicalist and collector of irrational numbers." At one point, he points to
one of the perfect unit squares in what was formerly Jane's leg and says `Which means that the diagonal of your one centimetre-sided square is the square root of two. An irrational number. It don't
exist.' (I am not sure what the author meant us to make of this remark. Since the Pythagoreans, there have not been many people who insist that irrational numbers do not exist!)
The next `expert' to get a look at Jane's leg is a self-described `nuts and bolts man', differentiating himself from the `airy-fairy' mathematicians. But, he brings questionable interpretations of
quantum mechanics into it, claiming for example that unless she observes her leg it does not exist. General relativity also comes into it when he suggests that her leg has become a black hole.
This story appeared in Shape-Shifter, the first collection of short stories by the actress Pauline Melville. The collection won some awards, and the story was entertaining enough to read. However, at
least from my perspective of viewing it as an example of mathematical fiction, this story was not particularly special.
In fact, the only thing it really makes me think about is other stories that are somehow similar. In the way that it describes the suffering of a person who has developed a mathematical "illness", it
reminds me of Connie Willis' haunting Schwarzschild Radius. Widdershins' description of mathematics as reality made me think of The Mathenauts. Moreover, the caricatured representation of the experts
and the humorous encroachment of the mathematics of infinity into daily life reminded me of Art Thou Mathematics?.
Though the sentences in Melville's story are as beautifully written (if not more so) than those in the other stories mentioned above, the ideas themselves are not presented as interestingly here as
in those previous works. | {"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf973","timestamp":"2014-04-16T16:11:13Z","content_type":null,"content_length":"12926","record_id":"<urn:uuid:7539b9db-5443-4558-a0a8-8750d4dbb402>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about 100000 baccarat shoes on ImSpirit
A friend sent me a computer application Baccarat Advantage (baccaratadvantage.com), a black box which outputs bet placement instructions based on inputted Banker and Player hand total values.
The creator of Baccarat Advantage calls himself “Dr. Jacob Steinberg,” and claims to have achieved a positive expectation after running 1 million live shoes through his program. Because a user of
the program only accesses the program’s interface, where he enters the total hand values and receives the outputted betting instructions, the actual method of determining the bet placement is never
revealed, and he can only use the application to play at online casinos, never at physical, brick-and-mortar casinos. In addition to the purchase price, Dr. Steinberg also requires his customers to
share a percentage of their winnings from using the program, and the exact purchase price depends on what percentage the customer agrees to pay. In this way, he justifies selling the program, since
he can in principle exponentially increase his income through his customers’ winnings while never actually revealing to them the exact method of his supposed holy grail.
Despite what Dr. Steinberg claims, my tests of his Baccarat Advantage objectively demonstrate that it quickly yields negative expectancies, and if he really did win a million live shoes with the
method in his program, those shoes must have been unbelievably favorably biased to it. On the surface, his procedure appears to utilize information from hand totals and not just the (mathematically
doomed to fail) pattern of P/B wins. Thus, there is a faint glimmer of card-counting potential, though the program does not fully account for the values of each card. However, whatever his method
is, my tests show that it consistently yields negative expectancies and can no better win at baccarat than any other method tested to date.
A picture of the Baccarat Advantage user interface:
Upon activating the program, a brief splash window encourages the player with an affirmative tagline, Because winning feels so good. Play commences as follows: At the beginning of each shoe, the
user enters the first four hand totals of the first two decisions in the “Second-to-Last Hand” and “Last Hand” rows. The program then outputs a bet placement instruction in the “Now Bet” row. The
user then presses the “Won,” “Tie,” or “Lost” button according to the result of the bet, upon which the values in the “Last Hand” row moves up to the “Second-to-Last Hand” row, clearing the “Last
Hand” row and making it available for fresh input. He then enters the next set of hand totals in the now emptied “Last Hand” row and repeats the process. After the end of each shoe, the “Reset”
button is pressed, all the input boxes clear, and the process begins again for the next shoe.
There are two money management options: 1) Flat Betting and 2) Progression Betting. Based on $1 units, flat betting requires a $50 bankroll, while progression betting requires a $164 bankroll. The
progression (as seen in the picture above) is a simple 7-level Martingale, padded at certain levels to cover Banker commissions. Flat betting is described as a “slow winner,” but requiring “lower
bankroll,” while progressive betting is touted as an “infallible, quick winner,” but requiring “higher bankroll.” Either way, Dr. Steinberg claims his method consistently beats the house edge and is
the only true (and easy) way to win baccarat.
For the most part, the program consistently outputs the same bet placement instructions per set of inputs. I write, “for the most part,” because there were some minor glitches which would sometimes
result in the same set of inputs resulting in different bet placement instructions. The apparent glitches may occur when one changes an inputted value, and then back to the original value again,
whereupon the opposite bet placement instruction would sometimes appear. This is clearly an operational bug on the part of the programmer, assuming there is supposed to be a one-to-one
correspondence between unique input values and output bet placement instructions. Otherwise, the program appeared to be consistent enough for the most part to be tested against data sets of baccarat
To examine the long term performance of Dr. Steinberg’s baccarat black box, I created a script to automate the inputting and reading of the outputted bet placement instructions for 12,100 baccarat
shoes. Roughly 3-4 shoes per minute could thus be accurately inputted and analyzed, and the entire testing took place over several days and nights. My automation script performs exactly what a
human would do, entering the total hand values decision by decision, reading the outputted bet placement instructions, pressing the “Won,” Tie,” or “Lost” button according to the result of the bet,
and pressing “Reset” after the end of the shoe before starting the next one. For the testing, I used the Flat Bet mode, since no progression will be able to consistently help a method which cannot
win flat betting. For verification, I sent the data output to my friend, who double-checked and confirmed he was getting the same outputted bet placements from the program.
A sample of the data output follows:
1st column: shoe number
2nd column: decision number
3rd column: Banker hand total
4th column: Player hand total
5th column: decision P/B/T winner
6th column: Baccarat Advantage outputted bet placement
1 1 5 2 B -
1 2 1 3 P -
1 3 9 5 B P
1 4 3 8 P P
1 5 6 2 B B
1 6 5 5 T B
1 7 0 9 P -
1 8 8 7 B B
1 9 9 5 B B
1 10 0 6 P P
1 11 5 9 P B
1 12 6 6 T P
1 13 9 2 B -
1 14 6 7 P P
1 15 9 3 B P
1 16 0 5 P P
1 17 0 0 T P
1 18 2 1 B -
1 19 0 9 P P
1 20 5 8 P B
1 21 6 8 P P
1 22 7 9 P P
1 23 0 9 P P
1 24 8 1 B B
1 25 9 5 B B
1 26 8 6 B B
1 27 6 9 P B
1 28 1 9 P B
1 29 6 5 B B
1 30 3 8 P B
1 31 0 5 P B
1 32 3 9 P B
1 33 0 9 P B
1 34 8 4 B B
1 35 3 7 P B
1 36 8 6 B P
1 37 9 7 B B
1 38 3 8 P B
1 39 5 9 P B
1 40 4 6 P P
1 41 8 8 T P
1 42 3 9 P B
1 43 9 2 B B
1 44 7 5 B P
1 45 9 6 B B
1 46 0 2 P P
1 47 1 7 P P
1 48 9 0 B B
1 49 9 6 B P
1 50 8 0 B B
1 51 2 2 T P
1 52 4 7 P B
1 53 8 8 T B
1 54 9 5 B B
1 55 5 6 P P
1 56 9 6 B P
1 57 5 8 P P
1 58 4 4 T B
1 59 7 6 B B
1 60 6 5 B P
1 61 0 9 P P
1 62 1 7 P B
1 63 0 8 P P
1 64 1 6 P B
1 65 0 9 P P
1 66 8 6 B B
1 67 6 6 T B
1 68 4 6 P P
1 69 4 3 B B
1 70 0 9 P B
1 71 5 1 B B
1 72 3 4 P B
1 73 4 2 B P
1 74 4 8 P P
1 75 8 7 B B
1 76 7 8 P B
1 77 2 2 T P
1 78 9 4 B P
1 79 8 6 B P
1 80 6 6 T B
1 81 6 4 B P
1 82 3 7 P P
Notice that sometimes the program does not bet after a Tie, and sometimes it does.
The numerical and graphical results of the testing are presented in Simulation Series 31 Results. Three sets of data were examined, each one consisting of slightly different Banker (B) and Player
(P) compositions. As the following plots show, the qualitative behavior of the Baccarat Advantage Player’s Advantages (P.A.’s, the net units won after commissions divided by the total amount bet)
depend on the B/P compositions of the data set, though all are consistently negative.
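The flat-bet scoring behind these P.A. values is easy to restate in code. The sketch below (in Python; the standard 5% Banker commission is assumed, since the exact commission used in the testing is not stated) tallies net units and the P.A. from (winner, bet) pairs like the logged columns shown earlier.

def player_advantage(rows, commission=0.05):
    # rows: (winner, bet) pairs; winner is 'P', 'B' or 'T', bet is 'P', 'B' or '-' (no bet).
    # Ties push the bet; a winning Banker bet pays 1 - commission.
    net, total_bet = 0.0, 0
    for winner, bet in rows:
        if bet == '-':
            continue
        total_bet += 1
        if winner == 'T':
            continue
        if winner == bet:
            net += (1 - commission) if bet == 'B' else 1.0
        else:
            net -= 1.0
    return net, (100.0 * net / total_bet if total_bet else 0.0)

# The first six logged decisions of shoe 1 above:
print(player_advantage([('B', '-'), ('P', '-'), ('B', 'P'), ('P', 'P'), ('B', 'B'), ('T', 'B')]))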
Data Set 1 had a slightly lower-than-average numbers of Bankers, 50.61% Bs and 49.39% Ps (not counting Ties). In this set, the P.A.’s of the Baccarat Advantage bet placements are always consistently
worse than the expectancies for B, while the P.A.’s of the opposite of the Baccarat Advantage bet placements are always consistently better than those of P. (Because of the lower-than-average
numbers of Bs in Set 1, the expectancies for B is always more negative than that of P.) The following plot which graphs the evolution of the P.A.’s over the numbers of shoes tested shows that in Set
1, Baccarat Advantage P.A.’s (blue for the program’s output, and red for the opposite of the program’s output) mostly lie to the outside of the Banker and Player P.A.s (yellow for B, and green for
Data Set 2 has slightly more Bs overall than Data Set 1, 50.69% Bs and 49.31% Ps (not counting Ties), which is closer to what are the “average” proportions. As the following plot shows, the blue and
red Baccarat Advantage P.A. lines begin to fall in-between the yellow and green Banker and Player P.A. lines.
Data Set 3 has 50.83% Bs and 49.17% Ps (not counting Ties), which is more Bs than “average,” and for the most part, the Baccarat Advantage P.A.’s are always comparable to or no better than the
standard B/P expectancies. Thus, in the plot below, the blue and red lines are mostly contained within the yellow and green lines. In various, brief stretches, the Baccarat Advantage P.A’s became
slightly more positive that that of B, and the opposite bet placement’s P.A.’s correspondingly became slightly worse than P, which is the opposite of what occurred in Data Set 1.
Notice that the line of symmetry between the opposing pairs in the above P.A. graphs is about half way between -1.0 and -1.5. This line of symmetry is due to the built-in tilt of the game in the
house’s favor, and the fact that it is negative is the reason why “just bet opposite” a losing method does not win either. In a perfectly fair 50/50 game, the line of symmetry would be at exactly
P.A.=0, meaning in such a zero-expectancy game in the long run, you always have equal chances of being net positive or negative, and on average, break-even. However, in baccarat, the house’s edge
skews everything in the house’s favor, and you expect to always be net negative in the long run, no matter how you play. In other words, if you can’t consistently win a perfectly fair 50/50,
zero-expectancy game in the long run, you have even less mathematical hope of winning a negative expectancy game in the long run.
Despite the above qualitative differences due to B and P compositions in the shoe, the resulting P.A.s are always negative, and the Baccarat Advantage method itself loses all data sets quite
unceremoniously, as the following plots of the net score versus shoe clearly show:
I did not test Baccarat Advantage’s progression mode systematically, but I compared a few shoes to demonstrate that the bet placement output from the program in progression mode is exactly the same
as in flat bet mode; only the bet amount differs according to a win or loss, and the progression is the straightforward 7-level Martingale.
Just for fun, I let my automation script run through a small data set in the progression betting mode. In these runs, the required bet amounts would bust past the 7 levels of the progression every
few dozen shoes, and a rather sad pop-up box would announce, “Sorry, you have lost. Start over.” Tracking the exact results of the 7-level progression over my testing results would have been a
routine exercise, but completely not worth the time or effort.
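For completeness, the 7-level ladder itself is trivial to model. The sketch below (Python; the commission padding that brings the bankroll from 127 to 164 units is not reproduced, and commission on winning Banker bets is ignored) walks a win/loss sequence through the progression and reports when it busts.

def run_martingale(results, levels=(1, 2, 4, 8, 16, 32, 64)):
    # results: sequence of booleans, True when the placed bet won.
    level, net = 0, 0
    for won in results:
        stake = levels[level]
        if won:
            net += stake
            level = 0
        else:
            net -= stake
            level += 1
            if level == len(levels):   # lost level 7: "Sorry, you have lost. Start over."
                return net, True
    return net, False

print(run_martingale([True, False, True] + [False] * 7))   # seven straight losses bust the ladder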
In conclusion, my tests show that Dr. Steinberg’s Baccarat Advantage yields standard negative expectations and is quantitatively no different than any other baccarat method I have tested, except that
in certain situations when the B/P compositions are above or below average, the P.A.’s of the Baccarat Advantage bet placements or its opposite are slightly more positive than expected when always
betting B or P. Unfortunately, the slight improvements in odds in these situations are not enough to be profitably exploited.
Baccarat Advantage is a black box that is at best broken and at worst bogus.
Disclaimer: The betting strategies and results presented are for educational and entertainment purposes only. Gambling involves substantial risks, and the odds are not in the player’s favor by
design. The author does not state nor imply any system, method, or approach offers users any advantage, and he shall not be held liable under any circumstances for any losses whatsoever. | {"url":"https://imspirit.wordpress.com/tag/100000-baccarat-shoes/","timestamp":"2014-04-19T14:39:48Z","content_type":null,"content_length":"192897","record_id":"<urn:uuid:7068acc3-6399-405e-992c-1d9021428cdd>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
equation for transformed function.
September 24th 2008, 02:38 PM
equation for transformed function.
The function h(x) =3(x-3)(x+2)(x-5)
is translated 4 units to the left and 5 units down. Write an equation for the transformed function.
How do I get this function into y = a [ k (x - d) ] ^n + c format?
Or is there another way?
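One way to see the translation, shown here only as an illustrative sketch: moving 4 units left replaces x with x + 4, and moving 5 units down subtracts 5 from the whole function, so

$$g(x) = h(x + 4) - 5 = 3(x + 4 - 3)(x + 4 + 2)(x + 4 - 5) - 5 = 3(x + 1)(x + 6)(x - 1) - 5.$$

Since h has three distinct zeros, it cannot be collapsed into the single-power form y = a[k(x - d)]^n + c; the factored expression above already is the transformed equation.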
September 24th 2008, 02:41 PM
mr fantastic | {"url":"http://mathhelpforum.com/algebra/50498-equation-transformed-function-print.html","timestamp":"2014-04-17T15:47:21Z","content_type":null,"content_length":"4232","record_id":"<urn:uuid:4957a0b8-8c29-400c-b8e6-93075b52e8ca>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
The logic of bunched implications
Results 1 - 10 of 163
, 2001
Cited by 272 (30 self)
We describe an extension of Hoare's logic for reasoning about programs that alter data structures. We consider a low-level storage model based on a heap with associated lookup, update, allocation and deallocation operations, and unrestricted address arithmetic. The assertion language is based on a possible worlds model of the logic of bunched implications, and includes spatial conjunction and implication connectives alongside those of classical logic. Heap operations are axiomatized using what we call the "small axioms", each of which mentions only those cells accessed by a particular command. Through these and a number of examples we show that the formalism supports local reasoning: A specification and proof can concentrate on only those cells in memory that a program accesses.
This paper builds on earlier work by Burstall, Reynolds, Ishtiaq and O'Hearn on reasoning about data structures. 1
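As an illustration of the notation involved (standard separation-logic examples, not text quoted from this particular entry), the small axiom for heap mutation and the frame rule that licenses local reasoning are usually written:

$$\{\,x \mapsto -\,\}\ [x] := v\ \{\,x \mapsto v\,\} \qquad\qquad \frac{\{P\}\ C\ \{Q\}}{\{P \ast R\}\ C\ \{Q \ast R\}} \quad (\text{no variable occurring free in } R \text{ is modified by } C)$$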
- Journal of Applied Logic , 2007
Cited by 183 (21 self)
Abstract. The practice of first-order logic is replete with meta-level concepts. Most notably there are meta-variables ranging over formulae, variables, and terms, and properties of syntax such as
alpha-equivalence, capture-avoiding substitution and assumptions about freshness of variables with respect to metavariables. We present one-and-a-halfth-order logic, in which these concepts are made
explicit. We exhibit both sequent and algebraic specifications of one-and-a-halfth-order logic derivability, show them equivalent, show that the derivations satisfy cut-elimination, and prove
correctness of an interpretation of first-order logic within it. We discuss the technicalities in a wider context as a case-study for nominal algebra, as a logic in its own right, as an
algebraisation of logic, as an example of how other systems might be treated, and also as a theoretical foundation
- THEORETICAL COMPUTER SCIENCE , 2004
Cited by 158 (5 self)
In this paper we show how a resource-oriented logic, separation logic, can be used to reason about the usage of resources in concurrent programs.
, 2000
Cited by 149 (14 self)
Reynolds has developed a logic for reasoning about mutable data structures in which the pre- and postconditions are written in an intuitionistic logic enriched with a spatial form of conjunction. We
investigate the approach from the point of view of the logic BI of bunched implications of O'Hearn and Pym. We begin by giving a model in which the law of the excluded middle holds, thus showing that
the approach is compatible with classical logic. The relationship between the intuitionistic and classical versions of the system is established by a translation, analogous to a translation from
intuitionistic logic into the modal logic S4. We also consider the question of completeness of the axioms. BI's spatial implication is used to express weakest preconditions for object-component
assignments, and an axiom for allocating a cons cell is shown to be complete under an interpretation of triples that allows a command to be applied to states with dangling pointers. We make this
latter a feature, by incorporating an operation, and axiom, for disposing of memory. Finally, we describe a local character enjoyed by specifications in the logic, and show how this enables a class
of frame axioms, which say what parts of the heap don't change, to be inferred automatically.
- Millennial Perspectives in Computer Science , 2000
Cited by 107 (5 self)
Drawing upon early work by Burstall, we extend Hoare's approach to proving the correctness of imperative programs, to deal with programs that perform destructive updates to data structures containing
more than one pointer to the same location. The key concept is an "independent conjunction" P & Q that holds only when P and Q are both true and depend upon distinct areas of storage. To make this
concept precise we use an intuitionistic logic of assertions, with a Kripke semantics whose possible worlds are heaps (mapping locations into tuples of values).
- Theoretical Computer Science , 2004
Cited by 80 (1 self)
Abstract. We present a denotational semantics based on action traces, for parallel programs which share mutable data and synchronize using resources and conditional critical regions. We introduce a
resource-sensitive logic for partial correctness, adapting separation logic to the concurrent setting, as proposed by O’Hearn. The logic allows program proofs in which “ownership ” of a piece of
state is deemed to transfer dynamically between processes and resources. We prove soundness of this logic, using a novel “local ” interpretation of traces, and we show that every provable program is
race-free. 1
- IN PROC. 22ND ANNUAL IEEE SYMPOSIUM ON LOGIC IN COMPUTER SCIENCE (LICS’07), 2007
Cited by 76 (10 self)
Separation logic is an extension of Hoare’s logic which supports a local way of reasoning about programs that mutate memory. We present a study of the semantic structures lying behind the logic. The
core idea is of a local action, a state transformer that mutates the state in a local way. We formulate local actions for a general class of models called separation algebras, abstracting from the
RAM and other specific concrete models used in work on separation logic. Local actions provide a semantics for a generalized form of (sequential) separation logic. We also show that our conditions on
local actions allow a general soundness proof for a separation logic for concurrency, interpreted over arbitrary separation algebras.
- In Proc. of ICALP, volume 2380 of LNCS , 2001
Cited by 62 (5 self)
We study a spatial logic for reasoning about labelled directed graphs, and the application of this logic to provide a query language for analysing and manipulating such graphs. We give a graph
description using constructs from process algebra. We introduce a spatial logic in order to reason locally about disjoint subgraphs. We extend our logic to provide a query language which preserves
the multiset semantics of our graph model. Our approach contrasts with the more traditional set-based semantics found in query languages such as TQL, Strudel and GraphLog.
, 2002
Cited by 62 (5 self)
We develop an explicit two level system that allows programmers to reason about the behavior of effectful programs. The first level is an ordinary ML-style type system, which confers standard
properties on program behavior. The second level is a conservative extension of the first that uses a logic of type refinements to check more precise properties of program behavior. Our logic is a
fragment of intuitionistic linear logic, which gives programmers the ability to reason locally about changes of program state. We provide a generic resource semantics for our logic as well as a
sound, decidable, syntactic refinement-checking system. We also prove that refinements give rise to an optimization principle for programs. Finally, we illustrate the power of our system through a
number of examples.
- IN ESOP’05, LNCS , 2005
Cited by 57 (22 self)
We present a precise correspondence between separation logic and a simple notion of predicate BI, extending the earlier correspondence given between part of separation logic and propositional BI.
Moreover, we introduce the notion of a BI hyperdoctrine and show that it soundly models classical and intuitionistic first- and higher-order predicate BI, and use it to show that we may easily extend
separation logic to higher-order. We also demonstrate that this extension is important for program proving, since it provides sound reasoning principles for data abstraction in the presence of | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=40439","timestamp":"2014-04-18T07:16:53Z","content_type":null,"content_length":"37273","record_id":"<urn:uuid:50426734-a724-4fea-ab05-35aba473d473>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about fundamental theorem of algebra on Not About Apples
Now there’s completeness and then there’s completeness…
15 October 2009
This post achieves a fortuitous segue from the last post into my series of articles on the beauty of Galois theory.
In the previous post I introduced Dedekind cuts as a means of constructing the real number line, and I said that this perspective is responsible for the completeness of the real numbers $\mathbb{R}$.
Now, that was completeness in the topological sense. There is another, very different notion of algebraic completeness.
A number system is called algebraically complete if every polynomial equation in one variable with coefficients from that number system can be solved in that number system.
Leave a Comment » | Articles | Tagged: algebraic completeness, fundamental theorem of algebra | Permalink | {"url":"http://notaboutapples.wordpress.com/tag/fundamental-theorem-of-algebra/","timestamp":"2014-04-21T12:25:05Z","content_type":null,"content_length":"21290","record_id":"<urn:uuid:1dfd6755-fdd5-49b0-89dc-9e161933759f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
HP Fortran for OpenVMS
Language Reference Manual
Chapter 4
Expressions and Assignment Statements
This chapter describes:
4.1 Expressions
An expression represents either a data reference or a computation, and is formed from operators, operands, and parentheses. The result of an expression is either a scalar value or an array of scalar values.
If the value of an expression is of intrinsic type, it has a kind type parameter. (If the value is of intrinsic type CHARACTER, it also has a length parameter.) If the value of an expression is of
derived type, it has no kind type parameter.
An operand is a scalar or array. An operator can be either intrinsic or defined. An intrinsic operator is known to the compiler and is always available to any program unit. A defined operator is
described explicitly by a user in a function subprogram and is available to each program unit that uses the subprogram.
The simplest form of an expression (a primary) can be any of the following:
• A constant; for example, 4.2
• A subobject of a constant; for example, 'LMNOP'(2:4)
• A variable; for example, VAR_1
• A structure constructor; for example, EMPLOYEE(3472, "JOHN DOE")
• An array constructor; for example, (/12.0,16.0/)
• A function reference; for example, COS(X)
• Another expression in parentheses; for example, (I+5)
Any variable or function reference used as an operand in an expression must be defined at the time the reference is executed. If the operand is a pointer, it must be associated with a target object
that is defined. An integer operand must be defined with an integer value rather than a statement label value. All of the characters in a character data object reference must be defined.
When a reference to an array or an array section is made, all of the selected elements must be defined. When a structure is referenced, all of the components must be defined.
In an expression that has intrinsic operators with an array as an operand, the operation is performed on each element of the array. In expressions with more than one array operand, the arrays must be
conformable (they must have the same shape). The operation is applied to corresponding elements of the arrays, and the result is an array of the same shape (the same rank and extents) as the array operands.
In an expression that has intrinsic operators with a pointer as an operand, the operation is performed on the value of the target associated with the pointer.
For defined operators, operations on arrays and pointers are determined by the procedure defining the operation.
A scalar is conformable with any array. If one operand of an expression is an array and another operand is a scalar, it is as if the value of the scalar were replicated to form an array of the same
shape as the array operand. The result is an array of the same shape as the array operand.
The following sections describe numeric, character, relational, and logical expressions; defined operations; a summary of operator precedence; and initialization and specification expressions.
For More Information:
4.1.1 Numeric Expressions
Numeric expressions express numeric computations, and are formed with numeric operands and numeric operators. The evaluation of a numeric operation yields a single numeric value.
The term numeric includes logical data, because logical data is treated as integer data when used in a numeric context. The default for .TRUE. is -1; .FALSE. is 0.
Numeric operators specify computations to be performed on the values of numeric operands. The result is a scalar numeric value or an array whose elements are scalar numeric values. The following are
numeric operators:
┃Operator │ Function ┃
┃** │Exponentiation ┃
┃* │Multiplication ┃
┃/ │Division ┃
┃+ │Addition or unary plus (identity) ┃
┃- │Subtraction or unary minus (negation) ┃
Unary operators operate on a single operand. Binary operators operate on a pair of operands. The plus and minus operators can be unary or binary. When they are unary operators, the plus or minus
operators precede a single operand and denote a positive (identity) or negative (negation) value, respectively. The exponentiation, multiplication, and division operators are binary operators.
Valid numeric operations must have results that are defined by the arithmetic used by the processor. For example, raising a negative-valued real to a real power is invalid.
Numeric expressions are evaluated in an order determined by a precedence associated with each operator, as follows (see also Section 4.1.6):
┃ Operator │Precedence ┃
┃** │ Highest┃
┃* and / │ .┃
┃Unary + and - │ .┃
┃Binary + and - │ Lowest┃
Operators with equal precedence are evaluated in left-to-right order. However, exponentiation is evaluated from right to left. For example, A**B**C is evaluated as A**(B**C). B**C is evaluated first,
then A is raised to the resulting power.
Normally, two operators cannot appear together. However, HP Fortran allows two consecutive operators if the second operator is a plus or minus.
In the following example, the exponentiation operator is evaluated first because it takes precedence over the multiplication operator:
A**B*C is evaluated as (A**B)*C
Ordinarily, the exponentiation operator would be evaluated first in the following example. However, because HP Fortran allows the combination of the exponentiation and minus operators, the
exponentiation operator is not evaluated until the minus operator is evaluated:
A**-B*C is evaluated as A**(-(B*C))
Note that the multiplication operator is evaluated first, since it takes precedence over the minus operator.
When consecutive operators are used with constants, the unary plus or minus before the constant is treated the same as any other operator. This can produce unexpected results. In the following
example, the multiplication operator is evaluated first, since it takes precedence over the minus operator:
X/-15.0*Y is evaluated as X/-(15.0*Y)
4.1.1.1 Using Parentheses in Numeric Expressions
You can use parentheses to force a particular order of evaluation. When part of an expression is enclosed in parentheses, that part is evaluated first. The resulting value is used in the evaluation
of the remainder of the expression.
In the following examples, the numbers below the operators indicate a possible order of evaluation. Alternative evaluation orders are possible in the first three examples because they contain
operators of equal precedence that are not enclosed in parentheses. In these cases, the compiler is free to evaluate operators of equal precedence in any order, as long as the result is the same as
the result gained by the algebraic left-to-right order of evaluation.
4 + 3 * 2 - 6 / 2 = 7
(4 + 3) * 2 - 6 / 2 = 11
(4 + 3 * 2 - 6) / 2 = 2
((4+3) * 2 - 6) / 2 = 4
Expressions within parentheses are evaluated according to the normal order of precedence. In expressions containing nested parentheses, the innermost parentheses are evaluated first.
Nonessential parentheses do not affect expression evaluation, as shown in the following example:
4 + (3*2) - (6/2)
However, using parentheses to specify the evaluation order is often important in high-accuracy numerical computations. In such computations, evaluation orders that are algebraically equivalent may
not be computationally equivalent when processed by a computer (because of the way intermediate results are rounded off).
Parentheses can be used in argument lists to force a given argument to be treated as an expression, rather than as the address of a memory item.
4.1.1.2 Data Type of Numeric Expressions
If every operand in a numeric expression is of the same data type, the result is also of that type.
If operands of different data types are combined in an expression, the evaluation of that expression and the data type of the resulting value depend on the ranking associated with each data type. The
following table shows the ranking assigned to each data type:
┃ Data Type │Ranking ┃
┃LOGICAL(1) and BYTE │ Lowest┃
┃LOGICAL(2) │ .┃
┃LOGICAL(4) │ .┃
┃LOGICAL(8) │ .┃
┃INTEGER(1) │ .┃
┃INTEGER(2) │ .┃
┃INTEGER(4) │ .┃
┃INTEGER(8) │ .┃
┃REAL(4) │ .┃
┃REAL(8) ^1 │ .┃
┃REAL(16) │ .┃
┃COMPLEX(4) │ .┃
┃COMPLEX(8) ^2 │ .┃
┃COMPLEX(16) │ Highest┃
The data type of the value produced by an operation on two numeric operands of different data types is the data type of the highest-ranking operand in the operation. For example, the value resulting
from an operation on an integer and a real operand is of real type. However, an operation involving a COMPLEX(4) or COMPLEX(8) data type and a DOUBLE PRECISION data type produces a COMPLEX(8) result.
The data type of an expression is the data type of the result of the last operation in that expression, and is determined according to the following conventions:
• Integer operations: Integer operations are performed only on integer operands. Note that logical entities used in a numeric context are treated as integers. In integer arithmetic, any fraction
resulting from division is truncated, not rounded. For example, the result of 1/4 + 1/4 + 1/4 + 1/4 is 0, not 1.
• Real operations: Real operations are performed only on real operands or combinations of real, integer, and logical operands. Any integer operands present are converted to real data type by giving
each a fractional part equal to zero. The expression is then evaluated using real arithmetic. However, in the statement Y = (I/J)*X , an integer division operation is performed on I and J, and a
real multiplication is performed on that result and X.
If any operand is a higher-precision real (REAL(8) or REAL(16)) type, all other operands are converted to that higher-precision real type before the expression is evaluated.
When a single-precision real operand is converted to a double-precision real operand, low-order binary digits are set to zero. This conversion does not increase accuracy; conversion of a decimal
number does not produce a succession of decimal zeros. For example, a REAL variable having the value 0.3333333 is converted to approximately 0.3333333134651184D0 . It is not converted to either
0.3333333000000000D0 or 0.3333333333333333D0 .
• Complex operations: In operations that contain any complex operands, integer operands are converted to real type, as previously described. The resulting single-precision or double-precision
operand is designated as the real part of a complex number and the imaginary part is assigned a value of zero. The expression is then evaluated using complex arithmetic and the resulting value is
of complex type. Operations involving a COMPLEX(4) or COMPLEX(8) operand and a DOUBLE PRECISION operand are performed as COMPLEX(8) operations; the DOUBLE PRECISION operand is not rounded.
These rules also generally apply to numeric operations in which one of the operands is a constant. However, if a real or complex constant is used in a higher-precision expression, additional
precision will be retained for the constant. The effect is as if a DOUBLE PRECISION (REAL(8)) or REAL(16) representation of the constant were given. For example, the expression 1.0D0 + 0.3333333 is
treated as if it is 1.0D0 + 0.3333333000000000D0 .
4.1.2 Character Expressions
A character expression consists of a character operator (//) that concatenates two operands of type character. The evaluation of a character expression produces a single value of that type.
The result of a character expression is a character string whose value is the value of the left character operand concatenated to the value of the right operand. The length of a character expression
is the sum of the lengths of the values of the operands. For example, the value of the character expression 'AB'//'CDE' is 'ABCDE' , which has a length of five.
Parentheses do not affect the evaluation of a character expression; for example, the following character expressions are equivalent:
('AB'//'CD')//'EF'
'AB'//('CD'//'EF')
Each of these expressions has the value 'ABCDEF'.
If a character operand in a character expression contains blanks, the blanks are included in the value of the character expression. For example, 'ABC '//'D E'//'F ' has a value of 'ABC D EF ' .
4.1.3 Relational Expressions
A relational expression consists of two or more expressions whose values are compared to determine whether the relationship stated by the relational operator is satisfied. The following are
relational operators:
┃Operator │ Relationship ┃
┃.LT.│or│< │Less than ┃
┃.LE.│or│<=│Less than or equal to ┃
┃.EQ.│or│==│Equal to ┃
┃.NE.│or│/=│Not equal to ┃
┃.GT.│or│> │Greater than ┃
┃.GE.│or│>=│Greater than or equal to ┃
The result of the relational expression is .TRUE. if the relation specified by the operator is satisfied; the result is .FALSE. if the relation specified by the operator is not satisfied.
Relational operators are of equal precedence. Numeric operators and the character operator // have a higher precedence than relational operators.
In a numeric relational expression, the operands are numeric expressions. Consider the following example:
APPLE + PEACH > PEAR + ORANGE
This expression states that the sum of APPLE and PEACH is greater than the sum of PEAR and ORANGE. If this relationship is valid, the value of the expression is .TRUE.; if not, the value is .FALSE..
Operands of type complex can only be compared using the equal operator (== or .EQ.) or the not equal operator (/= or .NE.). Complex entities are equal if their corresponding real and imaginary parts
are both equal.
In a character relational expression, the operands are character expressions. In character relational expressions, less than (< or .LT.) means the character value precedes in the ASCII collating
sequence, and greater than (> or .GT.) means the character value follows in the ASCII collating sequence. For example:
'ABZZZ' .LT. 'CCCCC'
This expression states that 'ABZZZ' is less than 'CCCCC'. In this case, the relation specified by the operator is satisfied, so the result is .TRUE..
Character operands are compared one character at a time, in order, starting with the first character of each operand. If the two character operands are not the same length, the shorter one is padded
on the right with blanks until the lengths are equal; for example:
'ABC' .EQ. 'ABC '
'AB' .LT. 'C'
The first relational expression has the value .TRUE. even though the lengths of the expressions are not equal, and the second has the value .TRUE. even though 'AB' is longer than 'C'.
A relational expression can compare two numeric expressions of different data types. In this case, the value of the expression with the lower-ranking data type is converted to the higher-ranking data
type before the comparison is made.
For More Information:
On the ranking of data types, see Section 4.1.1.2.
4.1.4 Logical Expressions
A logical expression consists of one or more logical operators and logical, numeric, or relational operands. The following are logical operators:
┃Operator│ Example │ Meaning ┃
┃.AND. │A .AND. B │Logical conjunction: the expression is true if both A and B are true. ┃
┃.OR. │A .OR. B │Logical disjunction (inclusive OR): the expression is true if either A, B, or both, are true. ┃
┃.NEQV. │A .NEQV. B│Logical inequivalence (exclusive OR): the expression is true if either A or B is true, but false if both are true. ┃
┃.XOR. │A .XOR. B │Same as .NEQV. ┃
┃.EQV. │A .EQV. B │Logical equivalence: the expression is true if both A and B are true, or both are false. ┃
┃.NOT. ^1│.NOT. A │Logical negation: the expression is true if A is false and false if A is true. ┃
^1.NOT. is a unary operator.
Periods cannot appear consecutively except when the second operator is .NOT. For example, the following logical expression is valid:
A+B/(A-1) .AND. .NOT. D+B/(D-1)
Data Types Resulting from Logical Operations
Logical operations on logical operands produce single logical values (.TRUE. or .FALSE.) of logical type.
Logical operations on integers produce single values of integer type. The operation is carried out bit-by-bit on corresponding bits of the internal (binary) representation of the integer operands.
Logical operations on a combination of integer and logical values also produce single values of integer type. The operation first converts logical values to integers, then operates as it does with integers.
Logical operations cannot be performed on other data types.
Evaluation of Logical Expressions
Logical expressions are evaluated according to the precedence of their operators. Consider the following expression:
A*B+C*ABC == X*Y+DM/ZZ .AND. .NOT. K*B > TT
This expression is evaluated in the following sequence:
(((A*B)+(C*ABC)) == ((X*Y)+(DM/ZZ))) .AND. (.NOT. ((K*B) > TT))
As with numeric expressions, you can use parentheses to alter the sequence of evaluation.
When operators have equal precedence, the compiler can evaluate them in any order, as long as the result is the same as the result gained by the algebraic left-to-right order of evaluation (except
for exponentiation, which is evaluated from right to left).
You should not write logical expressions whose results might depend on the evaluation order of subexpressions. The compiler is free to evaluate subexpressions in any order. For example, in an expression containing the subexpressions (A(I)+1.0) and B(I)*2.0, either one could be evaluated first.
Some subexpressions might not be evaluated if the compiler can determine the result by testing other subexpressions in the logical expression. Consider the following expression:
A .AND. (F(X,Y) .GT. 2.0) .AND. B
If the compiler evaluates A first, and A is false, the compiler might determine that the expression is false and might not call the subprogram F(X,Y).
For More Information:
On the precedence of numeric, relational, and logical operators, see Section 4.1.6.
┃Previous │Next│Contents │Index┃ | {"url":"http://h71000.www7.hp.com/doc/82final/6324/6324pro_007.html","timestamp":"2014-04-17T01:25:43Z","content_type":null,"content_length":"36670","record_id":"<urn:uuid:eeadb4ae-58db-4dcf-abc4-b8db6dc9dce9>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
A 600-kg car is going over a curve with a radius of 120 meters that is banked at an angle of 25 degrees with a speed of 30 meters per second. The coefficient of static friction between the car and
the road is 0.3. What is the normal force exerted by the road on the car? a) 7240 N b) 1590 N c) 5330 N d) 3430 N e) 3620 N Note: The speed is not Vmax (roughly 32 m/s). I've been working under the
assumption that mgcos(theta) will be increased by the frictional component acting downward. I simply can't seem to get the solution.
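One way to see it (a sketch, taking g ≈ 9.8 m/s²): resolve forces along the direction perpendicular to the road surface. Static friction acts parallel to the surface, so it has no component along the normal, and the normal component of the centripetal acceleration is (v²/r)·sinθ. That gives
$N = m\left(g\cos\theta + \frac{v^2}{r}\sin\theta\right) = 600\left(9.8\cos 25^\circ + \frac{30^2}{120}\sin 25^\circ\right) \approx 7.2\times 10^{3}\ \text{N},$
which matches choice (a), about 7240 N. The coefficient of friction matters for quantities like Vmax, but it does not enter the normal force itself.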
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50b67682e4b0c789d50f6607","timestamp":"2014-04-18T18:37:35Z","content_type":null,"content_length":"41272","record_id":"<urn:uuid:8c06b6bc-60ec-45da-acba-7868d82da476>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
La Mesa, CA
Find a La Mesa, CA ACT Tutor
...As a Kansas-credentialed teacher, I have taught Trigonometry in the classroom and to tutors for pay. I'm interested in helping your student do well on the SAT and ACT tests. As a result of my
test scores, I received scholarships to two universities.
9 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...I received 5's in both of the AP calculus tests, and am a UCSD biology student and so use Calculus on a regular basis in my classes. I have gone through math courses up until the higher levels
of calculus. I still use geometry in many of my science courses as part of my biology major at UCSD. I took honors precalculus in high school, and passed with an A.
42 Subjects: including ACT Math, reading, English, writing
...I specialize in tutoring mathematics. I have tutored all math subjects from basic arithmetic past advanced calculus and differential equations since 2009. I love helping students find their
own learning style, and giving them the tools to learn the subject on their own without me!
37 Subjects: including ACT Math, calculus, geometry, statistics
...There are few jobs more rewarding than tutoring, and I have been lucky enough to tutor a diverse group of students throughout my career. I specialize in tutoring English at all levels,
particularly in writing skills and reading comprehension. I also specialize in tutoring for GRE, SAT, and ACT ...
29 Subjects: including ACT Math, chemistry, English, algebra 1
...Math is just one of my interests and accomplishments. I am well versed in Spanish as well, including grammar and literature. I studied art and history in Spain at the University of Barcelona
for a year.I have taken and passed the CBEST exam.
11 Subjects: including ACT Math, Spanish, algebra 2, SAT math
Related La Mesa, CA Tutors
La Mesa, CA Accounting Tutors
La Mesa, CA ACT Tutors
La Mesa, CA Algebra Tutors
La Mesa, CA Algebra 2 Tutors
La Mesa, CA Calculus Tutors
La Mesa, CA Geometry Tutors
La Mesa, CA Math Tutors
La Mesa, CA Prealgebra Tutors
La Mesa, CA Precalculus Tutors
La Mesa, CA SAT Tutors
La Mesa, CA SAT Math Tutors
La Mesa, CA Science Tutors
La Mesa, CA Statistics Tutors
La Mesa, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/la_mesa_ca_act_tutors.php","timestamp":"2014-04-17T15:55:38Z","content_type":null,"content_length":"23666","record_id":"<urn:uuid:e7a854cd-c013-4430-801e-841ce10c9f59>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vladimir Voevodsky
Владимир Воеводский (who publishes in English as Vladimir Voevodsky) is a Russian mathematician working at the Institute for Advanced Study.
He received a Fields medal in 2002 for a proof of the Milnor conjecture. The proof crucially uses A1-homotopy theory and motivic cohomology developed by Voevodsky for this purpose. In further
development of this in 2009 Voevodsky announced a proof of the Bloch-Kato conjecture.
After this work in algebraic geometry, cohomology and homotopy theory Voevodsky turned to the foundations of mathematics and is now working on homotopy type theory which he is advertising as a new
“univalent foundations” for modern mathematics with its emphasis on homotopy theory and higher category theory.
A list of video-recorded talks by Voevodsky in the context of homotopy type theory is here.
The transcript of another interview (in Russian) is available at: (part 1) and (part 2) | {"url":"http://www.ncatlab.org/nlab/show/Vladimir+Voevodsky","timestamp":"2014-04-21T02:01:08Z","content_type":null,"content_length":"17019","record_id":"<urn:uuid:1d5eb318-7f8c-4c52-a4bd-a98ce3e6fa82>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Richardson Trigonometry Tutor
Find a Richardson Trigonometry Tutor
...Prealgebra is that critical point between the student's prior math experiences and the more challenging upcoming courses in algebra and geometry. With my skill set in the basic handling of
numbers and the symbolic representation of numbers and ideas, I will build up the confidence level of the s...
17 Subjects: including trigonometry, chemistry, geometry, GRE
...For this reason, I am called Uncle Joe. At Christ Ambassador's Int. School, where I happened to be the science tutor, I was a household name because I covered the syllabus with accuracy.
16 Subjects: including trigonometry, chemistry, calculus, geometry
...I currently am working with a company on applying a transform of a first order Differential Equation with other functions involving the PID method an inputting the Laplace Transformation in
order to complete a looping energy flow as a mathematical description as a second order Differential Equati...
11 Subjects: including trigonometry, calculus, physics, geometry
...I passed my applied ordinary differential equations class with a B+ after acing 3rd semester calculus. I also wrote dozens of C/C++ computer programs to solve differential equations using
several iterative methods. Back in 1992 I was a college instructor teaching a C/C++ programming language course (CIS 233) at Sinclair Community College in Dayton, OH.
48 Subjects: including trigonometry, chemistry, physics, calculus
...From quiet in-class observation, to late night discussions with my older brother, to humble exhortation of high school professors -- I've always had a passionate interest in education stirring
within. With a love of kids, commitment, potential for growth as well as already cultivated ability, I ...
17 Subjects: including trigonometry, reading, biology, geometry | {"url":"http://www.purplemath.com/richardson_tx_trigonometry_tutors.php","timestamp":"2014-04-20T01:52:33Z","content_type":null,"content_length":"24166","record_id":"<urn:uuid:418b65a1-e574-4cd4-b162-e827d21437e5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help 24/7
Welcome to Math Help 24/7
Below you will find math courses that have links to video tutorials created by Valencia math professors. The videos use a variety of technologies, all simple to view, but some will require flash
player. If you need flash player installed, you will be given directions on how to install this at the time it is needed.
One type of video used is called a Pencast, created by the Livescribe pen. A tutorial video for getting the most out of viewing Pencast is also available.
Please answer a few questions using our feedback form. Any information will be helpful in improving the website and the content provided. | {"url":"http://valenciacollege.edu/math/livescribe.cfm","timestamp":"2014-04-20T16:10:53Z","content_type":null,"content_length":"298132","record_id":"<urn:uuid:f70d9cee-4d66-4ad4-a773-50dbdac50f2f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stockertown Science Tutor
...However, I took a semester off, so I am located in the Lehigh Valley area. While at Kutztown, I spent one year as a Supplemental Instructor and one-on-one tutor for introductory physics
classes. I also have one semester experience as a teaching assistant at Cornell.
16 Subjects: including physics, physical science, chemistry, calculus
...I got married, and we moved to this area for my husband's work. While my degree is in Biology, I also have experience in Mathematics. I am a very patient person, and I love helping people
through difficult material.
35 Subjects: including astronomy, anatomy, botany, nursing
...I earned my PhD in cellular and molecular biology focusing on heart development. I enjoy helping students learn that science can be both interesting and fun. Whether you are struggling to
understand the concepts or just need a little extra help, I am the tutor for you.
7 Subjects: including biochemistry, genetics, biology, chemistry
...I am currently teaching on-line classes for San Jose State University. I love teaching and tutoring gives me the luxury of working with one students at a time.Organizational skills are
necessary to make the most of your study time. You need to be able to outline the subject matter and identify the most important information in two ways.
8 Subjects: including biology, ecology, writing, GED
...I've also taught classes from middle school to the university level, all in either science or math. I have a bachelor's degree in Biochemistry from Albright College and I did graduate work in
Microbiology at the University of Connecticut. While I was at UConn, I taught three different classes: one semester of Anatomy and Physiology and two semesters of Biochemistry.
17 Subjects: including chemistry, algebra 2, calculus, geometry | {"url":"http://www.purplemath.com/stockertown_science_tutors.php","timestamp":"2014-04-20T08:43:52Z","content_type":null,"content_length":"23972","record_id":"<urn:uuid:1b401e0a-9b82-49f3-b304-814cf36acc88>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shenandoah, TX Math Tutor
Find a Shenandoah, TX Math Tutor
...I'm excited about tutoring and look forward to helping you learn!Took prealgebra growing up and made an A+ when I completed the course. This subject was never difficult for me. I've used it
all throughout college as well to solve much more difficult problems.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...Recently, I lived in Costa Rica, Central America, which has prepared me to teach Spanish or English as a second language. I have a Masters in Educational Administration, and I have a passion
to teach. Before coming to the Gulf Coast, I worked as a teacher in Miami, Florida.
24 Subjects: including algebra 1, prealgebra, reading, Spanish
...I believe it is my job as a tutor to help students learn which formulas to apply to solve problems. I also use a lot of drawings and solid model tools to help students visualize geometric
shapes. This is a subject where I find it is really important to make sure students have a firm foundation ...
7 Subjects: including algebra 1, algebra 2, biology, trigonometry
...Much like math, a language is easier to learn when you do it daily, even in small amounts if necessary. Third, not all countries speak English. In fact, one of the biggest complaints about
Americans is that they think everyone does (or should) speak English.
45 Subjects: including algebra 1, algebra 2, prealgebra, Spanish
...Every student that I have tutored has significantly improved their performance in their math classes. Many I have brought from failing to A's and B's. I have been told many times that parents
appreciate that I build a personal relationship with the student and use methods that work with their interests, personalities and learning style.
32 Subjects: including SAT math, piano, study skills, finance
Related Shenandoah, TX Tutors
Shenandoah, TX Accounting Tutors
Shenandoah, TX ACT Tutors
Shenandoah, TX Algebra Tutors
Shenandoah, TX Algebra 2 Tutors
Shenandoah, TX Calculus Tutors
Shenandoah, TX Geometry Tutors
Shenandoah, TX Math Tutors
Shenandoah, TX Prealgebra Tutors
Shenandoah, TX Precalculus Tutors
Shenandoah, TX SAT Tutors
Shenandoah, TX SAT Math Tutors
Shenandoah, TX Science Tutors
Shenandoah, TX Statistics Tutors
Shenandoah, TX Trigonometry Tutors | {"url":"http://www.purplemath.com/shenandoah_tx_math_tutors.php","timestamp":"2014-04-17T01:28:59Z","content_type":null,"content_length":"23925","record_id":"<urn:uuid:2b9f80fd-e9a3-434b-8701-5359bc3782d8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Strategy that Fosters Translating Between Multiple Representations
Mathematical communication (Schoenfeld et al., 1992, Koedinger & Nathan, in press) is increasingly seen as a goal for students, and in particular, translating between algebraic (e.g. "m/s+h"),
English (e.g. "the total trip time") and concrete (e.g. "7 hours") representations. So far we have discussed students translating from the English to algebraic or concrete representations. The next
two strategies explore asking students to translate in different ways including: 1) translating from the algebraic to the English and 2) within the English representation.
Subgoal Reification Strategy
The subgoal reification strategy is inspired by Corbett & Anderson's (1995) Lisp tutor, which showed learning gains when the students were asked to post (or reify) their subgoals before attempting to
write Lisp code. We have added to our tutor the ability to choose a similar strategy. At T2 below, the student is asked for the name of the subgoal. At T3 the student is asked to explain in English
how to achieve that subgoal. At T4 the tutor asks for the subgoal in algebra. T5 and T6 repeat the pattern from T3 and T4.
T1: Hello. [Again, doing the "bike-trip" problem] Please write an expression for the total time the trip took.
S1: help!
T2: In order to find the total time, what do you first have to find out? [The student is presented with pull-down menu to select a quantity-noun phrase]
S2: [menu:] The time actually on the bikes
T3: Correct. Can you explain how you would find the time actually on the bikes? [Presented with the following sentence to fill in the blanks: "The time actually on the bikes is equal to (select
quantity) (select operator) (select quantity)." The student can select the 2 quantities and the operator from menus. ]
S3: The time actually on the bikes is equal to [1^st menu:] the distance biked [2^nd menu:] divided by [3^rd menu:] the speed of the bikes.
T4: Good. Now say that [note the simple pronoun substitution] in symbols.
S4: m/s
T5: Correct. Can you explain how you would find the total time of the trip using the time on the bikes?
S5: The total time of the trip is equal to [1^st menu:] the amount of time for the break [2^nd menu:] added to [3^rd menu:] the time actually on the bikes.
T6: Can you say that in symbols?
S6: b+m/s
T7: Correct.
Back to The Ms. Lindquist Algebra Tutoring Web Site. | {"url":"http://www.cs.cmu.edu/~neil/example_verbal.html","timestamp":"2014-04-18T19:22:31Z","content_type":null,"content_length":"4052","record_id":"<urn:uuid:c214087c-f513-4f57-96a1-91b6302b3064>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
rxDTree(): a new type of tree algorithm for big data
by Joseph Rickert
The rxDTree() function included in the RevoScaleR package distributed with Revolution R Enterprise is an example of a new class of algorithms that are being developed to deal with very large data
sets. Although the particulars differ, what these algorithms have in common is the use of approximations, methods of summarizing or compressing data and built-in parallelism. I think that it is
really interesting to see something as basic to modern statistics as a tree algorithm rejuvenated this way.
In the nearly thirty years since Breiman et al. introduced classification and regression trees they have become part of the foundation for modern nonparametric statistics, machine learning and data
mining. The basic implementation of these algorithms in R’s rpart() function (recursive partitioning and regression trees) and elsewhere have proved to be adequate for many large scale, industrial
strength data analysis problems. Nevertheless, today’s very large data sets (“Big Data”) present significant challenges for decision trees. In part this is due to the need to sort all the numerical
attributes used in a model in order to determine the split points. One approach to dealing with the issue is to avoid sorting the raw data altogether by working with an approximation of the data.
In a 2010 paper, Ben-Haim and Yom-Tov introduce a novel algorithm along these lines by using histograms to build trees. This algorithm, explicitly designed for parallel computing, takes the approach
of implementing horizontal parallelism: each processing node sees all of the variables for a subset (chunk) of the data. These “compute” nodes build histograms of the data and the master node
integrates the histograms and builds the tree. The details of the algorithm, its behavior and performance characteristics are described in a second, longer paper by the same authors.
One potential downside of the approach is that since the algorithm only examines a limited number of split points (the boundaries of the histogram bins), for a given data set, it may produce a tree
that is different from what rpart() would build. In practice though, this is not as bad as it sounds. Increasing the number of bins improves the accuracy of the algorithm. Moreover, Ben-Haim and
Yom-Tov provide both an analytical argument and empirical results that show the error rate of trees built with their algorithm approaches the error rate of the standard tree algorithm.
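As a rough sketch of the idea (not the RevoScaleR implementation; the bin count, the use of Gini impurity, and the toy data below are assumptions made only for illustration), the histogram approach for a single numeric predictor looks roughly like this in R: summarize the data into a fixed number of bins, then score candidate splits only at the bin boundaries. Merging the per-chunk histograms that the compute nodes build amounts to summing the per-bin counts before this step.

# Sketch: approximate the best split for one numeric predictor by binning it
# into a fixed-size histogram and scoring only the bin boundaries as splits.
approx_best_split <- function(x, y, num_bins = 32) {
  # x: numeric predictor; y: 0/1 class labels
  breaks <- seq(min(x), max(x), length.out = num_bins + 1)
  bin <- findInterval(x, breaks, rightmost.closed = TRUE, all.inside = TRUE)
  pos <- tapply(y, factor(bin, levels = 1:num_bins), sum)                   # class-1 count per bin
  cnt <- tapply(rep(1, length(x)), factor(bin, levels = 1:num_bins), sum)   # total count per bin
  pos[is.na(pos)] <- 0
  cnt[is.na(cnt)] <- 0
  gini <- function(p, n) if (n == 0) 0 else 2 * (p / n) * (1 - p / n)
  best <- list(split = NA, impurity = Inf)
  for (b in 1:(num_bins - 1)) {                  # candidate split = boundary after bin b
    nl <- sum(cnt[1:b]); pl <- sum(pos[1:b])
    nr <- sum(cnt) - nl; pr <- sum(pos) - pl
    imp <- (nl * gini(pl, nl) + nr * gini(pr, nr)) / sum(cnt)
    if (imp < best$impurity) best <- list(split = breaks[b + 1], impurity = imp)
  }
  best
}
# Toy usage (made-up data): two overlapping groups, so the split should land near 1.5
set.seed(1)
x <- c(rnorm(200, 0), rnorm(200, 3))
y <- rep(c(0, 1), each = 200)
approx_best_split(x, y, num_bins = 32)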
rxDTree() is an implementation of the Ben-Haim and Yom-Tov algorithm designed for working with very large data sets in a distributed computing environment. Most of the parameters controlling the
behavior of rxDTree() are similar to those of rpart(). However, rxDTree() provides an additional parameter: maxNumBins specifies the maximum number of bins to use in building histograms and hence,
controls the accuracy of the algorithm. For small data sets where you can test it out, specifying a large number of bins will enable rxDTree() to produce exactly the same results as rpart().
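For example, a quick equivalence check on a small data set might look like the sketch below; it assumes a Revolution R Enterprise session (so RevoScaleR and rxDTree() are available) and uses the kyphosis data shipped with rpart purely for illustration.

library(rpart)  # supplies rpart() and the small kyphosis data set
fit.rpart <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
                   cp = 0.01, xval = 0)
# With maxNumBins set high, the bin boundaries become fine enough that rxDTree
# considers essentially the same candidate splits as rpart (per the discussion above).
fit.rx <- rxDTree(Kyphosis ~ Age + Number + Start, data = kyphosis,
                  cp = 0.01, xVal = 0, maxNumBins = 1000)
fit.rpart
fit.rx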
Because of the computational overhead involved with the histogram building mechanisms of rxDTree() you might expect it to be rather slow with small data. However, we have found that rxDTree performs
well with respect to rpart() even for relatively small data sets. The following script gives some idea of the performance that can be expected from running on a reasonably complex data set. (All 59
explanatory variables are numeric.) The script reads in the segmentationData set from the caret package, replicates the data to produce a file containing 2,021,019 rows, specifies a model, and then
runs it using both rpart() and rxDTree().
########## BENCHMARKING rxDTree ON A CLUSTER #############
# This script was created to show some simple benchmarks for the RevoScaleR
# rxDTree function for building classification and regression trees on large data sets.
# The benchmarks were run on a 5 node HPC cluster (4 cores per node running at 3.20 GHz, with 16 GB of RAM per node)
# The script does the following:
# 1. Fetch the 2,019-row by 61-column segmentationData set from the caret package
# 2. Set up a compute context to run the code on a Microsoft HPC Cluster
# 3. Replicate the SegmentationData to create a file with 2,021,019 rows
# 4. Set up the formula and other parameters for the model
# 5. Run the rxDTree to build a classification model
# Get segmentationData from the caret package
library(caret) # provides the segmentationData set
data(segmentationData) # dim: [1] 2019 61
rxOptions(reportProgress = 0)
# Set up the compute context for the HPC cluster
# (head node and other connection details for the cluster are omitted here)
grxTestComputeContext <- RxHpcServer(
    dataPath = c("C:/data"),
    shareDir = paste("AllShare\\", Sys.getenv("USERNAME"), sep=""),
    nodes = "COMPUTE11,COMPUTE12,COMPUTE13,COMPUTE10")
rxSetComputeContext(grxTestComputeContext)
# Replicate the data to build a larger file
times <- 1000 # how many times to replicate the data set
segmentationDataBig <- do.call(rbind, sapply(rep("segmentationData", times), as.name))
#elapsed time: 136.652
rxDataStep(inData = segmentationDataBig, outFile = "segmentationDataBig", overwrite = TRUE)
# Build the formula and set parameters
allvars <- names(segmentationData)
xvars <- allvars[-c(1, 2, 3)]
(form <- as.formula(paste("Class", "~", paste(xvars, collapse = "+"))))
cp <- 0.01 # Set the complexity parameter for the tree
xval <- 0 # Don't do any cross validation
maxdepth <- 5 # Set the maximum tree depth
# Run the model with rxDtree on the big xdf file
dtree.model.xdf <- rxDTree(form,
data = "segmentationDataBig",
maxDepth = maxdepth,
cp = cp,
xVal = xval, blocksPerRead = 250)
## elapsed time: 50.52 seconds
On an Intel quad-core, 64-bit Q9300 system with 2.5 GHz processors and 8 GB of RAM, the elapsed time for rpart() to build the tree was 312.27 seconds, while rxDTree(), which took advantage of the parallelism possible with four cores, ran in 71.39 seconds. The real payoff, however, comes from running rxDTree() on a cluster. The same model run on a 5 node Microsoft HPC cluster assembled from similar commodity hardware (4 cores per node, each running at 3.20 GHz with 16 GB of RAM) took 50.52 seconds to build a tree from data stored in a binary .xdf file. We expect that elapsed time will scale linearly with the number of rows.
One last observation about the script is the use of the functions RxHpcServer() and rxSetComputeContext(). Together these provide a concrete example of how the distributed computing infrastructure of
RevoScaleR preserves R's ease of use and working efficiency in a "Big Data" environment. The first function defines the "compute context" to be a Microsoft HPC cluster. The second function tells
parallel external memory algorithms (PEMAs), like rxDTree, to use this cluster to execute. So, by merely changing one line of code PEMAs can be tested on a PC and then run on a cluster or other
production environment. For more details on rxDTree have a look at Big Data Decision Trees with R.
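For example (a sketch, reusing the context object from the script above; the "local" shortcut for the default workstation context is an assumption here):

rxSetComputeContext("local")               # develop and test the rxDTree() call locally
# ... same modeling code ...
rxSetComputeContext(grxTestComputeContext) # then point it at the HPC cluster for the full run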
Any information regarding the publication related to this Algorithm? Thanks.
Is the rxDTree function available in any of the current R packages? If so, which ones?
@Jose, rxDTree is available in the revoscaler package, which is only available in Revolution R Enterprise. If you're at a University, it's available through our Academic Program. | {"url":"http://blog.revolutionanalytics.com/2013/07/rxdtree-a-new-type-of-tree-algorithm.html","timestamp":"2014-04-24T15:29:31Z","content_type":null,"content_length":"45059","record_id":"<urn:uuid:50a24218-0c18-4436-b7a1-13418472c7d5>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
Related rates of change.
June 19th 2010, 02:31 AM
Related rates of change.
A cylindrical container of fixed length 90 cm is being pressure tested, and the radius is increasing at a constant rate of 0.01 cm/min. When the radius is 25 cm, find the rate of change of:
(a) the total surface area
(b) the Volume
June 19th 2010, 02:58 AM
What is the total surface area of the cylinder?
what is the volume of the cylinder?
June 20th 2010, 01:17 PM
So we have:
$V = (90)(\pi)(r^2)$
$SA = 2\pi r^2+2\pi r (90)$
For volume we have:
$\frac{dV}{dt} = (180)(\pi)(r)\frac{dr}{dt}$
At the values you gave we have:
$\frac{dV}{dt} = (180)(\pi)(25)(0.01)$
Thats the rate of change of the volume, can you figure it out the same way for surface area? | {"url":"http://mathhelpforum.com/calculus/148857-related-rates-change-print.html","timestamp":"2014-04-16T17:23:37Z","content_type":null,"content_length":"5670","record_id":"<urn:uuid:5bdf864e-5f95-4cf7-940c-44c30a41297c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
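For reference, carrying the same method through both parts (with r = 25 cm and dr/dt = 0.01 cm/min):

$\frac{dV}{dt} = 180\pi r\,\frac{dr}{dt} = 180\pi(25)(0.01) = 45\pi \approx 141.4 \text{ cm}^3\text{/min}$

$\frac{dS}{dt} = (4\pi r + 180\pi)\,\frac{dr}{dt} = (100\pi + 180\pi)(0.01) = 2.8\pi \approx 8.80 \text{ cm}^2\text{/min}$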
reflection (matrices)
Verify that M(theta) is orthogonal, and find a unit vector n such that the line fixed by the reflection is given by the equation
n . x = c,
for a suitable constant c, which should also be determined.
I did the verification part, by multiplying M(theta) by its transpose. But how do I do the 2nd part (regarding finding the unit vector)?
Subtraction - Properties
It matters very much which of the two parts of a difference are named first. Taking $500 from an account with $300 in it is very different from taking $300 from $500. For this reason, subtraction is
not commutative: a - b does not equal b - a.
Subtraction is not associative either: (a - b) - c does not equal a - (b - c).
An example will demonstrate this: (20 - 10) - 3 is 7, but 20 - (10 - 3) is 13.
Often one encounters expressions such as 5x^2 - 2 - 3x^2, with no indication of which subtraction is to be done first. Since subtraction is non-associative, it matters. To avoid this ambiguity one
can agree that subtractions, unless otherwise indicated, are to be done left-to-right. This is a rather limiting agreement, therefore, it may be more convenient to use some other order. Another
agreement, which is the common agreement of algebra, is to treat the minus sign as a plus-the-opposite-of sign. Thus one would interpret the example above as 5x^2 + (-2) + (-3x^2). In this
interpretation it becomes a sum, whose terms can be combined in any order one pleases.
In certain sets subtraction is not a closed operation. The set of natural numbers, for instance, is not closed with respect to subtraction. If a merchant will not extend credit, one cannot buy an
article whose price is greater than the amount of money one has.
User Comments
over 3 years ago
what are the PROPERTIES OF SUBTRACTION ?? | {"url":"http://science.jrank.org/pages/6576/Subtraction-Properties.html","timestamp":"2014-04-17T18:23:12Z","content_type":null,"content_length":"14289","record_id":"<urn:uuid:896ccf12-9615-4ad4-afbe-0067b934f5b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
[R] ggplot2: can one have separate ylim for each facet?
Etches Jacob jacob.etches at utoronto.ca
Mon Nov 17 22:49:26 CET 2008
In lattice
#toy data
x <- rnorm(100)
y <- rnorm(100)
k <- sample(c("Weak","Strong"),100,replace=T)
j <- sample(c("Tall","Short"),100,replace=T)
w <- data.frame(x,y,j,k)
will give you a scale in each subplot with a range equal to the range
of y within each subplot.
Is this possible using ggplot2?
qplot(x,y,data=w) + facet_grid(j~k) + ylim(-2,2)
produces a plot with the same range in each subplot. Can the lattice
behaviour be reproduced in ggplot2?
Jacob Etches
More information about the R-help mailing list | {"url":"https://stat.ethz.ch/pipermail/r-help/2008-November/180190.html","timestamp":"2014-04-17T10:56:30Z","content_type":null,"content_length":"3330","record_id":"<urn:uuid:df1af06f-5282-43a7-89f3-63c7a094ef8c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
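A note on the question above: in lattice the per-panel ranges come from the scales argument, and later versions of ggplot2 expose a similar scales argument on the faceting functions (availability depends on your ggplot2 version). A sketch:

library(lattice)
xyplot(y ~ x | j * k, data = w,
       scales = list(y = list(relation = "free")))   # each panel gets its own y range

library(ggplot2)
qplot(x, y, data = w) + facet_wrap(~ j + k, scales = "free_y")  # free y per panel
# facet_grid(j ~ k, scales = "free_y") frees the y scale per row of panels instead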
One has to wonder about photon interaction, and when to think of them as a particle and when as simply a wave. A friend of mine told me to think of them as a wave, because they are without mass. But
why should I view it that way, when even theoretical mass is mass, such as weak force. So how might one think of them? And what about this:
Energy of one photon, expressed as a mass:
m = (4.13566733×10^-15)(299,792,458)(1.11265006×10^-17)(λ^-1)
By this logic, and by no means do I claim it to be infallible, photons do have a theoretical mass inversely proportional to their wavelength, multiplied by the constant 1.37951014×10^-23, which I dub, were it to have any scientific ground to it, the Demitri constant.
How to draw a intersection point on two cures?
03-07-2010, 11:47 PM
How to draw an intersection point on two curves?
Hi, I want to draw the point of intersection of two curves. For example, the point of intersection of the line y=x and the parabola y=x*x (-2<=x<=2). But it fails with the following program segment:
For x=-2 to 2 step 0.01
If x=x*x then
' plot the intersection point here
End If
Next x
What's wrong with it? I hope you can explain it. Thank you a lot.
03-08-2010, 10:43 AM
For one thing, x never gets incremented or decremented so it always has a value of -2
If you put a break on your code and step through it you will see that is ALWAYS hits the Else part, and x continues to equal -2
03-08-2010, 04:53 PM
Not if x is defined as Double (or Float); in that case the loop works OK.
This is a classic rounding problem: the step 0.01 is not exactly representable in binary, so the test (x = x*x) is never true (the accumulated x never lands exactly on 0 or 1, the two solutions)
This is one of the many ways (and not the best, but I did not want to change the code) to fix the problem:
Picture1.Scale (-2, 10)-(2, -10)
Dim x As Double
Dim dstep As Double
dstep = 2 ^ -6
For x = -2 To 2 Step dstep
If x = x * x Then
Picture1.ForeColor = vbRed
Else
Picture1.ForeColor = vbBlack
End If
Picture1.PSet (x, x * x)
Next x | {"url":"http://forums.devx.com/printthread.php?t=173986&pp=15&page=1","timestamp":"2014-04-21T04:45:03Z","content_type":null,"content_length":"7315","record_id":"<urn:uuid:f1900e9c-7a6b-416c-af7d-715c67600d6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why is the Fermi level a constant in thermal equilibrium?
Just as in a PN diode (P = material 1; N = material 2),
the Fermi levels are the same (Ef1 = Ef2) in thermal equilibrium.
I am confused about why there is no net energy transfer,
so that one can say each energy E obeys
rate from 1 to 2 ~ N1(E)f1(E)*N2(E)[1-f2(E)] ...(*)
rate from 2 to 1 ~ N2(E)f2(E)*N1(E)[1-f1(E)] ...(**)
Couldn't an electron hop from 1 (Ea) to 2 (Eb)
while another electron falls from 2 (Ec) to 1 (Ed),
where Eb-Ea = Ec-Ed (the energy differences are the same, so energy conservation is still obeyed)?
So it seems possible for an electron to hop from material 1's Ea to material 2's Eb;
it won't necessarily transfer at the same energy.
Why does the book say "each energy" will obey eqs. (*) & (**)?
I think it should instead be
integral over E1 and E2 of {N1(E1)f1(E1)*N2(E2)[1-f2(E2)]} dE1 dE2 = integral over E1 and E2 of {N2(E1)f2(E1)*N1(E2)[1-f1(E2)]} dE1 dE2
(it's possible to hop anywhere, as long as the total "rate from 1 to 2 = rate from 2 to 1"),
but when I calculate with that condition
I can't derive that Ef1 = Ef2.
Is anything wrong?
Thanks for helping!! | {"url":"http://www.physicsforums.com/showthread.php?s=4705dd68e6116082ce3f35c10224738a&p=4558090","timestamp":"2014-04-17T07:28:39Z","content_type":null,"content_length":"21083","record_id":"<urn:uuid:46c1b210-7bf3-4675-b480-719c3c80503a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
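A sketch of the usual textbook argument, assuming the dominant exchange processes are elastic (an electron crosses the junction without changing its energy): detailed balance is applied to each such single-electron process and its reverse separately, so at every energy E

$f_1(E)\,[1-f_2(E)] = f_2(E)\,[1-f_1(E)] \;\Rightarrow\; f_1(E) = f_2(E),$

and since $f_i(E) = [1+e^{(E-E_{Fi})/kT}]^{-1}$, equality at any E forces $E_{F1} = E_{F2}$. The correlated two-electron exchange described in the question is a separate, higher-order process; its balance is automatically satisfied once $f_1 = f_2$. The integrated condition alone is weaker, which is why it does not by itself pin down $E_{F1} = E_{F2}$.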
This monograph presents a geometric theory for incompressible flow and its applications to fluid dynamics. The main objective is to study the stability and transitions of the structure of
incompressible flows and its applications to fluid dynamics and geophysical fluid dynamics. The development of the theory and its applications goes well beyond its original motivation of the study of
oceanic dynamics.
The authors present a substantial advance in the use of geometric and topological methods to analyze and classify incompressible fluid flows. The approach introduces genuinely innovative ideas to the
study of the partial differential equations of fluid dynamics. One particularly useful development is a rigorous theory for boundary layer separation of incompressible fluids.
The study of incompressible flows has two major interconnected parts. The first is the development of a global geometric theory of divergence-free fields on general two-dimensional compact manifolds.
The second is the study of the structure of velocity fields for two-dimensional incompressible fluid flows governed by the Navier-Stokes equations or the Euler equations.
Motivated by the study of problems in geophysical fluid dynamics, the program of research in this book seeks to develop a new mathematical theory, maintaining close links to physics along the way. In
return, the theory is applied to physical problems, with more problems yet to be explored.
The material is suitable for researchers and advanced graduate students interested in nonlinear PDEs and fluid dynamics.
Advanced graduate students and research mathematicians interested in nonlinear PDEs and fluid dynamics.
• Introduction
• Structure classification of divergence-free vector fields
• Structural stability of divergence-free vector fields
• Block stability of divergence-free vector fields on manifolds with nonzero genus
• Structural stability of solutions of Navier-Stokes equations
• Structural bifurcation for one-parameter families of divergence-free vector fields
• Two examples
• Bibliography
• Index | {"url":"http://ams.org/bookstore-getitem/item=SURV-119","timestamp":"2014-04-19T22:59:19Z","content_type":null,"content_length":"16541","record_id":"<urn:uuid:2832d5f2-dcce-4939-ae5f-cba84a21be54>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
A fun and interesting math puzzle
Here's a nice challenge that I made up thinking about compressing data one night...
Print out a number that you can read within it:
The numbers 1 to 20(easier) or 1 to 99(very hard)
→Can be read forward or backwards and use same numbers
eg. 123 you can read 1,2,3,11,12,22,21,23,32,33
→Cannot! have any two digit repeats
eg. 121 you can read 12 foward and also backwards!
exception! any double numbers like 11 22 33 etc.
The smaller the answer in digits the better.
Here is the answer I get going from 1 to 20 starting with writing a 0123456789 down.
and again starting with writing 8642013579,
Is there a better number out there that is shorter yet?
starting with the number 10 I get: (10 doesn't have to end up at the start)
This is the best attempt so far I've gotten... I see the number 24 repeated right off the bat so this is wrong.
This is turning out to be quite interesting and quite a fun challenge. I would love to know if there is only one unique answer when you start by writing the number 10 down.
What would happen if you started with the number 61?
I would love to see some answers (either right or wrong!). | {"url":"http://www.physicsforums.com/showthread.php?p=4272367","timestamp":"2014-04-20T14:10:35Z","content_type":null,"content_length":"29338","record_id":"<urn:uuid:93affbc5-d67d-445d-91ce-c73fbd0fa6bc>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 180
, 1995
"... A counterpart to von Neumann and Morgenstern' expected utility theory is proposed in the framework of possibility theory. The existence of a utility function, representing a preference ordering
among possibility distributions (on the consequences of decision-maker's actions) that satisfies a series ..."
Cited by 101 (25 self)
Add to MetaCart
A counterpart to von Neumann and Morgenstern's expected utility theory is proposed in the framework of possibility theory. The existence of a utility function, representing a preference ordering among
possibility distributions (on the consequences of decision-maker's actions) that satisfies a series of axioms pertaining to decision-maker's behavior, is established. The obtained utility is a
generalization of Wald's criterion, which is recovered in case of total ignorance; when ignorance is only partial, the utility takes into account the fact that some situations are more plausible than
others. Mathematically, the qualitative utility is nothing but the necessity measure of a fuzzy event in the sense of possibility theory (a so-called Sugeno integral). The possibilistic
representation of uncertainty, which only requires a linearly ordered scale, is qualitative in nature. Only max, min and order-reversing operations are used on the scale. The axioms express a
risk-averse behavior of the d...
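To make the construction in this abstract concrete, here is a minimal Python sketch of the pessimistic (necessity-based) and optimistic (possibility-based) qualitative utilities it describes, taking the ordinal scale to be [0, 1] with order reversal 1 - x; the state names and all numbers are invented for illustration, not taken from the paper.

# Pessimistic and optimistic qualitative utilities over a finite set of states,
# with possibility degrees pi(s) and consequence utilities u(s) on the same scale.
# Only max, min and the order-reversing map 1 - x are used.

def pessimistic_utility(pi, u):
    """Necessity-based, uncertainty-averse utility: min over s of max(1 - pi[s], u[s])."""
    return min(max(1 - pi[s], u[s]) for s in pi)

def optimistic_utility(pi, u):
    """Possibility-based utility: max over s of min(pi[s], u[s])."""
    return max(min(pi[s], u[s]) for s in pi)

pi = {"s1": 1.0, "s2": 0.6, "s3": 0.2}   # how plausible each state is
u  = {"s1": 0.3, "s2": 0.8, "s3": 1.0}   # how good the act's consequence is in that state

print(pessimistic_utility(pi, u))  # 0.3
print(optimistic_utility(pi, u))   # 0.6

Note that under total ignorance (pi[s] = 1 for every state) the pessimistic value reduces to the minimum of u over all states, i.e. Wald's maximin criterion, which matches the recovery claim in the abstract.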
- European Journal of Operational Research , 2000
"... This paper presents a justification of two qualitative counterparts of the expected utility criterion for decision under uncertainty, which only require bounded, linearly ordered, valuation sets
for expressing uncertainty and preferences. This is carried out in the style of Savage, starting with ..."
Cited by 51 (7 self)
Add to MetaCart
This paper presents a justification of two qualitative counterparts of the expected utility criterion for decision under uncertainty, which only require bounded, linearly ordered, valuation sets for
expressing uncertainty and preferences. This is carried out in the style of Savage, starting with a set of acts equipped with a complete preordering relation. Conditions on acts are given that imply
a possibilistic representation of the decision-maker uncertainty. In this framework, pessimistic (i.e., uncertainty-averse) as well as optimistic attitudes can be explicitly captured. The approach
thus proposes an operationally testable description of possibility theory.
, 2004
"... The paper focuses mainly on extraction of important topographic objects, like buildings and roads, that have received much attention the last decade. As main input data, aerial imagery is
considered, although other data, like from laser scanner, SAR and high-resolution satellite imagery, can be also ..."
Cited by 46 (0 self)
Add to MetaCart
The paper focuses mainly on extraction of important topographic objects, like buildings and roads, that have received much attention the last decade. As main input data, aerial imagery is considered,
although other data, like from laser scanner, SAR and high-resolution satellite imagery, can be also used. After a short review of recent image analysis trends, and strategy and overall system
aspects of knowledge-based image analysis, the paper focuses on aspects of knowledge that can be used for object extraction: types of knowledge, problems in using existing knowledge, knowledge
representation and management, current and possible use of knowledge, upgrading and augmenting of knowledge. Finally, an overview on commercial systems regarding automated object extraction and use
of a priori knowledge is given. In spite of many remaining unsolved problems and need for further research and development, use of knowledge and semi-automation are the only viable alternatives
towards development of useful object extraction systems, as some commercial systems on building extraction and 3D city modelling as well as advanced, practically oriented research have shown.
- Int. J. Approx. Reasoning , 2000
"... Belief functions, possibility measures and Choquet capacities of order 2, which are special kinds of coherent upper or lower probability, are amongst the most popular mathematical models for
uncertainty and partial ignorance. I give examples to show that these models are not sufficiently general to ..."
Cited by 40 (0 self)
Add to MetaCart
Belief functions, possibility measures and Choquet capacities of order 2, which are special kinds of coherent upper or lower probability, are amongst the most popular mathematical models for
uncertainty and partial ignorance. I give examples to show that these models are not sufficiently general to represent some common types of uncertainty. Coherent lower previsions and sets of
probability measures are considerably more general but they may not be sufficiently informative for some purposes. I discuss two other models for uncertainty, involving sets of desirable gambles and
partial preference orderings. These are more informative and more general than the previous models, and they may provide a suitable mathematical setting for a unified theory of imprecise probability.
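As a small illustration of the point about sets of probability measures being more general, here is a hedged Python sketch computing lower and upper probabilities of an event over a finite set of distributions; the distributions and the event are invented, and real credal sets are typically convex and infinite.

# Lower/upper probability of an event as the min/max of its probability over a
# (here finite) set of probability distributions on a common sample space.

def event_probability(p, event):
    return sum(p[x] for x in event)

def lower_probability(credal_set, event):
    return min(event_probability(p, event) for p in credal_set)

def upper_probability(credal_set, event):
    return max(event_probability(p, event) for p in credal_set)

credal_set = [
    {"a": 0.5, "b": 0.3, "c": 0.2},
    {"a": 0.4, "b": 0.4, "c": 0.2},
    {"a": 0.6, "b": 0.1, "c": 0.3},
]
event = {"a", "b"}
print(lower_probability(credal_set, event))  # 0.7
print(upper_probability(credal_set, event))  # 0.8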
- In Proceedings of the Second IEEE Conference on Fuzzy Systems , 1993
"... This paper is meant to survey the literature pertaining to this debate, and to try to overcome misunderstandings and to supply access to many basic references that have addressed the
"probability versus fuzzy set" challenge. This problem has not a single facet, as will be claimed here. Moreover it s ..."
Cited by 39 (5 self)
Add to MetaCart
This paper is meant to survey the literature pertaining to this debate, and to try to overcome misunderstandings and to supply access to many basic references that have addressed the "probability
versus fuzzy set" challenge. This problem has not a single facet, as will be claimed here. Moreover it seems that a lot of controversies might have been avoided if protagonists had been patient
enough to build a common language and to share their scientific backgrounds. The main points made here are as follows. i) Fuzzy set theory is a consistent body of mathematical tools. ii) Although
fuzzy sets and probability measures are distinct, several bridges relating them have been proposed that should reconcile opposite points of view ; especially possibility theory stands at the
cross-roads between fuzzy sets and probability theory. iii) Mathematical objects that behave like fuzzy sets exist in probability theory. It does not mean that fuzziness is reducible to randomness.
Indeed iv) there are ways of approaching fuzzy sets and possibility theory that owe nothing to probability theory. Interpretations of probability theory are multiple especially frequentist versus
subjectivist views (Fine [31]) ; several interpretations of fuzzy sets also exist. Some interpretations of fuzzy sets are in agreement with probability calculus and some are not. The paper is
structured as follows : first we address some classical misunderstandings between fuzzy sets and probabilities. They must be solved before any discussion can take place. Then we consider
probabilistic interpretations of membership functions, that may help in membership function assessment. We also point out nonprobabilistic interpretations of fuzzy sets. The next section examines the
literature on possibility-probability transformati...
- Computational Statistics & Data Analysis Vol , 2006
"... Numerical possibility distributions can encode special convex families of probability measures. The connection between possibility theory and probability theory is potentially fruitful in the
scope of statistical reasoning when uncertainty due to variability of observations should be distinguished f ..."
Cited by 26 (2 self)
Add to MetaCart
Numerical possibility distributions can encode special convex families of probability measures. The connection between possibility theory and probability theory is potentially fruitful in the scope
of statistical reasoning when uncertainty due to variability of observations should be distinguished from uncertainty due to incomplete information. This paper proposes an overview of numerical
possibility theory. Its aim is to show that some notions in statistics are naturally interpreted in the language of this theory. First, probabilistic inequalities (like Chebychev’s) offer a natural
setting for devising possibility distributions from poor probabilistic information. Moreover, likelihood functions obey the laws of possibility theory when no prior probability is available.
Possibility distributions also generalize the notion of confidence or prediction intervals, shedding some light on the role of the mode of asymmetric probability densities in the derivation of
maximally informative interval substitutes of probabilistic information. Finally, the simulation of fuzzy sets comes down to selecting a probabilistic representation of a possibility distribution,
which coincides with the Shapley value of the corresponding consonant capacity. This selection process is in agreement with Laplace indifference principle and is closely connected with the mean
interval of a fuzzy interval. It sheds light on the “defuzzification” process in fuzzy set theory and provides a natural definition of a subjective possibility distribution that sticks to the
Bayesian framework of exchangeable bets. Potential applications to risk assessment are pointed out.
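As a hedged sketch of the selection process mentioned at the end of this abstract (the Shapley value of the consonant capacity, i.e. the usual possibility-to-probability transformation), here is a small Python version; it assumes the possibility degrees are already sorted in decreasing order with the largest equal to 1, and the numbers are illustrative.

# Transform a possibility distribution (sorted decreasingly, poss[0] == 1) into the
# probability distribution p_i = sum over j >= i of (pi_j - pi_{j+1}) / j, with
# pi_{n+1} = 0 (indices 1-based in this comment). This is the Shapley value of the
# associated consonant capacity.

def possibility_to_probability(poss):
    n = len(poss)
    padded = list(poss) + [0.0]
    return [sum((padded[j] - padded[j + 1]) / (j + 1) for j in range(i, n))
            for i in range(n)]

poss = [1.0, 0.6, 0.6, 0.2]
print(possibility_to_probability(poss))
# approximately [0.583, 0.183, 0.183, 0.05], which sums to 1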
- Proc. 5th Int. Workshop on Artificial Intelligence and Statistics, 233--244, Fort Lauderdale , 1996
"... We introduce a method for inducing the structure of (causal) possibilistic networks from databases of sample cases. In comparison to the construction of Bayesian belief networks, the proposed
framework has some advantages, namely the explicit consideration of imprecise (setvalued) data, and the rea ..."
Cited by 25 (16 self)
Add to MetaCart
We introduce a method for inducing the structure of (causal) possibilistic networks from databases of sample cases. In comparison to the construction of Bayesian belief networks, the proposed
framework has some advantages, namely the explicit consideration of imprecise (setvalued) data, and the realization of a controlled form of information compression in order to increase the efficiency
of the learning strategy as well as approximate reasoning using local propagation techniques. Our learning method has been applied to reconstruct a non-singly connected network of 22 nodes and 24
arcs without the need of any a priori supplied node ordering. 14.1 Introduction Bayesian networks provide a well-founded normative framework for knowledge representation and reasoning with uncertain,
but precise data. Extending pure probabilistic settings to the treatment of imprecise (set-valued) information usually restricts the computational tractability of the corresponding inference
mechanisms. It is t...
- In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 1995
"... In this paper we propose a propositional temporal language based on fuzzy temporal constraints which turns out to be expressive enough for domains like many coming from medicine where knowledge
is of propositional nature and an explicit handling of time, imprecision and uncertainty are require ..."
Cited by 22 (2 self)
Add to MetaCart
In this paper we propose a propositional temporal language based on fuzzy temporal constraints which turns out to be expressive enough for domains like many coming from medicine where knowledge is of
propositional nature and an explicit handling of time, imprecision and uncertainty are required. The language is provided with a natural possibilistic semantics to account for the uncertainty issued
by the fuzziness of temporal constraints. We also present an inference system based on specific rules dealing with the temporal constraints and a general fuzzy modus ponens rule, whose behaviour is
shown to be sound.
entailment, an essential element of our inference system.
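For readers unfamiliar with the operator this abstract singles out, here is a one-function Python sketch of the Lukasiewicz implication on [0, 1]; this is only the standard textbook definition, shown to make the operator concrete, not the paper's inference system.

# Lukasiewicz implication: I_L(a, b) = min(1, 1 - a + b); it equals 1 exactly when a <= b.

def lukasiewicz_implication(a, b):
    return min(1.0, 1.0 - a + b)

print(lukasiewicz_implication(0.75, 0.5))  # 0.75 (the implication only partially holds)
print(lukasiewicz_implication(0.25, 0.5))  # 1.0 (holds fully since 0.25 <= 0.5)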
- in: Proc. 14th Conf. on Uncertainty in Artificial Intelligence, 1998
"... This paper presents an axiomatic framework for qualitative decision under uncertainty in a finite setting. The corresponding utility is expressed by a sup-min expression, called Sugeno (or
fuzzy) integral. Technically speaking, Sugeno integral is a median, which is indeed a qualitative counter ..."
Cited by 22 (11 self)
Add to MetaCart
This paper presents an axiomatic framework for qualitative decision under uncertainty in a finite setting. The corresponding utility is expressed by a sup-min expression, called Sugeno (or fuzzy)
integral. Technically speaking, Sugeno integral is a median, which is indeed a qualitative counterpart to the averaging operation underlying expected utility. The axiomatic justification of Sugeno
integral-based utility is expressed in terms of preference between acts as in Savage decision theory. Pessimistic and optimistic qualitative utilities, based on necessity and possibility measures,
previously introduced by two of the authors, can be retrieved in this setting by adding appropriate axioms.
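To make the sup-min expression concrete, here is a hedged Python sketch of a discrete Sugeno integral; the finite setting, the frozenset encoding of the fuzzy measure, and all numbers are my own illustrative choices, not taken from the paper.

# Discrete Sugeno integral: max over elements x (taken by increasing f) of
# min(f(x), mu of the set of elements scoring at least f(x)).
# mu is given here only on the nested level sets actually needed.

def sugeno_integral(f, mu):
    xs = sorted(f, key=f.get)                  # elements by increasing value
    best = 0.0
    for i, x in enumerate(xs):
        level_set = frozenset(xs[i:])          # elements with f >= f(x)
        best = max(best, min(f[x], mu[level_set]))
    return best

f = {"a": 0.2, "b": 0.7, "c": 1.0}
mu = {
    frozenset({"a", "b", "c"}): 1.0,
    frozenset({"b", "c"}): 0.6,
    frozenset({"c"}): 0.3,
}
print(sugeno_integral(f, mu))  # 0.6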
- Fuzzy measures and integrals , 2000
"... In this paper, we introduce the Choquet integral as a general tool for dealing with multiple criteria decision making. After a theoretical exposition giving the fundamental basis of the
methodology, practical problems are addressed, in particular the problem of determining the fuzzy measure. We give ..."
Cited by 22 (3 self)
Add to MetaCart
In this paper, we introduce the Choquet integral as a general tool for dealing with multiple criteria decision making. After a theoretical exposition giving the fundamental basis of the methodology,
practical problems are addressed, in particular the problem of determining the fuzzy measure. We give an example of application, with two different approaches, together with their comparison. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=207455","timestamp":"2014-04-19T06:21:25Z","content_type":null,"content_length":"40496","record_id":"<urn:uuid:e1cc4d7f-bf17-4a24-8c48-f7f0788a3cf9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
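Following on from this last abstract, here is a hedged Python sketch of a discrete Choquet integral used as a multi-criteria aggregation; the criteria names, scores, and fuzzy-measure values are invented for illustration, and the measure is given only on the nested sets the computation touches.

# Discrete Choquet integral: sum over sorted scores of
# (x_(i) - x_(i-1)) * mu({criteria whose score is at least x_(i)}).

def choquet_integral(x, mu):
    crits = sorted(x, key=x.get)               # criteria by increasing score
    total, prev = 0.0, 0.0
    for i, c in enumerate(crits):
        level_set = frozenset(crits[i:])       # criteria scoring at least x[c]
        total += (x[c] - prev) * mu[level_set]
        prev = x[c]
    return total

x = {"price": 0.4, "quality": 0.8, "delivery": 0.6}
mu = {
    frozenset({"price", "quality", "delivery"}): 1.0,
    frozenset({"quality", "delivery"}): 0.7,
    frozenset({"quality"}): 0.3,
}
print(choquet_integral(x, mu))  # 0.4*1.0 + 0.2*0.7 + 0.2*0.3, approximately 0.6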