How to test XCOM “dice rolls” for fairness
December 11th, 2012
XCOM: Enemy Unknown is a turn-based video game where the player chooses among actions (for example, shooting an alien) that are labeled with a declared probability of success.
Image copyright Firaxis Games
A lot of gamers, after missing an 80% chance of success shot, start asking if the game’s pseudo random number generator is fair. Is the game really rolling the dice as stated, or is it cheating? Of
course the matching question is: are player memories at all fair; would they remember the other 4 out of 5 times they made such a shot?
This article is intended as an introduction to the methods you would use to test such a question (be it in a video game, in science, or in a business application such as measuring advertisement
conversion). There are already some interesting articles on collecting and analyzing XCOM data and finding and characterizing the actual pseudo random generator code in the game, and discussing the
importance of repeatable pseudo-random results. But we want to add a discussion pointed a bit more at analysis technique in general. We emphasize methods that are efficient in their use of data. This
is a statistical term meaning that a maximal amount of learning is gained from the data. In particular we do not recommend data binning as a first choice for analysis, as it cuts down on sample size
and thus is not the most efficient estimation technique. In this article we are going to ignore issues that are unique to pseudo random number generators, such as “save scumming” and solving for the
hidden generator state.
Save scumming is exploiting the fact that the sequence of coin flips is deterministic: by re-starting from a save and spending a bad flip on an event we don’t care about, the player can move a good
flip to an event they do care about. Statisticians are fairly clever about avoiding this by ensuring that separate processes use separate random number sources, so a change in the behavior of one process
can’t introduce a change in the behavior of another by changing which random numbers the second process sees.
Solving for the hidden state of the generator is when, after watching a sequence of outputs of the generator, you collect enough information to efficiently recover the complete state of the
generator. So no coin flip from that point forward will ever be surprising. For example see “Reconstructing Truncated Integer Variables Satisfying Linear Congruences”, Frieze, Hastad, Kannan,
Lagarias, Shamir, SIAM J. Comput., Vol. 17, No. 2, April 1988. These are indeed powerful and interesting questions, but are too specific to computer games and simulations to apply to data that comes
from real world situations (such as advertisement conversion rates). So we will leave these to the experts.
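The flavor of this attack can be shown with a toy sketch (a tiny linear congruential generator with known modulus and untruncated outputs, which is much easier than the truncated case the paper above handles; all constants here are my own illustrative choices): after observing three consecutive outputs you can solve for the multiplier and increment and predict every later roll.

```r
# Toy LCG: x[n+1] = (mult*x[n] + inc) %% m, with the modulus m assumed known (and prime).
m <- 101; mult <- 17; inc <- 43          # mult and inc are the "hidden state" to recover
lcgNext <- function(x) (mult*x + inc) %% m
x1 <- lcgNext(7); x2 <- lcgNext(x1); x3 <- lcgNext(x2)   # three observed outputs

# Modular inverse by brute force (fine for a small prime m).
modInverse <- function(b, m) which(((1:(m-1))*b) %% m == 1)
multRec <- (((x3 - x2) %% m) * modInverse((x2 - x1) %% m, m)) %% m
incRec  <- (x2 - multRec*x1) %% m
stopifnot(multRec == mult, incRec == inc)  # hidden state fully recovered
(multRec*x3 + incRec) %% m                 # predicts the next output exactly
```

From this point on every "dice roll" of the toy generator is known in advance, which is what makes such attacks uninteresting for real-world data but devastating for simulations.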
A proper analysis needs at least: a goal, a method, and data. Our goal is to see if there is any simple systematic net bias in the XCOM dice rolls. We are not testing for a bias varying in a clever way
depending on situation or history. In particular we want to see if we are missing “near sure thing” shots more than we should (so we want to know if the bias varies as the reported probability of
success changes). Our method will be to test if observed summary statistics have surprisingly unlikely values. We collected data on one partial “classic ironman” run of XCOM: Enemy Unknown on an XBox
360. The data is about 250 rows, covers all of the first 20 missions, and can be found in TSV format here: XCOMEUstats.txt. We will use R to analyze the data.
A foremost question for the analyst is: is the data appropriate for the questions I need to answer? For example all of this data is from a single game, so it is completely inappropriate for testing
if there is any per-game bias (some state set that makes some play throughs “net lucky” and others “net unlucky”). There are, however, around 250 usable rows in the data set: so the data should be
sufficient to test if there is a large unchanging bias (that is assumed to not depend on the play through, game state or history). To test for smaller biases or more complicated theories you would
need more data and to record more facts. As an aside: notice that I do not talk about a treatment and a control set. I have found that slavish experimental set-up (I won’t call it design) that always
appeals to “treatment and control” is absolutely no substitute for actually taking the responsibility of thinking through whether your data actually supports the type of analysis you are attempting.
Just because you “have a control” does not mean you have a usable experimental design, and many legitimate experiments do not have a useful group labeled as “control.”
To make the data collection simple and reliable I recorded only a few facts per row:
• mission number: this is which mission we are on
• shot number: this was easier to track than player turn, as it is the data-set row number
• hit probability: the game-reported chance of success of the shot, this is the quantity we are trying to validate, reported as a percentage from 0 to 100
• hit: the actual outcome of the shot, 1 if we hit 0 if we missed.
• grenade: 1 if the action was “throws grenade,” blank otherwise (could be used to condition on these rows from later analysis).
• rocket: 1 if the action was “rocket firing,” blank otherwise (could be used to condition on these rows from later analysis).
• headshot: 1 if the action was a “sniper headshot,” blank otherwise (could be used to condition on these rows from later analysis).
• weapon type: what type of weapon was used; right now always one of “projectile”, “laser” or “arc thrower” (I have not yet unlocked plasma weapons).
The initial goal was to get about 100 observations per major weapon type (arc thrower is a specialist weapon, so it would take a very long time to collect a lot of data on it) from about 10 missions.
No analysis was performed prior to stopping at ten missions of data collected. This is a simple (but not entirely necessary) method of avoiding a “stopping bias” as we would expect even a fair coin
sequence to appear somewhat unfair on some prefixes (see, for example, the law of the iterated logarithm). So an inspection that played “until something looked off” would have a large bias for false
alarms (this is in fact, unfortunately, how most commercial research is done: see Why Most Published Research Findings Are False). We will mention the nature of the false alarm effect when we discuss
significance. Like “control groups,” this stopping bias isn’t something mystical that can only be avoided through certain rituals; it is a real and measurable effect that you need to account for.
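The stopping-bias effect is easy to demonstrate by simulation. The sketch below (illustrative only; the 200-flip horizon, the start-peeking-at-flip-20 rule, and the 1.96 threshold are my own choices, not from the data collection above) repeatedly flips a fair coin and checks whether the running total ever looks “significant” at the nominal 0.05 level while we watch:

```r
set.seed(2012)
nSim <- 2000; nFlips <- 200
peekedFalseAlarm <- mean(replicate(nSim, {
   flips <- rbinom(nFlips, 1, 0.5)
   # running z-score of total heads versus a fair coin after each flip
   z <- abs(cumsum(flips - 0.5))/sqrt(0.25*(1:nFlips))
   any(z[20:nFlips] > 1.96)   # did the run *ever* look "significant" while peeking?
}))
peekedFalseAlarm   # well above the nominal 0.05 false alarm rate
```

This is exactly why committing to a stopping rule before looking at the data matters: a fair coin watched continuously will “look unfair” far more than 5% of the time.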
First we load the data into R:
d <- read.table('http://www.win-vector.com/dfiles/XCOM/XCOMEUstats.txt',
   header=TRUE, sep='\t')  # header/sep arguments assumed; the original listing was cut off here
d[is.na(d)] <- 0 # replace all NA (which came from blanks) with 0
The basic plan for analysis is: choose a summary statistic and compute the significance of the value you observe for that statistic. For our first summary statistic we just use “total number of hits,”
which turns out to be 191. In our data set “hit” is a variable that is written as 1 if we hit and 0 if we missed. We chose this representation because if hit.probability were the actual correct
percent chance of hitting then we should have:
sum(d$hit) nearly equals sum(d$hit.probability/100.0).
That is because a probability of a hit is just the expected value of the process that gives you 1 point for a hit and 0 for a miss plus the remarkable fact that expected values always add. The fact
that expected values always add is both remarkable and an immediate consequence of the definition of expected value (“The Probabilistic Method” by Noga Alon and Joel H. Spencer calls this the
“Linearity of Expectation” and devotes an entire chapter to clever uses of this fact). So what is the sum of reported hit probability in our data set?
Which turns out to be 179.73. So in my single game I actually hit a bit more often (191 times) than the game claimed I would (179.73 times). A quick question is: could this be due to rounding or
truncation? We check the difference in percentage points:
(sum(100*d$hit) - sum(d$hit.probability))/length(d$hit)
Which is 4.49, too large for rounding (which would be at most +/-1 and hopefully +/-0.5 on average).
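The additivity claim above is easy to check by simulation (with made-up probabilities, not the game data): the realized hit count should land near the sum of the per-shot probabilities.

```r
set.seed(1)
p <- runif(250)              # 250 made-up per-shot hit probabilities
hits <- rbinom(250, 1, p)    # one simulated 0/1 outcome per shot
sum(hits)                    # realized hit count
sum(p)                       # expected hit count: the two sums should be close
```

The typical gap between the two sums is on the order of sqrt(sum(p*(1-p))), which is the yardstick the significance calculation below formalizes.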
This brings us to significance. What we want to know is: is this difference of about 11 hits large or small? We in fact want to know if it was large or small in the special sense: was such a sum likely
or unlikely to happen (and this is significance). The question is usually formed as follows: if I assume exactly what I am trying to disprove (that the game is fair) how often when I played would I
see a difference (from an assumed fair game, also called the null hypothesis) as large as what I saw? If what I saw is rare (or hard to produce from a fair game), then I may reject the
null hypothesis and say I don’t believe my original assumption that the game is fair (which was my intent in setting up the experiment in the first place). Now you can never “prove the null
hypothesis” with this sort of experimental design (you can only reject the null hypothesis or fail to reject the null hypothesis). If the null hypothesis were in fact true, every time you collected
more data you would get another equivocal result where you can’t quite reject the null hypothesis yet. “But more data may help.” However, for a true null, each time you collect more data you will
likely get yet another non-definitive result. So the data scientist will have to use judgment and decide where to stop at some point.
This standard interpretation of significance is why you don’t want to allow “venue shopping” or “data scumming.” Suppose I secretly played 30 different games of XCOM: Enemy Unknown and then showed
you only the one play-through where “wow, that set of coin-flips was only 1 in 20 likely- the game must be unfair.” If you know only about the game I showed you the claim is you are seeing something
that is only 1/20 likely under the null hypothesis (so a p-value of 0.05) and perhaps decent evidence against the null hypothesis (that the game is fair). However if you are then informed I had to
play 30 games to find the bad example (and I only showed you the worst) the response would be: of course in 30 plays you would expect to see something that only happens one time in twenty by random
chance, as you took more than 20 trials. Of course data scientists always perform more than one analysis. If it was always a priori obvious what the exact right analysis would be, the job would be a
lot easier. The saving fact is that we can use a very crude significance correction: if we ran k experiments and the best one had a significance of p (small being more interesting) then the
significance of the “cherry pick adjusted” experiment is no more than k*p. So if we run 100 experiments and the best has a p-value of 0.0001, then even after the cherry picking correction we know we
have a significance no worse than 100*0.0001 = 0.01, which is still good. The second saving grace is that p-values decrease rapidly when you add more data. If we know we want to try k experiments, then
collecting a log(k) multiple more data is enough to defend against data scumming or venue shopping. The thing that is expensive in data is attempting to measure smaller clinical effect sizes. If you
halve what you think the size of the effect of some non-existent effect (like ESP) you are trying to measure (“oops I didn’t say I had a 5% advantage guessing wavy cards, I meant a 2.5% advantage”)
you need to quadruple the amount of data collected. The effect size you are trying to measure enters your required sample size as an inverse square. This is why it is easy for somebody defending a
non-effect to run a cooperating data scientist ragged by revising their claimed expectations.
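The crude k*p correction described above is just the Bonferroni bound, which base R exposes directly (the p-values below are made up for illustration):

```r
pvals <- c(0.0001, 0.03, 0.2, 0.7)    # hypothetical p-values from k = 4 experiments
k <- length(pvals)
min(1, k*min(pvals))                  # crude bound on the cherry-picked "best" result
p.adjust(pvals, method="bonferroni")  # the same k*p correction applied to every p-value
```

`p.adjust` also offers less conservative corrections (e.g. `method="BH"`), but the Bonferroni form is the one that matches the k*p argument in the text.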
Back to our XCOM analysis. We said the strategy is to propose a summary and compute its significance. There are a few great ways to do this: empirical re-sampling, permutation tests and simulation.
We will use simulation. We will write new code to generate hit outcomes directly from the published probabilities:
library(ggplot2)  # needed for the plots below
simulateC <- function(x) { # x = probabilities
   simHits <- ifelse(runif(length(x))<=x,1,0)
   sum(simHits)  # total simulated hits; the original listing was cut off here
}
drawsC <- sapply(1:10000,function(x) simulateC(d$hit.probability/100.0))
sC <- sum(d$hit)
ggplot() + geom_histogram(aes(x=drawsC)) + geom_vline(xintercept=sC)
The above R-code runs the simulation 10,000 times and plots the histogram of how often different numbers of hits show up. Our game experience is added to the graph as a vertical line. The graph is
given below:
In the above graph the mass to the right of the vertical line is how often a random re-simulation saw a count of at least as many hits as us. This is called the “one sided tail” and if there is a lot of
mass in this tail then we were not that unlikely (not very significant) and if there is not much mass in this tail our measurement was very rare and very significant. The R commands to compute the
mass in the tail are easy:
eC <- ecdf(drawsC)
1 - eC(sC)
This turns out to be 2.78%. The R-command “ecdf()” returns a function that computes the amount of mass below a given threshold. So eC(S) gives us the amount of mass not more than S (a “left tail” if
S is small), 1-eC(L) gives us the right tail and eC(S) + 1 - eC(L) gives us the mass in both tails (or the two-sided tail).
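The ecdf() bookkeeping can be sketched self-contained (using a made-up stand-in distribution rather than the actual simulation draws; the 180/7/191/11 values are chosen to echo the numbers above, not computed from the data):

```r
set.seed(5)
draws <- rnorm(10000, mean=180, sd=7)   # stand-in for the simulated hit counts
e <- ecdf(draws)
rightTail <- 1 - e(191)                 # mass above 191: the one-sided tail
twoSided  <- e(180 - 11) + (1 - e(180 + 11))  # mass in both tails, 11 from the mean
```

By construction the two-sided tail is always at least as large as the one-sided tail, which is why two-sided tests are the more conservative choice.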
Note: trusting the simulation significance results means you are trusting the pseudo random generator used to produce them (in this case R’s generator). The only ways to avoid trusting your test
pseudo random generator are to use a trusted true-random entropy source or to deliberately pick a test where you know the exact expected theoretical shape of the cumulative distribution. Statisticians
are the masters of exact theoretical tests and usually pick from a very limited set of summary statistics (counts, means, standard deviations) so they can apply known theoretical test distributions
(t-tests, f-tests and so on).
Our p-value of 0.0278 is considered significant (the usual rule of thumb is that p ≤ 0.05 is considered significant). Notice we are using an empirical p-value (re-simulating generation of hits from
the assumed distribution) instead of a parametric p-value (assuming a distribution of the outcomes and using the theoretical mean and a theoretical variance). Empirical p-values are much better to
explain (they are a sampling of what would exactly happen if you repeated the null experiment again and again) and so easy to compute that there is really no reason to use the distributional methods
(Normal, Student-t, chi-Sq or so on) until you are repeating the calculation very many times. It saves one level of explanations to directly estimate the significance through re-simulation than to
bring in “the standard approximations” (and their attendant assumptions).
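To see how close the two approaches land, here is a sketch comparing an empirical tail to a normal approximation for a sum of independent, unequal-probability coin flips (all numbers are made up; this is not the game data):

```r
set.seed(7)
p <- runif(250, 0.3, 0.95)   # made-up stand-in shot probabilities
mu <- sum(p); sigma <- sqrt(sum(p*(1-p)))   # theoretical mean and sd of the hit count
sims <- replicate(10000, sum(rbinom(length(p), 1, p)))
obs <- ceiling(mu + 2*sigma)  # a hypothetical observed hit count, two sigmas high
empP  <- mean(sims >= obs)                                      # empirical right tail
normP <- pnorm(obs - 0.5, mean=mu, sd=sigma, lower.tail=FALSE)  # normal approx., continuity corrected
```

For counts this size the two tails agree to a couple of percentage points; the empirical route just skips the approximation step and its assumptions.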
One important consideration is that we didn’t specify before running this experiment that we thought we would experience above-average luck (in fact we came in thinking we were getting ripped off, so
we were looking for a low hit count). So we should be looking either at “two sided tails” (accept mass from both tails of the distribution, measuring how far we were from the mean in absolute
value terms) or at least double our p-value to 0.0556 to respect that we implicitly ran two experiments. The p-value for the two sided tail is gotten as follows:
expectation <- sum(d$hit.probability/100.0)
diff <- abs(sum(d$hit)-expectation)
eC(expectation-diff) + (1-eC(expectation+diff))
Which is 0.0646 (or even worse than the 2*p correction). What this means is that: if we had started the experiment with the hypothesis that XCOM was under-reporting hit probabilities (or equivalently
cheating in our favor) we had collected just enough data to reject the null hypothesis (that XCOM is perfectly fair) according to standard clinical standards (which I have never liked, as they are
far too lenient). However we started with the hypothesis that XCOM was over-reporting hit probabilities (or cheating in its own favor) and switched hypothesis when we saw our hit count was high.
Under this situation we did not collect enough data to reject the null hypothesis, as the 2-sided p-value is 0.0646 and the corrected 1-sided p-value becomes 0.0556 (both above the middling 0.05
standard). We would not expect to have to double our data to get better p-values (as p-values fall fast when you add data), but if we were to continue to collect data we should know our hypothesis
has not been taken from the data (so we should probably use the 2-sided p-value and still multiply by an additional 2 as we have already run a few experiments or done some venue shopping on this
data). Also, remember if XCOM is fair all experiments will look equivocal: they fail to prove it is unfair but do not quite look fair. So really we have seen nothing to be suspicious about at this point. It
is a strange but true fact that statistics is an intentional science: what you know and how much of the data you have snooped really does affect the actual objective significances you experience. If
you fail to put in some sort of compensation for how many experiments you have run and how often you switched measurement or hypothesis you will mis-report the ideal theoretical significance of a
single clean room experiment (that you really did not run) as the significance of the entangled combination of measurements you actually did implement.
Part of the reason we are being so cagey about accepting differences (but you always should be so) is that we strongly suspect (due to the forensic science study of Yawning Angel) that the generator is in
fact fair. At least it is fair in a total sense (we are not testing for state-driven cheating or streaks).
Another summary we could look at (instead of total counts) is total surprise. This is a metric more sensitive to effects like “I swear I miss 80% shots half the time, how is that fair?” The surprise
of an outcome is the negative of the logarithm (base 2) of the probability of the given outcome. Hitting an 80% shot has low surprise: -log_2(0.8) = 0.32 whereas missing an 80% shot has a high
surprise -log_2(1-0.8) = 2.32. The total surprise for the shot sequence I observed is given by:
surprise <- function(x,o) { # x = probabilities, o = actual outcomes
   # total surprise in bits; body reconstructed from the definition above (the original listing was cut off)
   sum(ifelse(o>0.5, -log(x,base=2), -log(1-x,base=2)))
}
s <- surprise(d$hit.probability/100.0,d$hit)
This turns out to be 153.7. So we have a new summary statistic, we now need to know if it is significantly large or small. The theoretical expected surprise of a sequence of probabilities is a
quantity called the entropy and this is given by:
entropy <- function(x) { # x = probabilities
   sum(ifelse(x<=0,0.0,ifelse(x>=1,0,-x*log(x,base=2) - (1-x)*log(1-x,base=2))))
}
The information theoretic entropy is 164.8. So our experienced surprise is in fact lower than expected; outcomes tended to go the majority direction slightly more often than expected (not less, as
missing a lot of near sure things would entail). We can again use empirical simulation to get the distribution of expected surprise values and estimate the significance:
simulate <- function(x) { # x = probabilities
   simHits <- ifelse(runif(length(x))<=x,1,0)
   surprise(x,simHits)  # surprise of the simulated outcomes; the original listing was cut off here
}
draws <- sapply(1:10000,function(x) simulate(d$hit.probability/100.0))
ggplot() + geom_density(aes(x=draws),adjust=0.5) + geom_vline(xintercept=s)
Again we see that we are not a very rare event in terms of the possible distributions of surprise:
In fact even the one-sided p-value is quite large (and poor) at 0.1 (e <- ecdf(draws); e(s)), let alone the more appropriate two-sided tail probability.
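The per-shot surprise and entropy numbers quoted above are easy to check directly; this self-contained sketch uses my own helper names (surpriseBits, entropyBits), not the article’s functions:

```r
surpriseBits <- function(p, hit) ifelse(hit==1, -log2(p), -log2(1-p))
entropyBits  <- function(p) sum(ifelse(p<=0 | p>=1, 0, -p*log2(p) - (1-p)*log2(1-p)))

surpriseBits(0.8, 1)     # hitting an 80% shot: about 0.32 bits
surpriseBits(0.8, 0)     # missing an 80% shot: about 2.32 bits
entropyBits(0.8)         # expected surprise of one 80% shot
entropyBits(c(0.5, 0.8)) # expected surprises add across independent shots
```

Note that entropyBits(0.8) is exactly 0.8 times the hit surprise plus 0.2 times the miss surprise, which is the “expected surprise” definition in action.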
An additional thing to look for is: can we build a useful probability re-mapping table for the reported probabilities? We know the totals are mostly right and the outcomes of near-certain and rare
events are largely right. Could there be some band of predictions that is biased (say the 70% to 80% range)? This is also easy to do in R:
ggplot(data=d,aes(x=hit.probability/100.0,y=hit)) +
geom_point(size=5,alpha=0.5,position=position_jitter(w = 0.01, h = 0.01)) +
geom_smooth() + geom_abline(slope=1,intercept=0) + theme(aspect.ratio=1) +
scale_x_continuous(limits=c(0,1)) + scale_y_continuous(limits=c(0,1))
This produces the following figure:
The x-axis is the game-reported hit probability, the y-axis is the observed probabilities (always either 0 or 1 as each hit either happens or does not). Each black circle represents one of our
recorded observations. The blue line with error-band is the spline-fit relation. It is estimating the ratio of hits to misses as a function of the stated predicted hit probability. Early on the blue
curve is low because most black dots are at y=0; for higher x the curve pulls up in proportion to the fraction of points at y=1. Notice how close the blue curve is to the line y=x; the error band
hardly pulls off the line except in the 0.5 to 0.7 region. So maybe mid-values are slightly under-predicted, but we don’t have enough data to say so (and more data would probably just show a
new tighter correspondence instead of confirming this divergence). A similar plot can be made using the GAM package, but it is harder to get the error bars.
This graph, which is the kind of thing the data scientist should look at, points out yet another data deficiency in our study. The distribution of shot probabilities attempted is what was best for
play, not necessarily best for analysis (a property of all real data when you don’t get complete control over the experimental design). The distribution (again represented as a density) of the shots I
attempted is given below:
(for how to read density plots see My Favorite Graphs).
The core purpose of the article hasn’t so much been the analysis of XCOM itself, but to show how to analyze this type of data. We have emphasized methods that can deal with many different
probabilities at the same time (as opposed to binning) in the interest of “statistical efficiency.” That is: to get the most results out of what little data we have. This is always important when you
are producing annotated data, which is always going to be per-unit expensive, even in this “age of big data.” Finding usable relations and biases is the exciting part of data science, but one of the
responsibilities of the data scientist is protecting the rest of their organization from the ruinous effect of pursuing spurious relations. You really don’t want to report a relation where there was none.
1. December 11th, 2012 at 10:35 | #1
Saw this on HN browsing the new queue. Very well-crafted article; I might end up using it in my classroom some time in the future when my senior math classes talk about probability.
If you ever want to do a follow-up, it might be worthwhile to look into Nintendo’s Fire Emblem series of strategy games. Since they focus on knights and whatnot, characters counterattack when
attacked. Before you confirm your action, you’re presented with the probabilities that your character and the enemy character will land a hit. For both the “good guys” and the “bad guys”, the
“percentages” are incorrect. High percentages are much more likely to land, low ones much less so.
(The algorithm for how they do this is easy, but I thought you might want the fun of working it out on your own. If you don’t care to do so, feel free to shoot me an email.)
I can only imagine this is to make the game *feel* more fair to the statistically challenged. “Man, I missed my 67% chance, but the computer hit on its 33%? This game is so cheap!” In Fire
Emblem’s math, that 11% chance would instead happen 4.75% of the time. Missing a 90% then nailing a 10% wouldn’t be a 1 in 100 shot; instead, it’s 1 in 2,500.
2. December 11th, 2012 at 11:20 | #2
@Paul F
Thanks, I was hoping that XCOM was cheating, because there is a bunch of interesting questions about engineering player experience that you could then think about. I might look at this as an
excuse to try Fire Emblem (been meaning to, I liked Advance Wars).
3. December 11th, 2012 at 13:10 | #3
John, this is quite cool. And the whole time, I kept thinking: “John found a way to buy video games as a business expense.” Awesome.
As far as cheating games go, I recall hearing a talk at GDC a few years ago about the console game “Civilization Revolutions”… And how focus group players *hated* how often they lost battles
where they had an 80% success probability. And so the game designers changed the rules, making an 80% success an almost sure thing, where a loss happens about 1 out of 20 times, rather than 1 out
of 5 times.
Happy players tend to spend money on games, after all, so it became far more important to satisfy the players than the statisticians. ;)
4. December 11th, 2012 at 13:57 | #4
Oh cool, someone else that uses R and ggplot2. Nice writeup.
There’s a few things I explicitly did not address in my analysis (primarily since I was trying to keep it accessible to people who aren’t into math or theoretical computer science), mainly
centered around the specific algorithm they used.
The two main things I omitted are:
1) The algorithm used has serial correlation problems.
2) Because they used a power of 2 coefficient for m, the lower bits of the PRNG output have rather poor entropy.
I don’t particularly believe the first thing is a problem because the actual PRNG output is rather opaque. As long as the generator is sufficiently well equidistributed, and is not overly
streaky, I believe it is suitable for something like this. While weak, the XCOM PRNG does satisfy those criteria (though more time spent with dieharder or TestU01 is needed to conclusively prove
the “not overly streaky” part, though in my “sample output” it is passing diehard_operm5).
The 2nd point I suspect is one of implementation convenience. It would be easy to correct (at a minimum, “foo.i = (foo.i >> 9) & 0x7FFFFF | 0x3F800000;” reduces the impact of their power of 2 m
choice), but at that point it’s also just as easy to use a PRNG that’s not the minimum standard in terms of quality (a combined xorshift/Weyl generator is quite trivial to implement). As it
stands, the deficiency is still something that would be near impossible for an end user to detect.
It was interesting to see how other people approach the problem. Due to past experiences I tend to reach for my programmer tools rather quickly when I want to figure out how things work.
5. December 11th, 2012 at 14:10 | #5
@Yawning Angel
Loved your article, it and your additional points are both excellent. Similar issue with this article: I had to say I am testing only gross totals (counts or surprise), not serial correlation.
You can test for serial correlation- but as you stated we already know the PRNG has some known problems there. Finally I explicitly call your work science, whereas mine was just statistics.
6. December 11th, 2012 at 14:39 | #6
@Yawning Angel
Okay, I have the itch. Without a *lot* of data (which you can produce since you tapped into the XCOM PRNG) it is hard to track the serial correlations in a simple principled manner (especially
with the varying level of censorship caused by the hit probability varying). In principle with access to the PRNG you can record streaks or every observed outcome sequence of length-k and get at
the issue. What I would suggest working from the outside (where it is expensive to produce data and the data is censored) is to upgrade the standard time-series methods used to detect serial
correlation to lean on Tobit regression (to deal with the fact you don’t see what was rolled, just if it was under the current hit threshold or not). Looks like the R package for that is this
one: http://cran.r-project.org/web/packages/censReg/vignettes/censReg.pdf .
7. December 11th, 2012 at 16:00 | #7
Unfortunately I have limited time I can dedicate to this, but since you’re interested, I ported the PRNG algorithm to R for you so you can generate as many data points as you want.
You’ll need bitops off CRAN, and the code is not spectacular but it does work. It reseeds the PRNG each time you call the routine, but the function returns a vector for a reason. It should be
trivial to modify if you need it to behave differently.
8. December 11th, 2012 at 21:22 | #8
@Yawning Angel
Having direct access to the pseudo random number generator makes things a lot easier. First I can insist that all simulated shots have the same hit percentage (which makes things much easier) and
second I can generate a lot more data. As far as I can tell this is a weak serial correlation, but it doesn’t stick out like a sore thumb for my simplistic test until I got to around 50,000 data
points. Here is the R-code to find the serial correlation:
> probs <- xcomPrng(50000);
> dHit <- ifelse(probs<=0.5,1,0);
> tab <- table(dHit[1:(length(dHit)-1)],dHit[2:length(dHit)]);
> fisher.test(tab);
Fisher's Exact Test for Count Data
data: tab
p-value = 0.001404
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.9116370 0.9781879
sample estimates:
odds ratio
> tab
The table at the end is called a contingency table, and Fisher’s test tells us the significance of how far the counts are from independence. You read the table row by column: so
row 0 column 1 is how many hits were followed by a miss (later entries in the hit vector are newer). Notice there are too few misses followed by misses. The damning evidence is the low p-value.
The same code run with R’s own PRNG looks okay:
> probs <- runif(50000);
> dHit <- ifelse(probs<=0.5,1,0);
> tab <- table(dHit[1:(length(dHit)-1)],dHit[2:length(dHit)]);
> fisher.test(tab);
Fisher's Exact Test for Count Data
data: tab
p-value = 0.9287
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.9638094 1.0341483
sample estimates:
odds ratio
> tab
So if your port of the XCOM PRNG to R is correct then we see the XCOM PRNG (as it is used with the re-seeding and so on) is of lower quality than R’s PRNG. However, I couldn’t find a problem
until I looked at a large amount of data- so I am not sure if players will see this or not (I may not have tried clever enough tests).
I agree with the conclusions of your article: the XCOM PRNG is not great, but I doubt the deficiencies are actually player visible.
And testing the sequence of the last few outcomes versus a most recent outcome:
> probs <- xcomPrng(50000);
> dHit <- ifelse(probs<=0.5,1,0);
> tab <- table(dHit[4:length(dHit)],
> tab
> fisher.test(tab,simulate.p.value=T)
Fisher's Exact Test for Count Data with
simulated p-value (based on 2000 replicates)
data: tab
p-value = 0.0004998
alternative hypothesis: two.sided
Note: these longer tests I am running here depend more and more on R’s pseudo random source being high quality. In reality we are testing whether two pseudo random sources have similar behavior. There
are ways we can fail to achieve a meaningful result. Two obvious bad possibilities are as follows. They could have the same behavior and both be wrong (be related bad implementations), which we would
falsely record as “both good.” Or the R pseudo random generator could be worse than XCOM’s, causing us to misattribute a difference in behavior to an XCOM fault. At some point we would have to pick
a summary that we knew the theoretical distribution of, and thus avoid introducing a second pseudo random generator when performing tests.
9. December 11th, 2012 at 23:50 | #9
Thanks for the analysis. Since you’re only calling my routine once per test, the way I chose to seed the LCG is identical to a single game of XCOM (seed once at the start of the routine, do not
re-seed), so the only issues would be bugs in my porting. The code is straightforward enough that I doubt there are any.
Unless you have any objections I’ll link to this as further reading from my initial writeup, since including the comments it covers most of what I wanted to look at if I ever revisited it.
10. December 11th, 2012 at 23:52 | #10
Nice article!
I don’t understand this: “In fact even the one-sided p-value is quite large (and poor) at 0.01.”
Why is 0.01 considered large in this case? I’m wondering if it’s a typo, since when eyeballing the graph for surprise, the area under the curve below the line looks a lot larger.
11. December 12th, 2012 at 07:14 | #11
@Brian Slesinsky
Typo on my part, the p is 0.1 which is “large” as it is greater than the traditional 0.05. thanks!
12. December 12th, 2012 at 07:14 | #12
@Yawning Angel
Wow, yes thanks!
13. December 22nd, 2012 at 12:54 | #13
Great article!
For games like this, I’ve always assumed that the display simply lies: if the displayed figure is 80%, the actual figure used is something else. I’m not sure of an easy way to test this, even
with the large amounts of raw data here.
Then again, if the PRNG fails the streakiness test, then this doesn’t matter much.
14. December 24th, 2012 at 12:31 | #14
And the conclusion of the XCOM: EU Xbox 360 Classic/Ironman run that donated this data. The game doesn’t have to do anything as subtle as mis-reporting probabilities to cheat.
I finally won on Classic/Ironman and right after it played the victory cut scenes it switched over to the “too many countries have left the council” cut scenes (which was not the case, I had lost
only Egypt and all other countries were calm at 1 terror bar each) and scored the game as a defeat. The ironman save is right before this, so I watch this unfold again and again, but I cannot
change the scored outcome.
Still, overall I give the game an “A.” It had its problems, may be simplified from its ancestors, and may not be to everybody’s taste. But, it felt like a game. And not all current games have
that feeling.
Update 1/25/2013- an update patch converted the lost game into a win. Yey!
15. December 27th, 2012 at 12:47 | #15
I’m currently working on my Masters in Methodology and Statistics, and found this article to be an amazingly well-crafted treatment of statistics as a whole — treating power, expected value,
interpretation of the null hypothesis, etc., in an easy-to-follow manner. Providing the data and the R code to analyze it — amazing.
I love it when my interests overlap! Thanks for this article.
16. December 27th, 2012 at 13:32 | #16
Wow thank you. Things like that were my goal, but you never know if you really get close in such things. | {"url":"http://www.win-vector.com/blog/2012/12/how-to-test-xcom-dice-rolls-for-fairness/","timestamp":"2014-04-16T13:08:36Z","content_type":null,"content_length":"108463","record_id":"<urn:uuid:863d8fca-91fc-4b4d-b9ac-bc80c6df8ad9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proof and linear combinations (self.cheatatmathhomework)
submitted ago by
sorry, this has been archived and can no longer be voted on
a) Write 2 in two different ways as a linear combination of 12 and 22.
b) Write -4 in two different ways as a linear combination of 12 and 22
c) What is the set of all linear combinations of 12 and 22?
Could I get step-by-step help for all three parts? I'm having trouble with the concept of linear combination here...doing some practice and stumped with these more than the proofs so far. Thanks.
2) Let m and n be integers. Then prove that m and n have different parity iff m² - n² is odd
What does different parity mean?
all 1 comments
[–]riemannzetajones0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
Let me answer (c) first since I think it sheds light on (a) and (b).
Ask yourself whether you could ever add and subtract multiples of 12 and 22 to get an odd number. As another example, could you ever add or subtract copies of 15 and 35 to get something that's not a
multiple of 5? It should be pretty clear that the answer in both cases is no. We can generalize this: at no point can we get anything that's not a multiple of the gcd of our two numbers. Think of the
gcd of the two as the biggest "block" that fits evenly into both of them. If we're adding and subtracting multiples of the two numbers (that's what linear combinations are), we're still always
dealing in a whole number of those "blocks", so at no point can we get something that's not a multiple of such a block.
The euclidean algorithm shows a way to always get a linear combination that gives you exactly the gcd. For instance, with 15 and 35, we can always find two other numbers a and b such that
15a + 35b = gcd(15,35) = 5
This shows that we can get any multiple of the gcd, since multiplying both sides of the equation still leaves us with a linear combination.
There's two ways you could use this to find (a) and (b). There's a surefire way using the Euclidean algorithm, which I can show you if you like, or perhaps a simpler way would be just to keep testing
successive multiples of 12, i.e.:
is 12 two more than some multiple of 22? No.
is 24 two more than some multiple of 22? No.
is 36 two more than some multiple of 22? No.
etc. Once you find an answer that's yes, simply subtract the right multiple of 22 to get 2. Same for -4.
For (2), parity simply means even/odd. | {"url":"http://www.reddit.com/r/cheatatmathhomework/comments/18cpze/proof_and_linear_combinations/","timestamp":"2014-04-19T15:11:10Z","content_type":null,"content_length":"50171","record_id":"<urn:uuid:8a58fe93-fbb4-4935-a9ef-a7434bd42c31>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
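The "surefire way using the Euclidean algorithm" mentioned above can be sketched in a few lines of Python; the function name and the particular shifts chosen for the second representation are illustrative:

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

g, s, t = ext_gcd(12, 22)          # g == 2, the gcd of 12 and 22
print(s * 12 + t * 22)             # → 2
# Any other representation differs by multiples of 22/g and 12/g:
print((s + 22 // g) * 12 + (t - 12 // g) * 22)   # → 2
# Scaling the first one by -2 answers part (b):
print((-2 * s) * 12 + (-2 * t) * 22)             # → -4
```

This also settles (c): the linear combinations of 12 and 22 are exactly the multiples of gcd(12, 22) = 2, i.e. the even integers.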
fibonnaci - inequality
If F(0) = 0 , F(1)= 1.... Solve for n F(n-1)< googol <F(n)
Use Binet's formula for the $n$-th Fibonacci number: $F(n)=\frac{\varphi^n-(-1/\varphi)^n}{\sqrt{5}}$, where $\varphi=(1+\sqrt{5})/2 \approx 1.618...$ is the Golden number. As $n$ will be very large, $-(-1/\varphi)^n$ will be negligible, so you need to find $n$ such that: $\frac{\varphi^{n-1}}{\sqrt{5}}<\text{googol }<\frac{\varphi^{n}}{\sqrt{5}}$ CB
but googol being 10^100 the n might be generalised solution
Can you explain the steps to me? I made a calculation error, I suppose.
I agree with austinvishal on this one. Taking logs to base 10 (as we old-timers used to do all the time before they came up with these newfangled electronic thingies), we want to find $\frac{\log(\sqrt5\times10^{100})}{\log\varphi} \approx \frac{100.349485}{0.2089876}\approx480.17$. So we should take n = 481. | {"url":"http://mathhelpforum.com/discrete-math/81771-fibonnaci-inequality-print.html","timestamp":"2014-04-18T14:20:42Z","content_type":null,"content_length":"9771","record_id":"<urn:uuid:f73641cc-433e-42d7-a627-10e348bbd395>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
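The log-based estimate can be double-checked with exact integer arithmetic. A short Python sketch (names illustrative) using the fast-doubling identities confirms F(480) < googol < F(481), hence n = 481:

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) exactly, via the fast-doubling identities
    F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)**2 + F(k+1)**2."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)
    c = a * (2 * b - a)
    d = a * a + b * b
    return (c, d) if n % 2 == 0 else (d, c + d)

googol = 10 ** 100
f480, f481 = fib_pair(480)
print(f480 < googol < f481)   # → True, so n = 481
```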
La Mesa, CA Statistics Tutor
Find a La Mesa, CA Statistics Tutor
...This results in a more practical understanding of the area. During this tenure, I mentored “students” from many walks of life and disciplines allowing them to become facile users of statistics
in their education or profession. While I used many statistical applications, SPSS is my primary data analysis tool.
8 Subjects: including statistics, SPSS, psychology, Microsoft Excel
...I also love tutoring computer science and physics. I create a different lesson plan for each student based on their needs and what they struggle with, and then set them up for success. I
understand that sometimes life gets hectic, so I only require 6 hours for cancellation.
37 Subjects: including statistics, calculus, geometry, algebra 1
...I LOVE to teach Science and MATH, and in the end to prove to my students that learning and succeeding in Math and Science, even in the most rigorous classes, can be great fun and much easier than they
initially imagine. I hold a MS degree in Physics and a PhD degree in Physics and Applied Math from the ...
32 Subjects: including statistics, calculus, physics, geometry
...I recently retook algebra in October of this year so I have refreshed my knowledge in this subject. I received an A in the original class and in the refresh class. I recently retook algebra in
October of this year so I have refreshed my knowledge in this subject.
13 Subjects: including statistics, chemistry, calculus, geometry
...I am a math PhD student, and basic algebra is integral to the work I do every day. I have acted as a teaching assistant for several classes in undergraduate calculus. I have taken a wide
variety of undergraduate physics courses and have tutored physics at the undergraduate level previously.
21 Subjects: including statistics, physics, calculus, geometry
Related La Mesa, CA Tutors
La Mesa, CA Accounting Tutors
La Mesa, CA ACT Tutors
La Mesa, CA Algebra Tutors
La Mesa, CA Algebra 2 Tutors
La Mesa, CA Calculus Tutors
La Mesa, CA Geometry Tutors
La Mesa, CA Math Tutors
La Mesa, CA Prealgebra Tutors
La Mesa, CA Precalculus Tutors
La Mesa, CA SAT Tutors
La Mesa, CA SAT Math Tutors
La Mesa, CA Science Tutors
La Mesa, CA Statistics Tutors
La Mesa, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/La_Mesa_CA_statistics_tutors.php","timestamp":"2014-04-18T21:37:15Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:3caec172-ef14-4413-99ac-d845544193dc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
More Complex Integration Please Help
October 4th 2009, 07:39 PM
The Power
More Complex Integration Please Help
I'm working through problems in my book and I have one problem: it seems like for the more complex functions they just memorize the table, which to me is pointless if you do not know how to get
to the shortcut without using some integration technique. So I have a few problems for which I would like a "hint" on what the first thing to do would be, like integration by parts or substitution, or
the best way to expand or break down the integral, so I can figure it out on my own.
I saw one fellow expand the denominator out and have a square root come out; I have no clue where that one came from.
No idea what to do here; I am not content with memorizing the shortcut or whatever it may be called in that table.
Those are all for now
October 4th 2009, 09:22 PM
I'm working through problems in my book and I have one problem: it seems like for the more complex functions they just memorize the table, which to me is pointless if you do not know how to get
to the shortcut without using some integration technique. So I have a few problems for which I would like a "hint" on what the first thing to do would be, like integration by parts or substitution, or
the best way to expand or break down the integral, so I can figure it out on my own.
I saw one fellow expand the denominator out and have a square root come out; I have no clue where that one came from.
No idea what to do here; I am not content with memorizing the shortcut or whatever it may be called in that table.
Those are all for now
For 1
$\int \frac{dy}{3 + y^2}.$
$\frac{1}{3 + y^2} = \frac{1}{3} \cdot \frac{1}{1 + \frac{y^2}{3}} = \frac{1}{3} \cdot \frac{1}{1 + \left( \frac{y}{\sqrt 3} \right)^2}.$
So $\int \frac{dy}{3 + y^2} = \frac{1}{3}\int \frac{dy}{1 + \left( \frac{y}{\sqrt 3} \right)^2} = \left\{ \begin{gathered} \frac{y}{\sqrt 3} = u, \hfill \\ dy = \sqrt 3 \, du \hfill \\ \end{gathered} \right\} = \frac{\sqrt 3}{3}\int \frac{du}{1 + u^2} =$
$= \frac{\sqrt 3}{3}\arctan u + C = \frac{\sqrt 3}{3}\arctan \frac{y}{\sqrt 3} + C.$
October 4th 2009, 09:45 PM
The Power
I've seen you try to explain this in an earlier thread, but I do not see where the sqrt comes from.
My problem is that the reference for every problem is done by looking at the table of integrals; my annoyance is how they get to these integrals in the table. Or, for test time's sake, is it better just
to accept that this is how the table of integrals is, and make reference and plug in the corresponding values?
I guess I just want to learn the steps leading to the final integrals in the table for the "non elementary functions". | {"url":"http://mathhelpforum.com/calculus/106144-more-complex-integration-please-help-print.html","timestamp":"2014-04-18T19:58:43Z","content_type":null,"content_length":"8931","record_id":"<urn:uuid:93eccadc-fb1b-49e8-ac46-bbaaa9f894aa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
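One way to convince yourself of such a table entry without memorizing it is to differentiate the candidate antiderivative numerically and compare against the integrand. A rough Python check (purely illustrative):

```python
import math

def antiderivative(y):
    """Candidate antiderivative of 1/(3 + y^2) from the u = y/sqrt(3) substitution."""
    return math.atan(y / math.sqrt(3)) / math.sqrt(3)

def integrand(y):
    return 1.0 / (3.0 + y * y)

# F'(y) should match f(y); compare via central differences at a few points.
h = 1e-6
for y in (-2.0, 0.5, 3.0):
    deriv = (antiderivative(y + h) - antiderivative(y - h)) / (2 * h)
    print(abs(deriv - integrand(y)) < 1e-8)   # → True at each point
```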
Fredholm kernel
From Encyclopedia of Mathematics
A Fredholm kernel is a function completely-continuous operator
A Fredholm kernel that satisfies this condition is also called an
A Fredholm kernel is called degenerate if it can be represented as the sum of a product of functions of
The Fredholm kernels
[1] V.I. Smirnov, "A course of higher mathematics" , 4 , Addison-Wesley (1964) pp. Chapt. 1 (Translated from Russian)
A completely-continuous operator is nowadays usually called a compact operator.
In the main article above, no distinction is made between real-valued and complex-valued kernels. Usually, symmetry is defined for real-valued kernels, as is skew-symmetry:
About the terminology allied (transposed) and adjoint see also (the editorial comments to) Fredholm theorems.
A Fredholm kernel is a bivalent tensor (cf. Tensor on a vector space) giving rise to a Fredholm operator. Let Locally convex space), and let tensor product
where Adjoint space)
The concept of a Fredholm kernel can also be generalized to the case of the tensor product of several locally convex spaces. Fredholm kernels and Fredholm operators constitute a natural domain of
application of the Fredholm theory.
[1] A. Grothendieck, "La théorie de Fredholm" Bull. Amer. Math. Soc. , 84 (1956) pp. 319–384
[2] A. Grothendieck, "Produits tensoriels topologiques et espaces nucleaires" Mem. Amer. Math. Soc. , 5 (1955)
G.L. Litvinov
A set
How to Cite This Entry:
Fredholm kernel. B.V. Khvedelidze (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Fredholm_kernel&oldid=12278
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Fredholm_kernel","timestamp":"2014-04-18T15:40:34Z","content_type":null,"content_length":"24968","record_id":"<urn:uuid:45d529ea-87bb-4a56-95ec-1984afbf617b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
Greatest common divisor
September 16th 2013, 05:30 AM
Greatest common divisor
I have a math problem:
we know $\gcd(x,y,z)=1$, $x \neq y \neq z$ and $x,y,z>1$.
may have value?
September 16th 2013, 07:29 AM
Re: Greatest common divisor
EDIT: whoops. sorry. gcd(x,y,z)=1 is given
Oh well, looks like if you multiply everything out and use gcd[a(x,y,z)] you might get there.
September 16th 2013, 08:24 AM
Re: Greatest common divisor
Did algebra and got as far as:
gcd(xz^2+yx^2+zy^2, xy^2+yz^2+zx^2, x+y+z)
September 16th 2013, 11:40 AM
Re: Greatest common divisor
Thanks for the reply, but I question the current answer; I still do not know how to prove it. For numbers of the form 3n+1, 3k+1, 3m+1 we have gcd = 3.
September 18th 2013, 05:16 AM
Re: Greatest common divisor
This is a problem in divisibility of polynomials. In the form of post #3, neither of the first two terms is divisible by x+y+z, gcd =1. Perlis has a nice chapter on this.
From Google: gcd(a,b,c) = gcd(gcd(a,b),c) | {"url":"http://mathhelpforum.com/algebra/222021-greatest-common-divisor-print.html","timestamp":"2014-04-17T20:41:36Z","content_type":null,"content_length":"6034","record_id":"<urn:uuid:55b62829-0608-49eb-a5d4-cf36efc5cb1f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00021-ip-10-147-4-33.ec2.internal.warc.gz"} |
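The identity gcd(a,b,c) = gcd(gcd(a,b),c) makes it easy to experiment with the expressions from post #3 numerically. A quick Python sketch (the helper name is made up):

```python
from math import gcd

def combo_gcd(x, y, z):
    """gcd of the three expressions from post #3, computed via
    gcd(a, b, c) = gcd(gcd(a, b), c)."""
    a = x * z**2 + y * x**2 + z * y**2
    b = x * y**2 + y * z**2 + z * x**2
    c = x + y + z
    return gcd(gcd(a, b), c)

print(combo_gcd(2, 3, 5))    # → 1
print(combo_gcd(4, 7, 13))   # → 3  (all three ≡ 1 mod 3, as claimed above)
```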
Procedure for Finding Prime Implicants
1) Find prime implicants by finding all permitted (integer power of 2) maximum sized groups of min-terms.
2) Find essential prime implicants by identifying those prime implicants that contain at least one min-term not found in any other prime implicant.
Answer: Z’
1) Every variable that is a 0 in every square of the group appears complemented in the final product term.
2) Every variable that is a 1 in every square of the group appears as is in the final product term.
3) A variable that is a 0 in half of the squares and a 1 in the other half does not appear at all in the final term.
Q = f(X, Y, Z) = Σ(1, 2, 3, 6, 7)
- Two prime implicants: {1, 3} and {2, 3, 6, 7}
- Both are essential: Q = Y + X’Z | {"url":"http://www.ee.sunysb.edu/~adoboli/ESE318/CL8.htm","timestamp":"2014-04-21T04:33:06Z","content_type":null,"content_length":"12058","record_id":"<urn:uuid:99777ed2-65c4-4142-a764-c7c71ab30c5d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
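The claimed cover can be checked exhaustively against the truth table. A small Python sketch (illustrative only):

```python
minterms = {1, 2, 3, 6, 7}

def q(x, y, z):
    """Q = Y + X'Z, the sum of the two essential prime implicants."""
    return y | ((1 - x) & z)

# Each minterm m encodes (X, Y, Z) as the bits of m, X most significant.
ok = all((q((m >> 2) & 1, (m >> 1) & 1, m & 1) == 1) == (m in minterms)
         for m in range(8))
print(ok)   # → True
```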
B.Sc. (University of British Columbia, 1967)
Ph.D. (University of British Columbia, 1971)
Postdoctoral Fellow (U. of Leiden, Holland, 1972-74)
Office: 224
Phone: (514)398-6930
Email: Bryan.Sanctuary@McGill.CA
Web Page: http://sanctuary-group.mcgill.ca/
Research Themes:
Research Description:
Undergraduate teaching:
Text eBook: Physical Chemistry by Laidler, Meiser and Sanctuary Publish 2011.
Introductory chemistry for high school
General chemistry for AP programs and college
Introductory physics, non-calculus for high school
General physics, calculus for AP programs and college
Organic chemistry at the college level
All the above have hundreds of interactions, between 6 to 8 hours of short voice comments to help explain the material, and each comprise between 100 and 200 hours of individual student study.
Yahoo Answers
See my personal web page for movies of spin and a description of my interests into physical chemistry and my blog on the foundation of quantum mechanics which also includes entries about physical
Web Page: Sanctuary Group
Blog: Foundations of Quantum Mechanics and Physical Chemistry
Multimedia: Science tutorials
Foundations of quantum mechanics and physical chemistry.
Quantum mechanics, likely one of the greatest achievements of modern physics, has been plagued by errors since its inception. These errors have to do primarily with the interpretation of quantum
mechanics and not with its successful application to many practical problems. However the interpretation of quantum mechanics is important for two main reasons: first a theory without interpretation
is only logic, or mathematics. All physical theories must be interpreted. Second, incorrect interpretations lead to incorrect conclusions and paradoxes. There are no paradoxes in science, only wrong
interpretations. The major errors in modern physics are the conclusion that it is impossible to describe all attributes of a system simultaneously, and that non-locality is a property of nature.
Non-locality means that over space like separations, two systems beyond the range of interaction between them have a “connectivity” due to “quantum channels” that acts instantaneously. This is
incorrect as well as irrational.
So entrenched is non-locality in physics that journals like Physical Review Letters will reject, without external peer review, any paper that questions the veracity of non-locality.
The other major error is the current view, or lack of one, of a simple spin ½. What does a pair of spins look like when paired due to the Pauli principle? Movies on my group page give a visualization.
We have found that nature obeys Einstein Locality. We also have discovered that the intrinsic spin of ½ is really a two dimensional system, not a one dimensional point particle.
First, let us state that Einstein, Podolsky and Rosen were right in 1935 when they assumed locality and showed that position and momentum, two non-commuting operators, are simultaneously elements of
physical reality. Usually, because of violation of Bell's Inequalities and Bell's Theorem, people believe that the locality assumption is incorrect and so repudiate EPR. However, Bell made an error
in his spin assumption, not the locality assumption.
The major errors in interpreting quantum mechanics:
1935: Bohr, in replying to EPR, gives a rambling and virtually incomprehensible account of complementarity. This states that quantum mechanics is the most fundamental theory and, since it cannot
describe position and momentum simultaneously, then, in contrast to EPR, Bohr concludes that not all observables are simultaneously elements of physical reality. He was never clear on this concept.
For example in 1962, someone came to his office and described his interpretation of complementarity to Bohr in the hope to test his ideas, to which Bohr responded “You still have it wrong.” but did
not say how (see Max Jammer's book, The Philosophical Foundations of Quantum Mechanics, 1973). Bohr was wrong but still today, people believe in complementarity and the wave particle duality.
1935 to present: Greek philosophers did not know about quantum mechanics. They agreed with Einstein that for a system, all its attributes are simultaneously elements of physical reality. Two areas of
philosophy are relevant to quantum theory: epistemology (how we can know) and ontology (what we can know). Because of the notion of complementarity, philosophers have been trying to reconcile how
nature can describe one thing and not the other, and vice versa. They have also been trying to reconcile non-locality. Since EPR are correct on both counts, the last 60 years of philosophical efforts
in these areas are now moot.
About this time, Born suggested that the wave function describes all we can know about a single system, say a particle. This is incorrect. The wave function describes a statistical ensemble of
similarly prepared states.
Another idea that evolved in the epistemological debate, wave function collapse, is completely incorrect. It states that a wave function describes a particle and since the wave function can exist
over a large region, then so does the particle. When we measure the particle, the wave function collapses into the state that happens to describe the pure state of the particle with a probability
obtained from quantum mechanics. These notions are incorrect. First a particle is a well defined system that is localized in one place. At any instant of time, all its elements of reality exist
simultaneously. If you believe in the tracks in a bubble chamber come from single particles, or in the ability to assemble nano-particles into nano-structures, then it should be clear a particle is
not delocalized over all space. Consider the famous Schrödinger's Cat paradox which Schrödinger introduced to show the absurdity of superposition at the macroscopic level. At the microscopic level,
the superposition is an expression of our ignorance. We do not know what state a system is in, so we assume it is in all of them with some probability distribution.
1936: John von Neumann incorrectly proved there can be no dispersion free states. If he had been correct it would mean there could be no deeper theories (like hidden variables) underpinning quantum
mechanics. That is Einstein's assertion in the EPR paper, that quantum mechanics is incomplete, could not be correct. Von Neumann, brilliant mathematician and father of the modern computer, was well
respected. His influence was so great that people believed his proof. Forty years later John Bell pointed out that von Neumann's proof was mathematically correct but he made an incorrect assumption:
that expectation values of observables are linear.
1964: In 1964 he derived his famous inequalities. The math is so straightforward that no errors are likely ever to be found, although people still try. Bell's error, ironically, was also in his
assumptions, just like he found for von Neumann's work. Bell made more assumptions than Einstein Locality, which he considered vital. The second is that a spin has two values, +1/2 or -1/2 (to make it
simpler, let's just say ±1). I have found that a spin has a 2D structure and this leads to a new spin angular momentum that cannot be predicted from QM. It is a result of the indistinguishability of a
spin's two axes of quantization. The new resonance or exchange spin is hermitian but has a magnitude which is √2 larger than the usual spin ½ from quantum mechanics. This extra correlation is the sole
cause of the violation of Bell's Inequalities. Therefore locality is restored to quantum mechanics.
The erroneous conclusion that nature is non-local has spawned some careful experiments intended to show quantum mechanics violates Bell's Inequalities. There is nothing wrong with these experiments
except that they are incorrectly interpreted as proving non-locality. Furthermore, no-one has explained how non-locality works in EPR experiments (usually called “quantum weirdness”). The error
here is not in the experiments but in the assumption that violation of Bell's Inequalities means nature is non-local. When people start to use the words: weird, spook, magic and trickery to describe
physical processes, then you get the idea that some things do not add up.
1993: Teleportation. Sorry, Sci-Fi buffs, it cannot happen. It is believed that “quantum channels” exist over long distances, as in Gisin's 10 km experiments. They do not; quantum channels do
not exist beyond a few picometers.
1936 to present: entanglement. Schrödinger introduced the notion of entanglement in 1936. It is generally believed that entanglement is responsible for non-locality—wrong. There is no doubt that many
quantum states are entangled, but this entanglement cannot exist after particles have separated and are beyond the range of each other's interactions. In fact when an entangled pair of spins
separate, only a biparticle state remains. A biparticle state obeys Einstein locality and is a well defined mathematical system. It is composed, however, of non-hermitian coherent microstate
In my research, I have shown that a spin exists in a microstate that is beyond the range of direct measurement. It is manifest as a resonance state of four hermitian spins, each of which is
dispersion free but has the same eigenvalues and eigenvectors that differ only by a sign. In other words, for an isolated particle with spin, these four degenerate orientations cannot be
distinguished. Quantum theory cannot resolve this degeneracy in physical reality, which directly leads to the Heisenberg Uncertainty Relations.
In experiments that directly measure, the system must first be prepared for measurement. To measure particles with spin, it is usual to use a magnetic field (but watch out for the Lorentz force if
you want to use electrons). Photons can be prepared by passing beams through devices like quarter wave plates. These disrupt the spins so that two of its three axes are randomized and leads to the
usual view of a spin being in a state of either +1/2 or -1/2 and consistent with being a point particle. In contrast, EPR experiments, using coincidence counting techniques, do not directly measure a
spin, but a pair of spins. Therefore these experiments can be sensitive to the spin's microstate and it is found to have two orthogonal axes of spin quantization. One end is +1/2 and the other is -1/
2, or vice versa. In addition to being a two dimensional quantity, a spin also has a quantum phase. This orients the two dimensional spin in three dimensional real space. The quantum phase is also
needed to produce the √2. The quantum phase is defined by the spin commutation relations:
A surprising point about this approach is that the quantum phase leads to the state of a single spin being non-hermitian. Even so, due to indistinguishability, the states formed are always hermitian.
Non-hermitian states have non-orthogonal eigenstates, and so a single particle with spin can interfere with itself.
Since a single electron can interfere with itself it resolves the double slit experiment when single electrons, fired one at a time, build up an interference pattern. It also resolves the detection
loophole in EPR experiments.
Another observation is that there are four degenerate √2 in the body fixed frame of a single spin. These produce a resonance situation between these four dispersion free states producing a state of
zero net angular momentum. Could this be the elusive magnetic monopole?
The view here is that the wave function describes a statistical ensemble of microstates prepared for measurement. This view follows the way experiments are performed: that is a large number of events
are collected and averaged. Many issues are resolved and new ideas emerge:
• Restores locality; repudiates non-locality.
• Shows that a spin is a two dimensional system; not a one dimensional point particle.
• Discovers a new spin angular momentum of magnitude √2 larger than the usual spin. This cannot be predicted from quantum mechanics. It requires extending quantum theory to include non-hermitian
spin states.
• Introduces the concept of the biparticle (a separated formally entangled spin pair); entanglement cannot persist after entangled particles separate.
• Shows a spin has a superposed state of zero angular momentum—that is it is a resonance state that has the properties expected for a magnetic monopole.
• Resolves the double slit experiments.
• Shows that at the microstate level, the states are coherent and so the state operators are non-hermitian.
• Completes quantum mechanics so all attributes are simultaneously dispersion free.
• The violation of Bell's Inequalities is due to the √2 spin, not non-locality. These experiments distinguish the one and two dimensional spin. Violation of Bell's Inequalities does not distinguish
local from non-local events.
• Supports the statistical interpretation of quantum mechanics.
• Resolves the detection loophole.
• Gives an explanation for why maximum violation of Bell's Inequalities occurs when the three filter settings are 60 degrees apart (or 45 degrees in the CHSH form).
• Gives an explanation for asymmetry in the EPR data (which cannot be explained if it is assumed that isotropic singlet state exists after the two spins separate.)
• Shows that quantum channels do not exist and teleportation cannot happen in the usual way suggested; non-locality and “EPR channels” are not responsible. EPR or quantum channels do not exist over
large distances.
• Introduces the concept of quantum phase, which orients a particle in its microstate.
• Introduces the concept of quantum correlation length, QCL, which is a measure of the degree of randomization that has occurred. A QCL of √3 is the longest, for a spin ½ in its microstate; a QCL
of √2 corresponds to a phase-randomized spin; and a QCL of unity corresponds to the usual one dimensional spin encountered in experiments that prepare spins for direct measurement.
• Suggests that changing the description of a spin from 1D to 2D will have repercussions to the structure of nuclear matter.
In order to include coherent microstates into quantum mechanics, the hermiticity postulate must be changed to allow for non-hermitian state operators. This change completes quantum mechanics in the
sense that EPR might have envisioned. Recall the famous statement by EPR: “If, without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value
of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.”
Currently Teaching:
CHEM-203 Survey of Physical Chemistry
CHEM-233 Topics in Physical Chemistry | {"url":"http://www.chemistry.mcgill.ca/directory/people.php?p=32&n=Sanctuary","timestamp":"2014-04-17T19:05:03Z","content_type":null,"content_length":"26634","record_id":"<urn:uuid:a82d0260-18d2-4397-ad2e-381c955fa864>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
Payer, Tilman (2007): Modelling extreme wind speeds. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics
Very strong wind gusts can cause derailment of some high speed trains so knowledge of the wind process at extreme levels is required. Since the sensitivity of the train to strong wind occurrences
varies with the relative direction of a gust this aspect has to be accounted for. We first focus on the wind process at one weather station. An extreme value model accounting at the same time for
very strong wind speeds and wind directions is considered and applied to both raw data and component data, where the latter represent the force of the wind in a chosen direction. Extreme quantiles
and exceedance probabilities are estimated and we give corresponding confidence intervals. A common problem with wind data, called the masking problem, is that per time interval only the largest wind
speed over all directions is recorded, while occurrences in all other directions remain unrecorded for this time interval. To improve model estimates we suggest a model accounting for the masking
problem. A simulation study is carried out to analyse the behaviour of this model under different conditions; the performance is judged by comparing the new model with a traditional model using the
mean square error of high quantiles. Thereafter the model is applied to wind data. The model turns out to have desirable properties in the simulation study as well as in the data application. We
further consider a multivariate extreme value model recently introduced; it allows for a broad range of dependence structures and is thus ideally suited for many applications. As the dependence
structure of this model is characterised by several components, quantifying the degree of dependence is not straightforward. We therefore consider visual summary measures to support judging the
degree of dependence and study their behaviour and usefulness via a simulation study. Subsequently, the new multivariate extreme value model is applied to wind data of two gauging stations where
directional aspects are accounted for. Therefore this model allows for statements about the joint wind behaviour at the two stations. This knowledge gives insight into whether storm events are likely to be jointly present along larger parts of a railway track or rather occur in a localized way.
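The block-maxima side of this kind of analysis can be sketched in a few lines: fit a generalized extreme value (GEV) distribution to annual wind-speed maxima and read off a high quantile (return level). The data below are synthetic (a hypothetical station's annual maximum gusts) and scipy's GEV shape-parameter convention is assumed; the dissertation's directional and masking models are considerably more elaborate than this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical annual maximum gust speeds in m/s; real work would use station records
annual_maxima = stats.genextreme.rvs(-0.1, loc=25, scale=4, size=60, random_state=rng)

# Fit a generalized extreme value distribution to the block maxima
c, loc, scale = stats.genextreme.fit(annual_maxima)

# 50-year return level: the speed exceeded with probability 1/50 in a given year
q50 = stats.genextreme.ppf(1 - 1 / 50, c, loc=loc, scale=scale)
print(f"fitted shape = {c:.2f}, 50-year return level ≈ {q50:.1f} m/s")
```

Confidence intervals for such return levels, as estimated in the thesis, would typically come from the observed information matrix or the profile likelihood rather than from this point estimate alone.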
Item Type: Thesis (Dissertation, LMU Munich)
Keywords: extreme value statistics; extreme wind speeds; directional extremes;
Subjects: 600 Natural sciences and mathematics > 510 Mathematics
600 Natural sciences and mathematics
Faculties: Faculty of Mathematics, Computer Science and Statistics
Language: English
Date Accepted: 8. March 2007
1. Referee: Küchenhoff, Helmut
Persistent Identifier (URN): urn:nbn:de:bvb:19-67547
MD5 Checksum of the PDF-file: 13a37f3b3d98ae43541dd70b922cb45d
Signature of the printed copy: 0001/UMC 16108
ID Code: 6754
Deposited On: 05. Apr 2007
Last Modified: 16. Oct 2012 08:05 | {"url":"http://edoc.ub.uni-muenchen.de/6754/","timestamp":"2014-04-20T23:31:02Z","content_type":null,"content_length":"25999","record_id":"<urn:uuid:1abbde38-4617-4ab6-89c4-e64630e71402>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shoreline, WA Precalculus Tutor
Find a Shoreline, WA Precalculus Tutor
...In addition, I successfully helped other students in class to get a better understanding of the subject. I discovered that most students that I helped had trouble in economics because they
have had trouble in math previously. If you feel this is your problem too, I am confident I can help you.
20 Subjects: including precalculus, reading, calculus, geometry
...As an aspiring physician I spent great amounts of time thoroughly studying Biology during my undergraduate career at the University of Washington and again when I studied for the MCAT (Medical
School Admission Exams). I have over five years of experience tutoring the subject both at the Universit...
27 Subjects: including precalculus, chemistry, biology, reading
...Being ahead of others in my grade at math shows my love for it and my ability to understand it. I won't just give my students the answers, but instead will push them to try and solve the problems on their
own after I have shown them how to solve other examples. Of course I will be there to support them the entire time.
15 Subjects: including precalculus, reading, Spanish, geometry
...I instill rigor and emphasize practice so moving ahead to new concepts is a breeze. I keep myself updated with teaching techniques by taking courses in math and computer science on Coursera
and edX platforms. Math is a language, with its own phrases and terminology. When these are in one's tool belt, one can draw out the right tool for the right situation confidently.
16 Subjects: including precalculus, geometry, algebra 1, algebra 2
...So- I split the class in two. I created an obstacle course. At the end of it there was a multiplication problem.
17 Subjects: including precalculus, calculus, statistics, geometry
| {"url":"http://www.purplemath.com/shoreline_wa_precalculus_tutors.php","timestamp":"2014-04-16T10:40:19Z","content_type":null,"content_length":"24169","record_id":"<urn:uuid:d5694906-f362-466b-ab03-cb4879daeb50>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
15 January 2014, theimagi @ 6:11 am
One of the most fundamental questions in physics and cosmology is why the physical constants are what they are.
For example, the fine structure constant is one of roughly 22 empirical parameters in the Standard Model of particle physics whose values are not determined within it.
In other words their values are not determined by theory but by experimentation.
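As a concrete illustration, the fine structure constant α is built out of measured constants rather than predicted by the theory. A quick check using standard CODATA values (the 2019 SI redefinition makes e, ħ and c exact, leaving ε₀, and hence α, as a measured quantity):

```python
import math

# CODATA values in SI units
e    = 1.602176634e-19     # elementary charge, C (exact)
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c    = 299792458.0         # speed of light, m/s (exact)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m (measured)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)    # ≈ 0.0072973, 1/alpha ≈ 137.036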
An even more puzzling question is why a certain number of them lie within a very narrow range, so that if any were only slightly different, the Universe would be unable to develop matter,
astronomical structures, elemental diversity, or life as we presently understand it.
However there are several theoretical models that attempt to explain why we live in a universe that is so fine tuned for life.
For example, the Multiverse class of theories assumes the values of the fundamental constants vary randomly throughout many different universes, and that we happen to live in one that has the values that will support life.
In other words, they all assume the existence of many universes, each with randomly chosen physical constants, some of which are hospitable to intelligent life; and because we are intelligent beings, we are by definition in a hospitable one.
However all of them suffer from the same problem in that they are not verifiable or falsifiable, because by definition universes are closed systems and cannot interact with each other. Since they cannot interact with ours, there is no way to verify or falsify their existence.
This is why Critics of the Multiverse-related explanations argue that they are unscientific because there is no way to experimentally verify or falsify their existence.
Yet the reason why we live in a universe in which the values of fundamental constants are fine tuned to allow life to develop may not be due to a random property of their origins; it may be because they are preordained to have those values by a dynamic resonant property of energy/mass defined by Einstein’s General Theory of Relativity and his equation E=mc^2.
In other words the fundamental constants are what they are because they correspond to the most stable configuration of energy/mass possible.
For example, a guitar string has a frequency at which it will naturally resonate due, in part, to the tension it is experiencing, and will, if allowed to, drift towards and stabilize at that optimal frequency.
Similarly, the values of the fundamental constants associated with the resonant structure of energy/mass defined by Einstein would have a tendency to drift towards and stabilize at their optimal values.
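The string analogy can be made quantitative with the standard formula f₁ = (1/2L)·√(T/μ) for an ideal stretched string; the tension, length, and mass density below are illustrative guitar-like numbers, not measured values.

```python
import math

def fundamental_frequency(tension_N, length_m, mass_per_length_kg_per_m):
    # f1 = (1 / 2L) * sqrt(T / mu) for an ideal stretched string
    return math.sqrt(tension_N / mass_per_length_kg_per_m) / (2 * length_m)

# Illustrative, guitar-like numbers (not measured values)
f = fundamental_frequency(tension_N=80.0, length_m=0.65,
                          mass_per_length_kg_per_m=4e-3)
print(f"{f:.1f} Hz")   # ≈ 108.8 Hz, close to a 110 Hz A string
```

Raising the tension raises the frequency, which is the dependence the analogy leans on.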
However if it is true that the fundamental constants are due to a dynamic resonant property of energy/mass, one should be able to determine their values, including those of the fine structure and cosmological constants, by measuring the components of the resonant system it creates.
The dynamic relationship between mass and energy defined by the equation E=mc^2 tells us that they are oppositely directed, in the sense that if one increases the other must decrease. However this also tells us that whenever they interact, a resonant structure would be formed whose fundamental frequency would be determined in part by the "tension" created by their oppositely directed components, similar to how the frequency of a guitar string also depends on the tension it is under.
This suggests that the magnitude of the fine structure constant may be the result of a resonant structure formed by the "tension" created between the mass and the oppositely directed quantized electrical energy of its components, defined by the equation E=mc^2. Additionally, because of the dynamic properties of energy/mass discussed above, its value will adjust and stabilize around one that defines the optimal resonant structure for those components.
In other words, the value of the fine structure constant may not be a random feature of our universe but is determined by a dynamic relationship between energy/mass and its quantized components.
However if it is true that the values of all of the fundamental constants are due to a resonant property of energy/mass defined by Einstein, then, as with the fine structure constant, one should also be able to determine the value of the cosmological constant in terms of those resonant properties.
The dynamic relationship between mass and energy describe above tells us that the universe’s expansion would form a resonant structure whose fundamental frequency would be determined by the relative
strengths of the "tension" associated with the kinetic energy of its expansion and the gravitational contractive forces associated with its mass. Again this would be similar to how the fundamental
frequency at which a guitar string resonates depends upon its tension.
This means the value of the cosmological constant associated with the universe’s expansion may be related to the dynamic resonant properties of energy and mass, and not to some random function as is assumed by most Multiverse theories.
As mentioned earlier Einstein General Theory of Relativity tells us there is a dynamic balance between the universe’s gravitational potential energy and the kinetic energy associated with its
expansion. However, not all of the energy associated with that expansion is directed towards it because of the random motion of its energy/mass components. For example, observations indicate that
some stars and galaxies are moving towards not away from us. Therefore, not all of the kinetic energy present at the time of its origin is directed towards its expansion.
Additionally, the equation E=mc^2, which defines the equivalence between mass and energy, tells us the kinetic energy of the universe’s expansion also possesses gravitational potential.
However the law of conservation of energy/mass tells us that energy/mass cannot be created or destroyed in a closed environment. This also tells us that since, by definition, the universe is a closed system, the kinetic energy of the universe’s energy/mass cannot exceed the gravitational contractive properties of its mass, because Einstein tells us that its kinetic energy is made up of that mass.
Therefore, because some of the kinetic energy of its components is not directed towards its expansion, the total gravitational contractive properties of its energy/mass must exceed the kinetic energy of its expansive components. This means at some point in time the gravitational contractive potential of its energy/mass must exceed the kinetic energy of its expansion, because as just mentioned not all of its kinetic energy is directed towards its expansion. Therefore at that point in time the universe will have to enter a contractive phase.
(Many physicists would disagree, because recent observations suggest that a force called Dark Energy is causing the expansion of the universe to accelerate. Therefore they believe that its expansion will continue forever. However, as was shown in the article "Dark Energy and the evolution of the universe," if one assumes the law of conservation of mass/energy is valid, as we have done here, then the gravitational contractive properties of its mass equivalent will eventually exceed the expansive energy associated with dark energy, and therefore the universe must at some time in the future enter a contractive phase.)
We know from observations that heat is generated when we compress a gas and that this heat creates pressure that opposes further contractions.
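For an ideal gas compressed adiabatically, this heating follows from the relation T·V^(γ−1) = constant. A small worked example, assuming a monatomic ideal gas (γ = 5/3) and illustrative numbers:

```python
# Adiabatic compression of an ideal gas: T * V**(gamma - 1) stays constant,
# so shrinking the volume raises the temperature.
gamma = 5.0 / 3.0                 # monatomic ideal gas (an assumption)
T1 = 300.0                        # initial temperature, K
compression = 10.0                # V1 / V2: volume reduced tenfold

T2 = T1 * compression ** (gamma - 1)
print(f"{T2:.0f} K")              # ≈ 1392 K
```

A tenfold compression already more than quadruples the absolute temperature, which is the sense in which compression "generates heat" that resists further contraction.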
Similarly the contraction of the universe will create heat which will oppose its further contractions.
Therefore the velocity of contraction will increase until the momentum of the galaxies, planets, and other components of the universe equals the radiation pressure generated by the heat of its contraction.
At this point in time the total kinetic energy of the collapsing universe would be equal and oppositely directed with respect to the radiation pressure associated with the heat of its collapse. From
this point on the velocity of the contraction will slow due to the radiation pressure and be maintained by the momentum associated with the remaining mass component of the universe.
However, after a certain point in time the heat and radiation pressure generated by its contraction will become great enough to ionize the remaining mass and cause it to reexpand because the
expansive forces associated with the radiation pressure will exceed the contractive forces associated with its mass.
This will result in the universe entering an expansive phase and going through another age of recombination, when the cosmic background radiation is emitted. The reason it will experience an age of recombination as it passes through each cycle is that the heat of its collapse would be great enough to completely ionize all forms of matter.
However, at some point in time the contraction phase will begin again because as mentioned earlier its kinetic energy cannot exceed the gravitational energy associated with the total mass/energy in
the universe.
Since the universe is a closed system, the amplitude of the expansions and contractions will drift and stabilize at a specific value corresponding to its resonant frequency, similar to how a guitar string drifts and stabilizes at its resonant frequency.
This results in the universe experiencing a never-ending cycle of expansions and contractions whose frequency would be defined by its resonant properties.
Many cosmologists do not accept this cyclical scenario of expansions and contractions because they believe a collapsing universe would end in the formation of a singularity similar to the ones found in a black hole, and therefore it could not re-expand.
However, according to the first law of thermodynamics, the universe would have to begin expanding before it reached a singularity, because that law states that energy in an isolated system can neither be created nor destroyed.
Therefore, because the universe is by definition an isolated system, the energy generated by its gravitational collapse cannot be radiated to another volume but must remain within it. This means the radiation pressure exerted by its collapse must eventually exceed the momentum of its contraction, and the universe would have to enter an expansion phase, because its momentum will carry it beyond the equilibrium point where the radiation pressure is greater than the momentum of its mass.
This would be analogous to how the momentum of a mass on a spring causes it to stretch beyond its equilibrium point, resulting in it oscillating around it.
There can be no other interpretation if one assumes the validity of the first law of thermodynamics, which states that the total energy in a closed system is defined by its mass and the momentum of its components. Therefore, when one decreases the other must increase, and therefore it must oscillate around a point in space and time.
The reason a singularity can form in a black hole is because it is not an isolated system, so the thermal radiation associated with its collapse can be radiated into the surrounding space. Therefore, its collapse can continue, because the momentum of its mass can exceed the radiation pressure caused by its collapse in the volume surrounding a black hole.
If this theoretical model is valid, the heat generated by the collapse of the universe must raise the temperature to a point where its energy/mass would become ionized into its component parts, thereby making the universe opaque to radiation. It would remain that way until it entered the expansion phase and cooled enough to allow them to become deionized. This Age of Recombination, as cosmologists like to call it, is the cause of the Cosmic Background Radiation.
As mentioned earlier the frequency of the expansions and contractions of all resonant systems is defined by their resonant properties.
Similarly, the resonant structure created by the contractive properties of the universe’s gravitational potential and the kinetic energy of its expansion will also have a natural frequency, which would be determined by its resonant properties. Like all resonant structures, any frequencies that do not correspond to that value will be attenuated.
Therefore the value of the cosmological constant, which would define the rate or frequency at which the universe is expanding or contracting, would be determined by the resonant properties of energy/mass defined by Einstein.
In other words the value of its cosmological constant may not be randomly chosen but would be defined by the physical relationship between mass and kinetic energy defined by Einstein.
This means one could experimentally test this scenario by using Einstein’s equations to determine the value of the cosmological constant based on that relationship and seeing if it agrees with its observed value.
In other words, it is not necessary to assume the existence of multiple universes to understand why the fundamental physical constants lie within the very narrow range that allows life to develop, because their values may not be randomly chosen but preordained by a physical property of energy and mass defined by Einstein.
As mentioned earlier many Critics of the Multiverse-related explanations argue that there is no evidence or any way of verifying or falsifying the existence of other universes.
However, we can observe and verify the existence of the resonant properties of energy and mass, and if what we have said above is true, that the values of all of the fundamental constants in physics are related to those resonant properties, then it would be falsified if it were found that the value of even one of them could not be derived using that concept.
Later Jeff
Copyright Jeffrey O’Callaghan 2014
1 December 2013, theimagi @ 6:14 am
Many physicists assume the General Theory of Relativity predicts that all the mass in a black hole is concentrated at its center in a singularity: a point which has zero volume and infinite density.
However the idea that it can be concentrated in a non-dimensional point of infinite density with zero volume is a bit hard to grasp, even for Einstein, whose theory is used to predict their existence.
What makes it even more bizarre is that scientists tell us the laws of physics which they use to predict its existence break down at a singularity.
Why then do many believe that they exist?
The reason is that many believe the mathematics of the General Theory of Relativity tells us that when a star starts to collapse after burning up its nuclear fuel and forms a black hole, the gravitational forces of its mass become large enough to cause the matter to collapse to zero volume.
However, even though there is observational evidence for the existence of black holes, there never will be any for the singularity, because according to the General Theory of Relativity nothing, including light, can escape from one.
For example NASA’s Hubblesite tells us that “Astronomers have found convincing evidence for a black hole in the center of our own Milky Way galaxy, the galaxy NGC 4258, the giant elliptical galaxy
M87, and several others. Scientists verified its existence by studying the speed of the clouds of gas orbiting those regions. In 1994, Hubble Space Telescope data measured the mass of an unseen
object at the center of M87. Based on the motion of the material whirling about the center, the object is estimated to be about 3 billion times the mass of our Sun and appears to be concentrated into
a space smaller than our solar system.”
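The quoted mass estimate can be compared with the Schwarzschild radius r_s = 2GM/c². For 3 billion solar masses this comes out to roughly 60 AU, on the scale of the solar system, which is consistent with the quoted observation. The constants below are standard values.

```python
G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c     = 2.99792458e8     # speed of light, m/s
M_sun = 1.989e30         # solar mass, kg
AU    = 1.496e11         # astronomical unit, m

M  = 3e9 * M_sun                  # the 1994 mass estimate quoted above
rs = 2 * G * M / c**2             # Schwarzschild radius
print(f"r_s ≈ {rs:.2e} m ≈ {rs / AU:.0f} AU")   # ≈ 59 AU, solar-system scale
```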
However, as mentioned earlier, we will never be able to observe a singularity, because they only exist inside black holes. Therefore, to determine their reality we must rely solely on the mathematical predictions of the General Theory of Relativity regarding their formation.
Yet there are some who say that the mathematics used to predict the existence of a black hole also predicts, with equal certainty, the existence of singularities. In other words, by verifying the existence of black holes through observations we have also verified the existence of singularities.
However, this would only be true if the mathematics used to predict both a black hole and a singularity conform to the conceptual arguments associated with Einstein’s General Theory of Relativity, because the mathematics used to confirm the singularity’s existence is based solely on them and not on observations, as is the case for black holes.
In other words the fact that we can observe a black hole tells us the mathematics used to predict its existence has a valid basis in ideas of General Relativity.
However the same cannot be said about the existence of a singularity because the conceptual arguments found in that theory tells us that we cannot extrapolate the mathematics associated with it to
the formation of a black hole.
To understand why we must look at how it describes both the collapse of a star to a black hole and then what happens to its mass after its formation.
Einstein in his General Theory of Relativity predicted that time is dilated, or moves slower, when exposed to a gravitational field than when it is not. Therefore, according to Einstein’s theory, a gravitational field, if strong enough, would stop time.
In 1915, Karl Schwarzschild discovered that according to it the gravitational field of a star greater than approximately 2.0 solar masses would stop the movement of time if it collapsed to a singularity. He also defined the critical circumference, or boundary in space around a singularity, where the strength of the gravitational field will result in time being infinitely dilated or slowing to a stop.
In other words, as a star contracts and its circumference decreases, the time dilation on its surface will increase. At a certain point the contraction of that star will produce a gravitational field strong enough to stop the movement of time. Therefore, the critical circumference defined by Karl Schwarzschild is a boundary in space where time stops relative to the space outside of that boundary.
This critical circumference is called the event horizon because an event that occurs on the inside of it cannot have any effect on the environment outside of it.
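For a star near the ~2 solar-mass figure quoted above, the critical circumference corresponds to a Schwarzschild radius of a few kilometres, and the exterior dilation factor √(1 − r_s/r) falls to zero as a clock approaches it. A sketch, using standard constants and the non-rotating (Schwarzschild) exterior solution:

```python
import math

G, c, M_sun = 6.674e-11, 2.99792458e8, 1.989e30   # standard SI values

M = 2.0 * M_sun                  # roughly the threshold mass quoted above
rs = 2 * G * M / c**2            # Schwarzschild radius, ≈ 5.9 km
print(f"r_s ≈ {rs / 1e3:.1f} km")

# Exterior time-dilation factor for a static clock at radius r, seen from far away;
# it falls to zero as r approaches the critical circumference at r_s
for ratio in (10.0, 2.0, 1.1, 1.01, 1.001):
    factor = math.sqrt(1 - rs / (ratio * rs))
    print(f"r = {ratio:>6} r_s : dt_clock / dt_infinity = {factor:.3f}")
```

The vanishing dilation factor at r = r_s is the quantitative content of the "time stops at the critical circumference" statements in the surrounding text.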
Many physicists, as mentioned earlier, believe the existence of a singularity is an inevitable outcome of Einstein’s General Theory of Relativity. However, it can be shown using the concepts developed by Einstein that this may not be true.
In Kip S. Thorne’s book “Black Holes and Time Warps”, he describes how in the winter of 1938-39 Robert Oppenheimer and Hartland Snyder computed the details of a star’s collapse into a black hole using the concepts of General Relativity. On page 217 he describes what the collapse of a star would look like from the viewpoint of an external observer who remains at a fixed circumference, instead of riding inward with the collapsing star’s matter. They realized the collapse of a star as seen from that reference frame would begin just the way everyone would expect: like a rock dropped from a rooftop, the star’s surface falls downward slowly at first, then more and more rapidly. However, according to the relativistic formulas developed by Oppenheimer and Snyder, as the star nears its critical circumference the shrinkage would slow to a crawl for an external observer, because of the time dilation associated with the relative velocity of the star’s surface. The smaller the circumference of a star gets, the more slowly it appears to collapse, because the time dilation predicted by Einstein increases as the speed of the contraction increases, until it becomes frozen at the critical circumference.
However, the time measured by the observer who is riding on the surface of a collapsing star will not be dilated because he or she is moving at the same velocity as its surface.
Therefore, the proponents of singularities say the contraction of a star can continue until it becomes a singularity, because time has not stopped on its surface, even though it has stopped for an observer who remains at a fixed circumference relative to that star.
But one would have to draw a different conclusion if one viewed time dilation in terms of the gravitational field of a collapsing star.
Einstein showed that time is dilated by a gravitational field. Therefore, the time dilation on the surface of a star will increase relative to an external observer as it collapses because, as mentioned earlier, the gravitational forces at its surface increase as its circumference decreases.
This means, as it nears its critical circumference, its shrinkage slows with respect to an external observer who is outside of the gravitational field, because the field’s increasing strength causes a slowing of time on its surface. The smaller the star gets, the more slowly it appears to collapse, because the gravitational field at its surface increases until time becomes frozen, for the external observer, at the critical circumference.
Therefore, the observations an external observer would make, using the conceptual arguments of Einstein’s theory regarding time dilation caused by the gravitational field of a collapsing star, would be identical to those predicted by Robert Oppenheimer and Hartland Snyder in terms of the velocity of its contraction.
However, Einstein developed his Special Theory of Relativity based on the equivalence of all inertial reference frames, which he defined as frames that move freely under their own inertia, neither “pushed nor pulled by any force and therefore continue to move always onward in the same uniform motion as they began”.
This means that one can view the contraction of a star with respect to the inertial reference frame that, according to Einstein, exists in the exact center of the gravitational field of a collapsing star.
(Einstein would consider this point an inertial reference frame with respect to the gravitational field of a collapsing star because at that point the gravitational field on one side will be offset
by the one on the other side. Therefore, a reference frame that existed at that point would not be pushed or pulled relative to the gravitational field and would move onward with the same motion as
that gravitational field.)
From this viewpoint the surface of a collapsing star would look, according to the field equations developed by Einstein, as if the shrinkage slowed to a crawl as the star neared its critical circumference, because of the increasing strength of the gravitational field at the star’s surface relative to its center. The smaller it gets, the more slowly it appears to collapse, because the gravitational field at its surface increases until time becomes frozen at the critical circumference.
Therefore, because time stops or becomes frozen at the critical circumference for both an observer who is at the center of the collapsing mass and one who is at a fixed distance from its surface, the contraction cannot continue from either of their perspectives.
However, Einstein in his general theory showed that a reference frame that was free falling in a gravitational field could also be considered an inertial reference frame.
As mentioned earlier, many physicists assume that the mass of a star implodes when it reaches the critical circumference. Therefore, the surface of a star, and an observer on that surface, will be in free fall with respect to the gravitational field of that star as it passes through its critical circumference.
This indicates that a point on the surface of an imploding star, according to Einstein’s theories, could also be considered an inertial reference frame, because an observer who is riding on it will not experience the gravitational forces of the collapsing star.
However, according to Einstein’s theory, as a star nears its critical circumference an observer who is on its surface will perceive the differential magnitude of the gravitational field, relative to an observer who is in an external reference frame or, as mentioned earlier, at its center, to be increasing. Therefore, he or she will perceive time in those reference frames that are not on its surface slowing to a crawl as it approaches the critical circumference. The smaller it gets, the more slowly time appears to move with respect to an external reference frame, until it becomes frozen at the critical circumference.
Therefore, time would be infinitely dilated, or stop, in all reference frames that are not on the surface of a collapsing star, from the perspective of someone who was on that surface.
However, the contraction of a star’s surface must be measured with respect to the external reference frames in which it is contracting. But as mentioned earlier, Einstein’s theories indicate time on its surface would become infinitely dilated, or stop, with respect to reference frames that were not on it when it reaches its critical circumference.
This means, as was just shown, that according to Einstein’s concepts time stops on the surface of a collapsing star, from the perspective of all observers, when viewed in terms of the gravitational forces. Therefore it cannot move beyond the critical circumference, because motion cannot occur in an environment where time has stopped.
This contradicts the assumption made by many that the implosion would continue for an observer who was riding on its surface.
Therefore, based on the conceptual principles of Einstein’s theories relating to time dilation caused by a gravitational field of a collapsing star it cannot implode to a singularity as many
physicists believe but must maintain a quantifiable minimum volume which is equal to or greater than the critical circumference defined by Karl Schwarzschild.
Some claim that irregularities in the velocity of contraction in the mass forming the black hole would allow it to continue to collapse beyond its event horizon. However, Einstein's theories tell
us that time would move more slowly for the faster-moving mass components of a forming black hole than for the slower ones, thereby allowing the slower components to catch up with their faster-moving counterparts.
In fact, the conceptual arguments presented in Einstein's theories tell us the entire mass of a forming black hole must reach the event horizon at exactly the same time because of the time dilation
predicted by his theories.
Therefore, assuming that irregularities in the velocity of contraction in the mass forming the black hole would allow it to continue to collapse beyond its event horizon is not justified by the
conceptual foundations of the General Theory of Relativity.
This means either the conceptual ideas developed by Einstein are incorrect, or there must be an alternative solution to the field equations that many physicists have used to predict the existence of
singularities, because, as has just been shown, the mathematical predictions made regarding their existence contradict the theory's conceptual framework.
In other words, just because we have observationally verified the existence of black holes, which were predicted by equations derived from Einstein's theory, does not mean that a singularity at their
centers is an inevitable outcome of his General Theory of Relativity.
Later Jeff
Copyright Jeffrey O’Callaghan 2013
The Imagineer’s The Imagineer’s
Chronicles Chronicles
Vol. 4 — 2013 Vol. 3 — 2012
Paperback Paperback
$13.29 $10.96
Ebook Ebook
$7.99 $6.55
The Reality of the Fourth Spatial Dimension
The Imagineer’s
Vol. 2 — 2011
« Previous Articles | {"url":"http://www.theimagineershome.com/blog/?cat=27","timestamp":"2014-04-19T17:23:23Z","content_type":null,"content_length":"84711","record_id":"<urn:uuid:49d0ba75-b7f7-42b8-95c2-b5b9cd7ce81a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum Gravity and String Theory
1008 Submissions
[3] viXra:1008.0015 [pdf] replaced on 2013-06-12 17:04:07
Arithmetic Information in Particle Mixing
Authors: M. D. Sheppeard
Comments: 8 Pages.
Quantum information theory motivates certain choices for parameterizations of the CKM and MNS mixing matrices. In particular, we consider the rephasing invariant parameterization of Kuo and Lee,
which is given by a sum of real circulants. Noting the relation of this parameterization to complex rotation matrices, we find a potential reduction in the degrees of freedom required for the CKM matrix.
Category: Quantum Gravity and String Theory
[2] viXra:1008.0008 [pdf] replaced on 18 Oct 2010
What Causes The Mass To Be Deficit Inside A Nucleus?
Authors: Karunakar Marasakatla
Comments: 7 Pages. This article has been published in the Focus Issue (Part-II) of Prespacetime Journal on Cosmology & Gravity (Vol. 1, Issue. 9), pp. 1418-1424, November 2010.
There is ample amount of ambiguity regarding the concept of mass in present principles of physics. The mass of a gas nebula will be measured as the combined mass of all the atoms within that nebula.
The only option for the measurement of mass of the same nebula when it collapses to a neutron star is by combining the mass of all the neutron particles. These two values of mass for the same object
will never be the same. This is, in fact, against the definition of mass which states that the mass of an object is a fixed amount irrespective of the size of the object. It appears that our
understanding of mass and the way we measure it is flawed. All the observations demonstrate that there will be deficit or gain in the mass of an object when there is a change in the volume of that
object. An object measures more mass when the volume of the object was decreased. A neutron star is a compact form of the gas nebula from which it was collapsed, therefore the neutron star measures
more mass or gravity than the gas nebula. A nucleus measures more gravity when all the particles were packed together in a small volume. The cause for the deficit of mass inside a nucleus is the
increase in volume in which the particles were occupied.
Category: Quantum Gravity and String Theory
[1] viXra:1008.0005 [pdf] submitted on 3 Aug 2010
The Weight Change of a Body in Connection with the Electrical Tension
Authors: H.-J. Hochecker
Comments: 3 Pages.
I stick metal foil on a 1-square-meter, 10-kilogram dielectric so that a condenser is built. By putting a tension of 10 kV on the condenser, or by short-circuiting it, I measure a
real reduction of the weight of about 0.1 grams each time. This result confirms my theoretical conclusion, which says that the weight of a body is related to the movements of its charges (protons
and electrons). Keywords: Gravitation, movement of electrical charges, relativity
Category: Quantum Gravity and String Theory | {"url":"http://vixra.org/qgst/1008","timestamp":"2014-04-16T13:07:20Z","content_type":null,"content_length":"6762","record_id":"<urn:uuid:8a8fd1d1-a35c-41ea-99cd-0736a4463a21>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - 4-volume is a rank (0,0) tensor?
Originally posted by matt grime
I'm not sure my interpretation is correct, but in differential forms, d^2 is identically zero, so d^4(x) is zero.
d^4x is shorthand for the invariant measure in Minkowski 4-space. It is the volume 4-form, not the exterior derivative applied 4 times.
You can explicitly show that it is invariant just by applying the change of variables formula (involving a Jacobian, as turin correctly suggests).
HPL_dtrsv x := A^{-1} x.
#include "hpl.h"
void HPL_dtrsv( const enum HPL_ORDER ORDER, const enum HPL_UPLO UPLO, const enum HPL_TRANS TRANS, const enum HPL_DIAG DIAG, const int N, const double * A, const int LDA, double * X, const int INCX );
HPL_dtrsv solves one of the systems of equations A * x = b, or A^T * x = b, where b and x are n-element vectors and A is an n by n non-unit, or unit, upper or lower triangular matrix. No test for
singularity or near-singularity is included in this routine. Such tests must be performed before calling this routine.
ORDER (local input) const enum HPL_ORDER
On entry, ORDER specifies the storage format of the operands
as follows:
ORDER = HplRowMajor,
ORDER = HplColumnMajor.
UPLO (local input) const enum HPL_UPLO
On entry, UPLO specifies whether the upper or lower
triangular part of the array A is to be referenced. When
UPLO==HplUpper, only the upper triangular part of A is to be
referenced, otherwise only the lower triangular part of A is
to be referenced.
TRANS (local input) const enum HPL_TRANS
On entry, TRANS specifies the equations to be solved as
TRANS==HplNoTrans A * x = b,
TRANS==HplTrans A^T * x = b.
DIAG (local input) const enum HPL_DIAG
On entry, DIAG specifies whether A is unit triangular or
not. When DIAG==HplUnit, A is assumed to be unit triangular,
and otherwise, A is not assumed to be unit triangular.
N (local input) const int
On entry, N specifies the order of the matrix A. N must be at
least zero.
A (local input) const double *
On entry, A points to an array of size equal to or greater
than LDA * n. Before entry with UPLO==HplUpper, the leading
n by n upper triangular part of the array A must contain the
upper triangular matrix and the strictly lower triangular
part of A is not referenced. When UPLO==HplLower on entry,
the leading n by n lower triangular part of the array A must
contain the lower triangular matrix and the strictly upper
triangular part of A is not referenced.
Note that when DIAG==HplUnit, the diagonal elements of A are
not referenced either, but are assumed to be unity.
LDA (local input) const int
On entry, LDA specifies the leading dimension of A as
declared in the calling (sub) program. LDA must be at
least MAX(1,n).
X (local input/output) double *
On entry, X is an incremented array of dimension at least
( 1 + ( n - 1 ) * abs( INCX ) ) that contains the vector x.
Before entry, the incremented array X must contain the n
element right-hand side vector b. On exit, X is overwritten
with the solution vector x.
INCX (local input) const int
On entry, INCX specifies the increment for the elements of X.
INCX must not be zero.
#include "hpl.h"
int main(int argc, char *argv[])
{
   double a[2*2], x[2];
   /* Column-major 2x2 array; the lower triangular part is [[4,0],[1,5]]. */
   a[0] = 4.0; a[1] = 1.0; a[2] = 2.0; a[3] = 5.0;
   /* Right-hand side b = [2,1]; overwritten with the solution x. */
   x[0] = 2.0; x[1] = 1.0;
   HPL_dtrsv( HplColumnMajor, HplLower, HplNoTrans,
              HplNonUnit, 2, a, 2, x, 1 );
   printf("x=[%f,%f]\n", x[0], x[1]);
   exit(0); return(0);
}
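For illustration, here is a sketch (in Python, not part of HPL; the function name is made up) of what the lower-triangular, no-transpose, non-unit-diagonal case computes: forward substitution on a column-major array.

```python
def trsv_lower_notrans(a, x, n, lda):
    """Solve L*x = b in place: a is column-major, diagonal is non-unit."""
    for j in range(n):
        x[j] /= a[j + j * lda]               # divide by the diagonal entry
        for i in range(j + 1, n):
            x[i] -= a[i + j * lda] * x[j]    # eliminate column j below the diagonal
    return x

# Same data as the C example above: lower part of a is [[4,0],[1,5]], b = [2,1].
print(trsv_lower_notrans([4.0, 1.0, 2.0, 5.0], [2.0, 1.0], 2, 2))  # [0.5, 0.1]
```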
See Also
HPL_dger, HPL_dgemv. | {"url":"http://www.netlib.org/benchmark/hpl/HPL_dtrsv.html","timestamp":"2014-04-19T12:04:40Z","content_type":null,"content_length":"4785","record_id":"<urn:uuid:081e732a-fa50-4dda-b7c9-f619ea0c5794>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
cdf ratio and mean estimation
November 24th 2008, 02:25 AM #1
Nov 2008
cdf ratio and mean estimation
Dear all,
I need to know if is it possible to find the mean of a normal distribution knowing the ratio of the area under the "bell" curve on the left and on the right of a given point.
To be more precise I need to solve the following equation
$\frac{1}{\sigma \sqrt{2\pi }}\int_{x=\bar{X}}^{+\infty }\exp \left( \frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx=\frac{k}{\sigma \sqrt{2\pi }}\int_{x=-\infty}^{\bar{X} }\exp \left( \frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx$
where I know the ratio $k$ and the variance $\sigma ^{2}$, and the point $\bar{X}$ is fixed. I need to find the mean $\mu$ of the distribution that achieves the ratio $k$ between the two portions of the area.
thanks in advance.
hey mate, just so I'm reading your question correctly, you have an integral equation of the form
int(f(x)) = k*int(g(x)) and you want to solve for k?
my problem is in the form
$\int_{x=\bar{X}}^{+\infty }f(\mu,x) dx=k\int_{x=-\infty}^{\bar{X} }f(\mu,x) dx$
and I want to solve it for $\mu$. I already know $k$ and $\bar{X}$.
Hey Simo,
are you sure the integrand in both integrals is exp( ( (x - mu)/(sqrt(2)*sigma) )^2 )? Shouldn't it be exp( - ( (x - mu)/(sqrt(2)*sigma) )^2 )?
you are right...
typo and cut-and-paste error!
the correct equation is
$\frac{1}{\sigma \sqrt{2\pi }}\int_{x=\bar{X}}^{+\infty }\exp \left( -\frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx=\frac{k}{\sigma \sqrt{2\pi }}\int_{x=-\infty}^{\bar{X} }\exp \left( -\frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx$
of course, it is the pdf of a normal distribution.
I have already solved the problem...
how did you get it out?
actually...solving the integral equation...
$\frac{1}{\sigma \sqrt{2\pi }}\int_{x=\bar{X}}^{+\infty }\exp \left( -\frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx=\frac{K}{\sigma \sqrt{2\pi }}\int_{-\infty }^{x=\bar{X}}\exp \left( -\frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx$
with the new variable
$y=\frac{x-\mu }{\sigma \sqrt{2}}$
the former becomes
$\int_{y=-\infty }^{\frac{\bar{X}-\mu }{\sigma \sqrt{2}}}\exp(-y^{2})dy=K\int_{y=\frac{\bar{X}-\mu }{\sigma \sqrt{2}}}^{+\infty }\exp(-y^{2})dy$
(note that in the previous the constant terms that appear on both sides are omitted)
solving the integrals become
${erf}(\frac{\bar{X}-\mu }{\sigma \sqrt{2}})-(-1)=K\left( (+1)-{erf}(\frac{\bar{X}-\mu }{\sigma \sqrt{2}})\right)$
${erf}(\frac{\bar{X}-\mu }{\sigma \sqrt{2}})=\frac{K-1}{K+1}$
$\mu =\bar{X}-{erf}^{-1}\left( \frac{K-1}{K+1}\right) \sigma \sqrt{2}$
at the end, it was quite straightforward.
i've tested it numerically...works.
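For anyone wanting to reproduce the check, here is a stdlib-only Python sketch (the helper names are ours, not from the thread); it inverts erf by bisection instead of relying on a library erfinv:

```python
import math

def erfinv(t):
    """Invert erf by bisection on [-6, 6] (erf is monotone increasing)."""
    lo, hi = -6.0, 6.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def solve_mu(k, sigma, x_bar):
    """mu = x_bar - sqrt(2) * sigma * erfinv((k - 1) / (k + 1)), per the thread."""
    return x_bar - math.sqrt(2.0) * sigma * erfinv((k - 1.0) / (k + 1.0))

# check: with this mu, P(X < x_bar) / P(X > x_bar) should equal k
k, sigma, x_bar = 3.0, 2.0, 1.0
mu = solve_mu(k, sigma, x_bar)
z = (x_bar - mu) / (sigma * math.sqrt(2.0))
left = 0.5 * (1.0 + math.erf(z))   # Phi evaluated at x_bar
print(left / (1.0 - left))         # ratio, approximately 3
```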
hey mate,
exact solution I had, however I was trying to find an analytical expression for inv_erf(x); do you think it exists? I'm gonna keep working at it!
great problem though, I'm considering a Laplace transform technique... hopefully it will point me in the right direction
i.e. you have
erf( (x-mu)/(sqrt(2)*sigma) ) = (K - 1)/(K + 1)
this is obviously an integral equation which when utilising Laplace (or Fourier transforms) may yield an analytical solution.
Cheers again for the problem, really enjoyed working on it!
ps - I do have one discrepancy with your solution,
I have erf(...) = (1-K)/(1+K)
on your second line; are you sure you have the K on the correct side?
to the best of my knowledge, there is no closed form for the inverse error function. I've looked in some texts but the only representation was the same one you can find in Wikipedia:
Error function - Wikipedia, the free encyclopedia
...a series expansion...
at the moment, for my work it's fine, but if you go ahead with the work...good luck!
Cool, no worries. I've played around with it a bit and I've gotten to the point of not caring. Just one other thing: are you sure about the K-1 as opposed to 1-K? Following your original problem and
your solution, I think you may have the K on the wrong side of the equation (line after 'introduce new variable (x-mu)/(sqrt(2)*sigma)').
I hope you don't take offence; I've just played with the algebra a few times and always end up with 1 - K.
Keep posts like this coming!
ok, I've made another mistake...
the equation that I've actually solved is
$\frac{1}{\sigma \sqrt{2\pi }}\int_{-\infty }^{x=\bar{X}}\exp \left( -\frac{\left( x-\mu \right)^{2}}{2\sigma ^{2}}\right) dx = \frac{K}{\sigma \sqrt{2\pi }}\int_{x=\bar{X}}^{+\infty }\exp \left( -\frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx$
instead of
$\frac{1}{\sigma \sqrt{2\pi }}\int_{x=\bar{X}}^{+\infty }\exp \left( -\frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx=\frac{K}{\sigma \sqrt{2\pi }}\int_{-\infty }^{x=\bar{X}}\exp \left( -\frac{\left( x-\mu \right) ^{2}}{2\sigma ^{2}}\right) dx$
that was the first one posted and pasted in the post with the scratch of the solution...I've switched the integrals
| {"url":"http://mathhelpforum.com/advanced-statistics/61320-cdf-ratio-mean-estimation.html","timestamp":"2014-04-20T06:28:28Z","content_type":null,"content_length":"65641","record_id":"<urn:uuid:2e35b0b5-a6a5-4c95-a3c4-13b6297e9dda>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2009/304
Factor-4 and 6 Compression of Cyclotomic Subgroups
Authors: Koray Karabina
Abstract: Bilinear pairings derived from supersingular elliptic curves of embedding degrees 4 and 6 over finite fields of characteristic two and three, respectively, have been used to implement pairing-based cryptographic protocols. The pairing values lie in certain prime-order subgroups of certain cyclotomic subgroups. It was previously known how to compress the pairing values over characteristic two fields by a factor of 2, and the pairing values over characteristic three fields by a factor of 6. In this paper, we show how the pairing values over characteristic two fields can be compressed by a factor of 4. Moreover, we present and compare several algorithms for performing exponentiation in the prime-order subgroups using the compressed representations. In particular, in the case where the base is fixed, we expect to gain at least a 54% speed up over the fastest previously known exponentiation algorithm that uses factor-6 compressed representations.
Category / Keywords: Finite field compression, cyclotomic subgroups, pairing-based cryptography
Date: received 23 Jun 2009, last revised 27 Apr 2010
Contact author: kkarabin at uwaterloo ca
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20100427:195835 (All versions of this report)
| {"url":"http://eprint.iacr.org/2009/304/20100427:195835","timestamp":"2014-04-18T13:10:49Z","content_type":null,"content_length":"2940","record_id":"<urn:uuid:4203d6eb-cb2f-42af-85fa-b9d29d21012f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-Dev] Resolving PR 235: t-statistic = 0/0 case
Skipper Seabold jsseabold@gmail....
Wed Jun 6 16:35:58 CDT 2012
On Wed, Jun 6, 2012 at 5:18 PM, Junkshops <junkshops@gmail.com> wrote:
> Hi Nathaniel,
> At the outset, I'll just say that if the consensus is that we should
> return NaN, I'll accept that. I'll still try and argue my case though.
>> My R seems to throw an exception whenever the variance is zero
>> (regardless of the mean difference), not return NaN:
> Sorry, yes, that's correct.
>> Like any parametric test, the t-test only makes sense under some kind
>> of (at least approximate) assumptions about the data generating
>> process. When the sample variance is 0, then those assumptions are
>> clearly violated,
> So this seems similar to argument J2, and I still don't understand it.
> Let's say we assume our population data is normally distributed and we
> take three samples from the population and get [1,1,1]. How does that
> prove our assumption is incorrect? It's certainly possible to pull the
> same number three times from a normal distribution.
How do you justify that 3 empirical observations [1,1,1] come from a
normal distribution? If you have enough data for the central limit
theorem to come into play, and your variance is still 0, this is so
unlikely that I think the consequences of *possibly* incorrectly
returning NaN here would be small. If you're simulating data from a
known distribution, take another draw...
>> and it doesn't seem appropriate to me to start
>> making up numbers according to some other rule that we hope might give
>> some sort-of appropriate result ("In the face of ambiguity, refuse the
>> temptation to guess."). So I actually like the R/Matlab option of
>> throwing an exception or returning NaN.
> Well, we're not making up numbers here - we absolutely know the means
> are the same. Hence p = 1 and t = 0.
But what we don't know is if the test is even appropriate, so why not
be cautious and return NaN. It's very easy for a user to make the
decision that NaN implies p = 1, if that's what you want to have.
This doesn't seem to be of all that much practical importance. In what
situation do you expect this to really matter?
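For readers following along, here is a minimal stdlib-only sketch (not SciPy's implementation; the helper name is made up) of the zero-denominator case under discussion:

```python
import math

def t_statistic(a, b):
    """Equal-variance two-sample t statistic; returns nan when the pooled
    standard error is zero (the 0/0 and x/0 cases discussed above)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    denom = sp * math.sqrt(1.0 / na + 1.0 / nb)
    if denom == 0.0:
        return float('nan')   # zero variance in both samples
    return (ma - mb) / denom

print(t_statistic([1, 1, 1], [1, 1, 1]))  # nan
```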
More information about the SciPy-Dev mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-dev/2012-June/017646.html","timestamp":"2014-04-17T18:47:22Z","content_type":null,"content_length":"5054","record_id":"<urn:uuid:8f870463-8850-46d2-a595-eec6ba8c3110>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
Categories, Types, and Structures. Foundations of Computing Series
, 1994
"... There is a strong request for GIS to include temporal information. Most efforts currently are addressing the incorporation of time qua calendar time. Events are dated according to the ordinary
time and calendar, which are effectively measurements on an interval scale. Temporal information available ..."
Cited by 11 (5 self)
There is a strong request for GIS to include temporal information. Most efforts currently are addressing the incorporation of time qua calendar time. Events are dated according to the ordinary time
and calendar, which are effectively measurements on an interval scale. Temporal information available only as relative order between events cannot be incorporated in this framework. Clearly knowledge
about temporal order without measurement on the time scale is less precise but nevertheless useful. Human beings use qualitative temporal reasoning all the time. Qualitative ordinal information about
events is typically encountered in archeology, urban development etc. where precise dates for events are not known but the relative order of events can be deduced from observations. Even in legal
proceedings about parcel data, ordinal relations are often all what matters. These are among the disciplines which have asked for the inclusion of facilities to deal with temporal data in GIS. This
paper gi...
, 1998
"... It was already known that the category of T 0 topological spaces is not itself cartesian closed, but can be embedded into the cartesian closed categories FIL of filter spaces and EQU of
equilogical spaces where the latter embeds into the cartesian closed category ASSM of assemblies over algebraic la ..."
Cited by 6 (3 self)
It was already known that the category of T 0 topological spaces is not itself cartesian closed, but can be embedded into the cartesian closed categories FIL of filter spaces and EQU of equilogical
spaces where the latter embeds into the cartesian closed category ASSM of assemblies over algebraic lattices. Here, we first clarify the notion of filter space---there are at least three versions FIL
a ' FIL b ' FIL c in the literature. We establish adjunctions between FIL a and ASSM and between FIL c and ASSM, and show that FIL b and FIL c are equivalent to reflective full subcategories of ASSM.
The corresponding categories FIL b 0 and FIL c 0 of T 0 spaces are equivalent to full subcategories of EQU. Keywords: Categorical models and logics, domain theory and applications Author's address:
Reinhold Heckmann, FB 14 -- Informatik, Universitat des Saarlandes, Postfach 151150, D-66041 Saarbrucken, Germany Phone: +49 681 302 2454 Fax: +49 681 302 3065 e-mail: heckmann@cs.un...
- In CAAP'92 , 1998
"... Following the program of Moggi, the semantics of a simple non-deterministic functional language with recursion and failure is described by a monad. We show that this monad cannot be any of the
known power domain constructions, because they do not handle non-termination properly. Instead, a novel con ..."
Cited by 3 (1 self)
Following the program of Moggi, the semantics of a simple non-deterministic functional language with recursion and failure is described by a monad. We show that this monad cannot be any of the known
power domain constructions, because they do not handle non-termination properly. Instead, a novel construction is proposed and investigated. It embodies both nondeterminism (choice and failure) and
possible non-termination caused by recursion. 1 Introduction Following the proposals of Moggi [Mog89, Mog91b], functional languages with various notions of computations can be denotationally
described by means of monads. Monads are constructions mapping domains of values into domains of computations for these values. Computations involving destructive assignments, for instance, are
handled by the state transformer monad [Wad90], whereas computations by non-deterministic choice are handled by power domain constructions [Plo76, Smy78, Gun90]. All known power domain constructions
and many others ar...
, 1998
"... ions are defined both for objects and layers. There are several compatibility requirements for the definition of these functions. The set of objects contains a specific ?-element which allows
the source and target functions to be total on the set of objects. Up to now there exists no implementation ..."
Cited by 2 (0 self)
ions are defined both for objects and layers. There are several compatibility requirements for the definition of these functions. The set of objects contains a specific ?-element which allows the
source and target functions to be total on the set of objects. Up to now there exists no implementation of general colimits in the AGG-system. This problem is currently fixed by the integration of
the colimit library. Again we can use the colimit computation for Alpha algebras. For this purpose we have to find an Alpha representation of AGG-graphs. Here we will outline the idea. r0 r1 r2
object layer label v0 r4 r0 Item Data The picture above presents a possible Alpha type algebra for AGG-graphs. r 0 ; r 1 and r 2 correspond to the abstraction, source and target functions, r 4
represents the assignment of layers to objects and v 0 is the labelling function. Note that although not shown in the picture, since all references are total, r 1 ; r 2 and r 3 are defined also for
layer. This shows ...
- Proceedings of CSL 2000, Springer LNCS Volume 1862 , 2000
"... . We present a linear realizability technique for building Partial Equivalence Relations (PER) categories over Linear Combinatory Algebras. These PER categories turn out to be linear categories
and to form an adjoint model with their co-Kleisli categories. We show that a special linear combinato ..."
Cited by 2 (1 self)
. We present a linear realizability technique for building Partial Equivalence Relations (PER) categories over Linear Combinatory Algebras. These PER categories turn out to be linear categories and
to form an adjoint model with their co-Kleisli categories. We show that a special linear combinatory algebra of partial involutions, arising from Geometry of Interaction constructions, gives rise to
a fully and faithfully complete model for ML polymorphic types of system F. Keywords: ML-polymorphic types, linear logic, PER models, Geometry of Interaction, full completeness. Introduction
Recently, Game Semantics has been used to define fully-complete models for various fragments of Linear Logic ([AJ94a,AM99]), and to give fully-abstract models for many programming languages,
including PCF [AJM96,HO96,Nic94], richer functional languages [McC96], and languages with non-functional features such as reference types and non-local control constructs [AM97,Lai97]. All these
results are cru...
, 1996
"... . A formal framework based on algebraic graph theory is presented that integrates specification and construction of dynamics in information systems. Specifications are based on temporal logic
whose semantics is given by algebras and partial homomorphisms. Constructions are given by graph transformat ..."
. A formal framework based on algebraic graph theory is presented that integrates specification and construction of dynamics in information systems. Specifications are based on temporal logic whose
semantics is given by algebras and partial homomorphisms. Constructions are given by graph transformation rules whose operational nature provides a first step towards actual implementations. Both are
related by a correctness notion. The formal framework is especially suited as a semantical basis for graphical notations as used in conceptual modeling, thus combining intuitiveness of such notations
with precision of formal methods. 1 Introduction Conceptual modeling, i.e., specification of information systems and database applications is traditionally done by employing semantic data models [26,
27, 15, 23]. Although structure and behavior of information systems must be taken into account, conceptual modeling classically focuses on the structural aspects only, for instance by using the
, 2001
"... Building the access pointers to a computation environment † ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2791426","timestamp":"2014-04-19T23:08:52Z","content_type":null,"content_length":"29216","record_id":"<urn:uuid:d25b6570-60b7-468e-9c1e-95631128348d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inverse Functions
7.11: Inverse Functions
Difficulty Level:
At Grade
Created by: CK-12
A planet's maximum distance from the sun (in astronomical units) is given by the formula $d = p^{\frac {2}{3}}$, where $p$ is the period (in years) of the planet's orbit around the sun. What is the inverse of this function?
this function?
By now, you are probably familiar with the term "inverse." Multiplication and division are inverses of each other; more examples are addition and subtraction, and the square and square root. We are
going to extend this idea to functions. An inverse relation maps the output values to the input values to create another relation. In other words, we switch the $x$ and $y$ values.
Example A
Find the inverse mapping of $S=\left \{(6, -1), (-2, -5), (-3, 4), (0, 3), (2, 2)\right \}$
Solution: Here, we will find the inverse of this relation by mapping it over the line $y=x$. The inverse of $S$, written $S^{-1}$ and read "$S$ inverse," is found by switching the $x$ and $y$ coordinates of each point in $S$.
$S^{-1}=\left \{(-1, 6), (-5, -2), (4, -3), (3, 0), (2, 2)\right \}$
If we plot the two relations on the $x-y$ plane, we have:
The blue points are all the points in $S$ and the red points are all the points in $S^{-1}$. Notice that the points in $S^{-1}$ are reflections of the points in $S$ over the line $y=x$.
If we were to fold the graph on $y=x$, the points in $S^{-1}$ would land on the points in $S$. The point $(2, 2)$ lies on the line $y=x$, so it maps to itself.
Domain of $S$: $x \in \left \{6, -2, -3, 0, 2\right \}$
Range of $S$: $y \in \left \{-1, -5, 4, 3, 2\right \}$
Domain of $S^{-1}$: $x \in \left \{-1, -5, 4, 3, 2\right \}$
Range of $S^{-1}$: $y \in \left \{6, -2, -3, 0, 2\right \}$
By looking at the domains and ranges of $S$ and $S^{-1}$, we see that they are switched: the domain of $S$ is the range of $S^{-1}$, and the range of $S$ is the domain of $S^{-1}$. Because every $x$ value pairs with exactly one $y$ value and vice versa, $S$ is a one-to-one function. Each value maps one unique value onto another unique value.
Example B
Find the inverse of $f(x)= \frac{2}{3}x-1$
Solution: This is a linear function. Let's solve by doing a little investigation. First, draw the line along with $y=x$.
Notice the points on the function (blue line). Map these points over the line $y=x$ by switching their $x$ and $y$ values; each reflected point lies the same distance from $y=x$ as the original, on the other side.
The red line in the graph to the right is the inverse of $f(x)= \frac{2}{3}x-1$. Its slope is $\frac{3}{2}$. Using the reflected point $(-1, 0)$ (from $(0, -1)$ on $f$), we can solve for the $y$-intercept, $b$:
$f^{-1}(x)&= \frac{3}{2}x+b \\0&= \frac{3}{2}(-1)+b \\\frac{3}{2}&=b$
The equation of the inverse, read "$f$ inverse," is $f^{-1}(x)= \frac{3}{2}x+ \frac{3}{2}$.
You may have noticed that the slopes of $f$ and $f^{-1}$ are reciprocals of each other. This happens because the $x$ and $y$ values are switched, so the rise and run of $f$ become the run and rise of $f^{-1}$.
Alternate Method: There is also an algebraic approach to finding the inverse of any function. Let’s repeat this example using algebra.
1. Change $f(x)$ to $y$.
$y= \frac{2}{3}x-1$
2. Switch the $x$ and $y$ values. Also, change $y$ to $y^{-1}$ to indicate the inverse.
$x= \frac{2}{3}y^{-1}-1$
3. Solve for $y^{-1}$:
$x&= \frac{2}{3}y^{-1}-1 \\\frac{3}{2}(x+1)&= \frac{3}{2} \cdot \left(\frac{2}{3}y^{-1} \right) \\\frac{3}{2}x+ \frac{3}{2}&=y^{-1}$
The algebraic method will work for any type of function.
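As a quick numerical sanity check (a Python sketch, not part of the lesson), we can verify that composing the function from Example B with the inverse found above returns the input:

```python
# f from Example B and the inverse computed above
f = lambda x: (2.0 / 3.0) * x - 1.0
f_inv = lambda x: (3.0 / 2.0) * x + 3.0 / 2.0

# composing in either order should return the input (up to rounding)
for x in [-4.0, 0.0, 2.5, 7.0]:
    assert abs(f(f_inv(x)) - x) < 1e-12
    assert abs(f_inv(f(x)) - x) < 1e-12
```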
Example C
Determine if $g(x)=\sqrt{x-2}$ and $f(x)=x^2+2$ are inverses of each other.
Solution: There are two different ways to determine if two functions are inverses of each other. The first is to find $f^{-1}$ and $g^{-1}$ and check whether $f^{-1}=g$ and $g^{-1}=f$.
$x&= \sqrt{y^{-1}-2}&& && \qquad \quad \ \ x=(y^{-1})^2+2 \\x^2&=y^{-1}-2 && and && \qquad x-2=(y^{-1})^2 \\x^2+2&=y^{-1}=g^{-1}(x)&& && \pm \sqrt{x-2}=y^{-1}=f^{-1}(x)$
Notice the $\pm$ sign on $f^{-1}$. Unlike $g^{-1}$, which is a single function, $f^{-1}$ has two branches, $\sqrt{x-2}$ and $- \sqrt{x-2}$.
Therefore, $f^{-1}$ is not a function, even though it is the reflection of $f(x)$ over the line $y=x$.
The inverse of $g$ is $f$, but the inverse of $f$ is not $g$.
Alternate Method: The second, and easier, way to determine if two functions are inverses of each other is to use composition. If $f \circ g=g \circ f=x$, then $f$ and $g$ are inverses of each other; each function undoes the other and returns the input $x$.
$f \circ g= \sqrt{\left(x^2+2\right)-2}= \sqrt{x^2}=x$
$g \circ f= \left(\sqrt{x-2}\right)^2+2=x-2+2=x$
Because $f \circ g=g \circ f=x$, $f$ and $g$ are inverses of each other. If only $f \circ g=x$ or only $g \circ f=x$ held, $f$ and $g$ would not be inverses.
Intro Problem Revisit: In the function $d = p^{\frac{2}{3}}$, $d$ is the equivalent of $y$ and $p$ is the equivalent of $x$. So rewrite the equation and follow the step-by-step process.
$y = x^{\frac {2}{3}}$
2. Switch the $x$ and $y$. Write the new $y$ as $y^{-1}$.
$x = (y^{-1})^{\frac {2}{3}}$
3. Solve for $y^{-1}$.
$x = (y^{-1})^{\frac {2}{3}}$
$x^{\frac{3}{2}} = (y^{-1})^{\frac {2}{3}\cdot \frac{3}{2}}$
$x^{\frac{3}{2}} = y^{-1}$
Now replace $y$ and $x$ with $d$ and $p$. The inverse is $d = p^{\frac{3}{2}}$.
Guided Practice
1. Find the inverse of $g(x)=- \frac{3}{4}x+12$
2. Find the inverse of $f(x)=2x^3+5$
3. Determine if $h(x)=4x^4-7$ and $j(x)= \frac{1}{4} \sqrt[4]{x+7}$ are inverses of each other.
1. Use the steps given in the Alternate Method for Example B.
$y=- \frac{3}{4}x+12$
$x=- \frac{3}{4}y^{-1}+12$
$x-12=- \frac{3}{4}y^{-1}$
$- \frac{4}{3}(x-12)=y^{-1}$
$g^{-1}(x)=- \frac{4}{3}x+16$
2. Again, use the steps from Example B.
$y=2x^3+5$
$x=2(y^{-1})^3+5$
$x-5=2(y^{-1})^3$
$\frac{x-5}{2}=(y^{-1})^3$
$f^{-1}(x)= \sqrt[3]{\frac{x-5}{2}}$
Yes, $f^{-1}$ is a function.
3. First, find $h(j(x))$
$h(j(x))=4 \left(\frac{1}{4} \sqrt[4]{x+7}\right)^4-7$
$=4 \cdot \left(\frac{1}{4}\right)^4 (x+7)-7$
$= \frac{1}{64}(x+7)-7$
$= \frac{1}{64}x- \frac{441}{64}$
Because $h(j(x)) \ne x$, $h$ and $j$ are not inverses of each other. There is no need to check $j(h(x))$.
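The failed composition can also be seen numerically (an illustration of my own, using the $x+7$ form of $j$ from the computation above):

```python
def h(x):
    return 4 * x**4 - 7

def j(x):
    return 0.25 * (x + 7) ** 0.25

# If h and j were inverses, h(j(x)) would return x.
print(h(j(9)))  # → -6.75, not 9, so h and j are not inverses
```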
Inverse Relation/Function
When a relation or function’s output values are mapped to create input values for a new relation (or function). The input values of the original function would become the output values for the
new relation (or function).
One-to-one Function
When the inverse of a function is also a function.
Problem Set
Write the inverses of the following functions. State whether or not the inverse is a function.
1. $(2, 3), (-4, 8), (-5, 9), (1, 1)$
2. $(9, -6), (8, -5), (7, 3), (4, 3)$
Find the inverses of the following functions algebraically. Note any restrictions to the domain of the inverse functions.
3. $f(x)=6x-9$
4. $f(x)= \frac{1}{4x+3}$
5. $f(x)= \sqrt{x+7}$
6. $f(x)=x^2+5$
7. $f(x)=x^3-11$
8. $f(x)= \sqrt[5]{x+16}$
Determine whether $f$ and $g$ are inverses of each other by checking whether $f \circ g=x$ and $g \circ f=x$.
9. $f(x)= \frac{2}{3}x-14$ and $g(x)= \frac{3}{2}x+21$
10. $f(x)= \frac{x+5}{8}$ and $g(x)=8x+5$
11. $f(x)= \sqrt[3]{3x-7}$ and $g(x)= \frac{x^3}{3}-7$
12. $f(x)= \frac{x}{x-9},x \ne 9$ and $g(x)= \frac{9x}{x-1}$
Find the inverses of the following functions algebraically. Note any restrictions to the domain of the inverse functions. These problems are a little trickier as you will need to factor out the $y$ to isolate it. Here is an example of the steps:
• Switch the $x$ and $y$: $x=\frac{3y+13}{2y-11}$
• Multiply both sides by $2y-11$: $2xy-11x=3y+13$
• Move the terms containing $y$ to one side: $2xy-3y=11x+13$
• Factor out the $y$: $y(2x-3)=11x+13$
• Divide both sides by $2x-3$ to isolate $y$: $y= \frac{11x+13}{2x-3}$
So, the inverse of $f(x)= \frac{3x+13}{2x-11},x \ne \frac{11}{2}$ is $f^{-1}(x)= \frac{11x+13}{2x-3},x \ne \frac{3}{2}$.
13. $f(x)= \frac{x+7}{x},x \ne 0$
14. $f(x)= \frac{x}{x-8},x \ne 8$
Multi-step problem
15. In many countries, the temperature is measured in degrees Celsius. In the US we typically use degrees Fahrenheit. For travelers, it is helpful to be able to convert from one unit of measure to
another. The following problem will help you learn to do this using an inverse function.
1. The temperature at which water freezes will give us one point on a line in which $x$ represents the degrees Celsius and $y$ the degrees Fahrenheit. The conversion equation is $y= \frac{9}{5}x+32$, or $F= \frac{9}{5}C+32$.
2. Find the inverse of the equation above by solving for $C$
3. Show that your inverse is correct by showing that the composition of the two functions simplifies to either $F$ or $C$.
| {"url":"http://www.ck12.org/book/CK-12-Algebra-II-with-Trigonometry-Concepts/r1/section/7.11/","timestamp":"2014-04-20T11:36:41Z","content_type":null,"content_length":"165023","record_id":"<urn:uuid:c1804755-7931-42c9-bd9d-b72bfd72f5f9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Pressure & Water
PRESSURE AND WATER
If you will recall, the pressure of the atmosphere may be measured in pounds per square inch (psi) or in atmospheres (atm). There are 14.7 pounds of air on every square inch at sea level, or 1
atmosphere. Again, the pressure is caused by billions of moving molecules bombarding each square inch. Most of those molecules are nitrogen (78%), and the rest are oxygen (21%).
A diver is ready to enter the water and is standing at the beach with the ambient (surrounding) pressure at 1 atm due to the 15 miles of air overhead. He or she enters the water and descends to 33'
(34' in fresh water) where the pressure is 2 atmospheres (29.4 psi). One of those atmospheres is caused by the air and the other from the water. Divers refer to the total pressure (2 atm.) as,
"absolute pressure." When one refers to the pressure caused only by the water it is called, "gauge pressure." Therefore, the absolute pressure at 33' in seawater is 2 atm, and the gauge pressure is 1
atm. because the air is ignored. Likewise, the gauge pressure at 66' would be 2 atmospheres and the absolute pressure would be 3.
The average male human being has about 2800 square inches on the outside of their body. If one considers there are 14.7 pounds of molecules pounding each square inch, the total pressure works out to
be 21 tons. We should be crushed to death. Fortunately there are molecules inside our body that push out with an almost equal force. We are stabilized and don't even know it. In reality, the internal
molecules found in the blood and tissues push with a greater force outward because of the power of the pump (heart). That is why the flow is outward when a knife wound occurs. (Things flow from high
pressure to low.) Blood pressure, like 120/80, is measured in millimeters of mercury. Sea level pressure is 760 mm, which equals 14.7 psi. So, 120 mm is about 2.4 psi and 80 mm is 1.6 psi. When your
heart pumps it creates a pressure of 2.4 psi, and when it is "resting" the pressure drops to 1.6 psi.
While we are at it, let's see how depth under the sea is related to millimeters of mercury (Hg). At sea level the average atmospheric pressure is 29.91" Hg. That is equivalent to 760mm or 76.0cm.
(Check a ruler that has inches next to mm or cm just to see the 760mm is next to 29.91".) If 33' of seawater is 1 atmosphere then 33' of seawater equals 760mm Hg. It follows that 1' of seawater would
equal 23mm Hg (760/33=23), and the heart pressure at its maximum equals about 5' of seawater.
Now, if you filled a plastic bag with water and took it down in the sea there would be little change to the shape of the bag. As you found out with the hypodermic syringe, fluids are barely
compressible. If you filled that same bag with air it would get smaller as you descended into greater pressure. In fact, the bag would be 1/2 the size at 33' because the pressure there is double (2
atm absolute). If you went further down to 66', where the pressure is 3 atm., the bag would be 1/3 of the size, and so on.
Robert Boyle stated the above in a mathematical way: P1V1 = P2V2. That's Boyle's Law and is very important for divers. The effect of Boyle's Law can kill a diver. How does the law work? The P1 is the
pressure at the first location such as sea level (1 atm.). The P2 is the pressure at the 2nd location such as 33'. The V1 is the volume of the gas space, such as our plastic bag, at the first
location, and V2 is the volume of the gas space at the 2nd location. To make it clearer, let's say the plastic bag at the beach is 1 liter in size. When you take it to 33' it should be 1/2 liter.
Check it out doing Boyle's math:
1 Atm (at the surface) x 1 liter should = 2 Atm (at 33') x 1/2 liter
1 x 1 = 2 x 1/2
If you didn't know one of the numbers in the above equation you should be able to figure it out. If you didn't know how big the bag would be at 33' you would have math that looked like this:
1 Atm (beach) x 1 liter = 2 Atm (33') x WHAT ANSWER WOULD GO HERE?
1 x 1=2 x ? The question mark is a number you should now be able to figure out.
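Boyle's arithmetic is simple enough to script; here is a small sketch (the function name is my own):

```python
def boyle_v2(p1_atm, v1, p2_atm):
    """Solve P1*V1 = P2*V2 for the unknown volume V2."""
    return p1_atm * v1 / p2_atm

# The bag example: 1 liter at the surface (1 atm) taken to 33 ft (2 atm).
print(boyle_v2(1, 1.0, 2))  # → 0.5 (the bag shrinks to half a liter)
```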
Your body is composed mainly of fluids and solids. If we were made totally of fluids and solids scuba diving would present far fewer problems. We have gas spaces in our bodies and they act like a
plastic bag filled with air when we go down to greater pressures. That is why your ears begin to hurt when you go to the bottom of a swimming pool. There are gas spaces in the ears that are being
squeezed smaller as one goes deeper. This crushing effect causes discomfort and pain. There are other gas spaces that also shrink and expand as the diver goes down and up. The list includes the
middle ears, sinuses, stomach, intestines, and the lungs. The mouth, nose, and throat are open to the outside and are not usually affected by Boyle's law.
It is important in skin and scuba diving to keep the pressure inside body air spaces the same as the pressure on the outside!
Pressure underwater increases and decreases most rapidly when you are near the surface. Going down 10 feet in a swimming pool results in a much greater pressure change than going from 30 to 40 feet
underwater. Descending from the surface to 33' changes the pressure from 1 atm. to 2 atm. and that would double it. Descending another 33' to 66', the pressure would increase from 2 to 3. That is not
another doubling. You would have to go from 33' (2 atm) to 99' (4 atm), or 66', to equal what happens from the surface (1 atm) to 33' (2 atm). So, the upper 33' doubles the pressure but it takes 66'
to do the same thing when deeper.
As a diver descends the pressure of the breathing air gets greater. As stated before, the pressure of the air entering the diver's mouth at 33' is double that of the air breathed at the surface.
There are twice as many molecules of oxygen and nitrogen going in and out of the diver's lungs. If the diver descends further the number of molecules increase and that makes breathing more difficult.
It is similar to sucking water verses oil through a straw. At 132' the density of the air is five times the surface density. Five times the number of molecules must move out from the tank and through
the regulator. Five times the number will enter the diver's lungs with each breath. Five times the number will be exhaled. The extra effort may be quite noticeable.
You should be able to figure out what the water pressure is for any depth. From memory, you probably know the answer to the question, "What is the pressure of the water at 33' (fresh: 34')?" You
would say 2 atmospheres, correct? If the question was for 66' (68'), you would answer 3 atmospheres. Now, what if you were asked for the pressure at 57'? Would you be able to figure it out? Try to
work out a simple equation for doing this math. Use 33' and 66' in the search because you already know the answers for those two. The answer is below (BUT DO NOT LOOK AT IT BEFORE FIGURING THE ANSWER).
Depth and pressure problem answer: Depth/33' = Gauge pressure. Add 1 for the atmosphere to get the absolute pressure. So, Depth/33' + 1 = Pressure of the water in atmospheres.
Using 57': 57'/33' = 1.73; 1.73 + 1 = 2.73. The pressure at 57' is 2.73 atm.
Note: Use 34' instead of 33' if it is fresh water.
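The rule above is easy to turn into a few lines of Python (the names are my own; this just restates the Depth/33 + 1 formula):

```python
def absolute_pressure_atm(depth_ft, fresh=False):
    """Absolute pressure in atmospheres at a given depth: gauge pressure
    (one atm per 33 ft of seawater, 34 ft of fresh water) plus one atm of air."""
    ft_per_atm = 34.0 if fresh else 33.0
    return depth_ft / ft_per_atm + 1.0

def atm_to_psi(atm):
    return atm * 14.7  # sea-level atmosphere in pounds per square inch

print(round(absolute_pressure_atm(57), 2))  # → 2.73, matching the worked example
```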
The deepest lake in the world is Lake Baikal in southern Siberia. It is 5,390 feet. What would be the absolute pressure at the bottom of the fresh water Lake Baikal in psi?
The deepest depth of the ocean in the Marianas Trench near Guam. It measures 35,839 feet. What would be the pressure at the bottom of the salt water Marianas Trench in psi? | {"url":"http://www.deep-six.com/page59.htm","timestamp":"2014-04-19T01:47:56Z","content_type":null,"content_length":"9528","record_id":"<urn:uuid:34b7df7f-f3c7-4f44-84cf-9bd1258ffb17>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
ATIS Telecom Glossary
1. A measure of the attenuation caused by absorption of energy per unit of distance that occurs in an electromagnetic wave of given wavelength propagating in a material medium of given refractive
index. Note: The value of the absorption index K' is given by the relation K' = Kλ/(4πn), where K is the absorption coefficient, n is the refractive index of the absorptive material medium, and λ is the wavelength. [After 2196] 2. The functional relationship between the Sun angle--at any latitude and local
time--and the ionospheric absorption. | {"url":"http://www.atis.org/glossary/definition.aspx?id=16","timestamp":"2014-04-19T02:55:17Z","content_type":null,"content_length":"12922","record_id":"<urn:uuid:e758584e-6ffb-4d74-864a-c5df3efcfe8c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary of ANALGWST
Water Resources Applications Software
Summary of ANALGWST
analgwst - A set of programs that calculate analytical solutions for
one-, two-, and three-dimensional solute transport in
ground-water systems with uniform flow
The individual programs are:
finite(1) - One-dimensional solute transport in a finite system
seminf(1) - One-dimensional solute transport in a semi-infinite
point2(1) - Two-dimensional solute transport in an infinite system
with a continuous point source
stripf(1) - Two-dimensional solute transport in a finite-width
system with a finite-width solute source
stripi(1) - Two-dimensional solute transport in an infinite-width
system with a finite-width solute source
gauss(1) - Two-dimensional solute transport in an infinite-width
system with solute source having a gaussian
concentration distribution
point3(1) - Three-dimensional solute transport in an infinite system
with a continuous point source
point3_mod(1) - Point3 program modified to reproduce result as
described in Wexler (1992a), page 49.
patchf(1) - Three-dimensional solute transport in a finite-width and
finite-height system with a finite-width and finite-
height source
patchi(1) - Three-dimensional solute transport in an infinite-width
and infinite-height system with a finite-width and
finite-height source
Analytical solutions to the advective-dispersive solute-transport
equation are useful in predicting the fate of solutes in ground
water. Computer programs that compute the analytical solutions
compiled from available literature or derived by E.J. Wexler are
provided for a variety of systems and boundary conditions.
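For orientation, the kind of closed-form solution these programs evaluate can be
sketched briefly. A classical example is the Ogata-Banks solution for
one-dimensional transport in a semi-infinite column with a constant-concentration
inlet (this Python sketch is illustrative only, not part of the distributed
Fortran codes, and whether it matches SEMINF's boundary conditions exactly
depends on the options chosen):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Concentration C(x, t) for 1-D advection-dispersion in a semi-infinite
    column: uniform velocity v, dispersion coefficient D, constant inlet
    concentration c0, zero initial concentration, no decay."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    # exp(v*x/D) can overflow for large v*x/D; fine for modest Peclet numbers.
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# At the inlet the boundary concentration is recovered.
print(round(ogata_banks(0.0, 1.0, 1.0, 1.0), 6))  # → 1.0
```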
Version 1.1 1996/04/03 - All programs have been restructured to a
consistent coding format and style.
Each program requires data on advective velocity, dispersion
coefficient, spatial information, temporal information, and boundary
concentrations. Optional data may include a first-order solute-
decay coefficient.
Output is the calculated concentrations at specified points in time
and space. A plotting option exists to view the output as graphs.
The computer programs are written in Fortran 77 with the following
extensions: use of include files and reference to compiler-dependent
system date and time routines. The computer programs were originally
written for a Prime minicomputer but all programs should run on IBM-
compatible personal computers with minor modifications as described
in Wexler (1992a). The plot routines are written with Computer
Associates' DISSPLA library references. Generally, the programs are
easily installed on most computer systems that have access to the
DISSPLA graphics library. Alternatively, graphics can be disabled
and data can be easily extracted from the program output and plotted
using graphic presentation programs. The programs have been used on
UNIX-based computers.
Wexler, E.J., 1992a, Analytical solutions for one-, two-, and three-
dimensional solute transport in ground-water systems with uniform
flow: U.S. Geological Survey Techniques of Water-Resources
Investigations, book 3, chap. B7, 190 p.
Wexler, E.J., 1992b, Analytical solutions for one-, two-, and three-
dimensional solute transport in ground-water systems with uniform
flow -- Supplemental Report: Source codes for computer programs
and sample data sets: U.S. Geological Survey Open-File Report
92-78, 3 p., 1 computer diskette.
Some of the programs are introduced in the class Ground-Water
Solute-Transport Concepts for Field Investigations (GW2005TC),
offered annually at the USGS National Training Center.
U.S. Geological Survey
Office of Ground Water
Thomas E. Reilly
411 National Center
Reston, VA 20192
U.S. Geological Survey
Hydrologic Analysis Software Support Program
437 National Center
Reston, VA 20192
Official versions of U.S. Geological Survey water-resources analysis
software are available for electronic retrieval via the World Wide
Web (WWW) at:
and via anonymous File Transfer Protocol (FTP) from:
water.usgs.gov (path: /pub/software).
The WWW page and anonymous FTP directory from which the ANALGWST
software can be retrieved are, respectively:
The URL for this page is: http://water.usgs.gov/cgi-bin/man_wrdapp?analgwst
Send questions or comments to h2osoft@usgs.gov | {"url":"http://water.usgs.gov/cgi-bin/man_wrdapp?analgwst","timestamp":"2014-04-19T15:28:02Z","content_type":null,"content_length":"9800","record_id":"<urn:uuid:5fd96603-0290-4027-8d5e-06475ed21e65>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Extending my question. Was: The relational model and relational algebra - why did SQL become the industry standard?
From: Mikito Harakiri <mikharakiri_at_ywho.com> Date: Thu, 13 Feb 2003 12:37:11 -0800 Message-ID: <irT2a.10$O%2.40@news.oracle.com>
"Lauri Pietarinen" <lauri.pietarinen_at_atbusiness.com> wrote in message news:3E4B8137.2080204_at_atbusiness.com...
> < quotes from book Hector Garcia-Molina, Jeffrey D. Ullman, and
> Jennifer Widom, DATABASE SYSTEM IMPLEMENTATION>
> [Relational] algebra was originally defined as if relations were sets
> [sic!--italics added].Yet relations in SQL are really bags ... Thus, we
> shall introduce relational algebra as an algebra on bags.
> ...
> For instance, you may have learned set-theoretic laws such as A
> INTERSECT (B UNION C) = (A INTERSECT B) UNION (A INTERSECT C), which is
> formally the "distributive law of intersection over union." This law
> holds for sets, but not for bags.
> < quotes from book/ >
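The quoted distributive-law failure is easy to demonstrate concretely, taking bag union as additive (SQL's UNION ALL) and bag intersection as minimum multiplicity (a sketch of mine, not from the book):

```python
from collections import Counter

# One copy of 'x' in each bag.
A, B, C = Counter('x'), Counter('x'), Counter('x')

lhs = A & (B + C)        # A INTERSECT (B UNION ALL C): min(1, 2) = 1 copy
rhs = (A & B) + (A & C)  # (A INTERSECT B) UNION ALL (A INTERSECT C): 1 + 1 = 2
print(lhs == rhs)  # → False
```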
Therefore, the idea here is that Set Algebra is superior to Bag Algebra? Not for aggregates:
PROJECTION*AGGREGATE != AGGREGATE*PROJECTION for example
select distinct S from (
  select distinct SUM(SAL) S from emp
)
is not the same as
select distinct SUM(SAL) from (
  select distinct SAL from emp
)
where i'm using SQL with the "distinct" keyword merely as a surrogate for true relational syntax ("distinct" is redundant after aggregate operation, of course).
Received on Thu Feb 13 2003 - 14:37:11
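Mikito's aggregate example is easy to replay on a toy table (Python standing in for SQL; the column is just a list of salaries):

```python
sal = [1000, 1000, 2000]  # EMP.SAL, with two employees sharing a salary

# SELECT DISTINCT SUM(SAL) FROM emp  -- aggregate first, then project
sum_then_distinct = sum(sal)       # 4000

# SUM over SELECT DISTINCT SAL  -- project first, then aggregate
distinct_then_sum = sum(set(sal))  # 3000

print(sum_then_distinct, distinct_then_sum)  # → 4000 3000
```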
| {"url":"http://www.orafaq.com/usenet/comp.databases.theory/2003/02/13/0162.htm","timestamp":"2014-04-18T08:36:31Z","content_type":null,"content_length":"9244","record_id":"<urn:uuid:a519193c-94ec-4c34-aaf6-95cd928be203>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - Can grandpa understand the Bell's Theorem?
Here's an analogy I wrote up a while ago, see if it helps:
Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes that, when scratched, can reveal either a cherry or a lemon. We give one card to Alice and one to
Bob, and each scratches only one of the three boxes. When we repeat this many times, we find that whenever they both pick the same box to scratch, they always get the same result--if Bob scratches
box A and finds a cherry, and Alice scratches box A on her card, she's guaranteed to find a cherry too.
Classically, we might explain this by supposing that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it, and that the machine prints pairs
of cards in such a way that the "hidden" fruit in a given box of one card always matches the hidden fruit in the same box of the other card. If we represent cherries as + and lemons as -, so that a
B+ card would represent one where box B's hidden fruit is a cherry, then the classical assumption is that each card's +'s and -'s are the same as the other--if the first card was created with hidden
fruits A+,B+,C-, then the other card must also have been created with the hidden fruits A+,B+,C-.
The problem is that if this were true, it would force you to the conclusion that on those trials where Alice and Bob picked different boxes to scratch, they should find the same fruit on at least 1/3
of the trials. For example, if we imagine Bob and Alice's cards each have the hidden fruits A+,B-,C+, then we can look at each possible way that Alice and Bob can randomly choose different boxes to
scratch, and what the results would be:
Bob picks A, Alice picks B:
results (Bob gets a cherry, Alice gets a lemon)
Bob picks A, Alice picks C:
results (Bob gets a cherry, Alice gets a cherry)
Bob picks B, Alice picks A:
results (Bob gets a lemon, Alice gets a cherry)
Bob picks B, Alice picks C:
results (Bob gets a lemon, Alice gets a cherry)
Bob picks C, Alice picks A:
results (Bob gets a cherry, Alice gets a cherry)
Bob picks C, Alice picks B:
results (Bob gets a cherry, Alice gets a lemon)
In this case, you can see that in 1/3 of trials where they pick different boxes, they should get the same results. You'd get the same answer if you assumed any other preexisting state where there are
two fruits of one type and one of the other, like A+,B+,C- or A+,B-,C-. On the other hand, if you assume a state where each card has the same fruit behind all three boxes, so either they're both
getting A+,B+,C+ or they're both getting A-,B-,C-, then of course even if Alice and Bob pick different boxes to scratch they're guaranteed to get the same fruits with probability 1. So if you imagine
that when multiple pairs of cards are generated by the machine, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C- while other pairs are created in homogeneous
preexisting states like A+,B+,C+, then the probability of getting the same fruits when you scratch different boxes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100%
of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get the same answers in less than 1/3 of trials where you scratch different boxes, provided you
assume that each card has such a preexisting state with "hidden fruits" in each box.
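The 1/3 lower bound in the card analogy can be checked by brute force over every possible hidden state (a sketch of mine, not from the original post):

```python
from itertools import product

def match_fraction(card):
    """Fraction of different-box scratch pairs revealing the same fruit,
    for a pair of identical cards with hidden fruits in boxes A, B, C."""
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    return sum(card[i] == card[j] for i, j in pairs) / len(pairs)

# Homogeneous states give 1, inhomogeneous states give 1/3; never less.
rates = [match_fraction(card) for card in product('+-', repeat=3)]
print(min(rates), max(rates))  # → 0.3333333333333333 1.0
```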
But now suppose Alice and Bob look at all the trials where they picked different boxes, and found that they only got the same fruits 1/4 of the time! That would be the violation of Bell's inequality,
and something equivalent actually can happen when you measure the spin of entangled photons along one of three different possible axes. So in this example, it seems we can't resolve the mystery by
just assuming the machine creates two cards with definite "hidden fruits" behind each box, such that the two cards always have the same fruits in a given box.
You can modify this example to show some different Bell inequalities, see post #8 of
this thread
for one example. | {"url":"http://www.physicsforums.com/printthread.php?t=488690","timestamp":"2014-04-20T03:22:06Z","content_type":null,"content_length":"49370","record_id":"<urn:uuid:565e61c6-27d7-4781-9eb5-393a8c10636e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advanced Functions - Grade 12
Number of results: 76,600
Thank you for your help. It's just that I didn't take functions in grade 11 and now I'm taking Advanced Functions in grade 12 and I'm difficulty with. Thanks for your help though.
Sunday, November 24, 2013 at 4:10pm by Jessy
Grade 12 Advanced Functions
Thank you!!!!
Thursday, October 13, 2011 at 8:54am by Rebecca
Grade 12 Advanced Functions
Ummmmm, I needed help with the second part
Thursday, October 13, 2011 at 9:10am by Anonymous
Grade 12 Advanced Functions
Divide by x+3 and see what you get: x^3 + 4x^2 + x -6 = (x+3)(x^2 + x - 2) = (x+3)(x+2)(x-1) Got it now?
Thursday, October 13, 2011 at 9:10am by Steve
Math grade 12 Advanced Functions
Sorry about that, it's 20.50x
Friday, January 25, 2013 at 11:42am by Anonymous
Grade 12 Advanced Functions
The polynomial 2x^3+px^2-x+6 has a remainder of zero when divided by x-2. Calculate p.
Thursday, October 13, 2011 at 8:54am by Rebecca
Grade 12 Advanced Functions
Ummm, didn't MathMate just do one of these for you? Plug in -3 and evaluate to find c.
Thursday, October 13, 2011 at 9:10am by Anonymous
Grade 12 Advanced Functions
what about this question: Determine the roots algebraically by factoring. a) x^3-8x^2-3x+90=0
Thursday, October 13, 2011 at 9:10am by Confused
Ok, I am going to grade 12 and my university requires a 90% for advanced functions. In grade 11, I got a 78% with no tutoring. Is there who is willing to help me free of cost for tutoring or even
websites that would help me.
Friday, September 3, 2010 at 6:27pm by prethy
math advanced functions grade 12
Find the solution set to the following rational inequality, 240/x+8 > 20x/x+1
Tuesday, July 24, 2012 at 6:25pm by illiah
Grade 12 Advanced Functions
One zero of f(x)=x^3+4x^2+x+c is -3. a) calculate the value of c. b)calculate the other zeros.
Thursday, October 13, 2011 at 9:10am by Rebecca
Grade 12 Advanced Functions
P(x)=2x^3+px^2-x+6 = (x-2)Q(x)=0 Since (x-2) is a factor of P(x), evalulate P(2) and equate the result to zero: 2(2)^3+p(2)^2-(2)+6 = 0 16+4p-2+6=0 4p=-20 p=-5
Thursday, October 13, 2011 at 8:54am by MathMate
Grade 12 Advanced Functions Math
90 = 10 log [ I/F] Use the properties of logarithms to isolate I. 9 = log(I/F) 10^9 = I/F
Tuesday, January 19, 2010 at 12:06am by Marth
Advanced Functions
if cosx = 5/13, (recognize the 5,12,13 triangle ?) sinx = 12/13 tanx = 12/5 if cosy = -5/13 and y is in III siny = -12/13 tany = 12/5 2sinx+2siny+2cosx-2tanx+tany+2cosy = 2(12/13) + 2(-12/13) + 2(5/
13) -2(12/5) + (12/5) + 2(-5/13) = - 12/5 or -2.4
Tuesday, November 19, 2013 at 8:12pm by Reiny
Advanced Functions math
Yes to get that i did: (5^x)^2 - 4(5^x) =12 5^x(1-4) = 12 -3(5^x)= 12 log(5^x) = log(-4) If you could show me the correct way to do it, that would be great! thanks!
Monday, January 18, 2010 at 9:58pm by Diana
12th grade-Advanced functions
Given that f(x)=g(x/3), describe how the graph of f and g are related?
Friday, July 30, 2010 at 3:28pm by aaron
advanced functions/precalculus
1. The function f(x) = (2x + 3)^7 is the composition of two functions, g(x) and h(x). Find at least two different pairs of functions g(x) and h(x) such that f(x) = g(h(x)). 2. Give an example of two
functions that satisfy the following conditions: - one has 2 zeros - one has ...
Wednesday, January 15, 2014 at 2:33am by Diane
12th grade - Advanced Functions
thanks so much Reiny! :D for correcting my mistake and for the answer!
Monday, July 19, 2010 at 9:54pm by Nish
12th grade-Advanced functions
the y-intercept of the graph of y=k(x-1)(x-2)(x-3) is 24 Determine the value of k.
Friday, July 30, 2010 at 4:13pm by aaron
12th grade-Advanced functions
Determine the instantaneous rate of change of y=tanx at x=1 to 3 decimal places
Friday, July 30, 2010 at 5:14pm by sean
advanced functions grd 12
i didnt know how to put the root in so i converted it by adding in the 1/2 exponent! thank you :)
Tuesday, May 8, 2012 at 7:03pm by kerry
gr 12. advanced functions
Trig identity Prove: cos2x tan(pie/4 - x) ------- = 1 + sin2x
Saturday, November 21, 2009 at 10:40am by avm
aadvanced functions HELP!
nope i don't, the school makes it manditory to take advanced functions before calculus
Wednesday, December 10, 2008 at 9:43pm by james
Math grade 12 Advanced Functions
3. Using proper grammar 1 point 3. What is the value of x + y? (5 points) Rubrics: 1. Writing the correct equation(s) 1 point 2. Showing steps 1 point 3. Solving for x 1 point 4. Solving for y 1
point 5. Solving for x + y 1 point
Friday, January 25, 2013 at 11:42am by Anonymous
Advanced Functions
Beginning with the function f(x) = (0.8)x, state what transformations were used on this to obtain the functions given below: g(x) = - (0.8)x -2 h(x) = ½ (0.8)x-2 k(x) = (0.8)-3x+9
Wednesday, January 12, 2011 at 4:05pm by Hailey
Advanced functions/precalculus
#1. g(f) = f+2 = x^2+2 f(g) = g^2 = (x+2)^2 so, when do we have x^2+2 = (x+2)^2 ? #2. f and g are both defined only on {-1} #3. visit wolframalpha.com (fg)(x) = -6x^2+x+12
Wednesday, January 15, 2014 at 2:24am by Steve
Math grade 12 Advanced Functions
A company uses the function C(x)=20.50+2000, where C is the cost and x is the number of units it produces, to determine its daily costs. Find the inverse of the function and determine how many units
are produced when the cost is $625,000.
Friday, January 25, 2013 at 11:42am by Anonymous
Advanced Functions - Grade 12
The function C (t) = 0.16t/t^2 + t + 2 models the concentration, C, in milligrams per cubic centimetre, of a drug entering the bloodstream over time, t, in minutes. a) Sketch the graph of the
relation. b) Explain the shape of the graph in the context of the concentration of ...
Tuesday, July 20, 2010 at 12:26am by Nish
Math Grade 12 Advance Functions
I need help understanding equivalent trigonometric functions! Can someone PLEASE help me because the test is tommorrow and the internet or my textbook isn't helping at all!
Wednesday, December 19, 2012 at 12:31am by Buddy
12th grade-Advanced functions
provide any 2 solutions for the equation y=cot(1/2(theta-pi/2))-5 solutions may be in degrees
Monday, July 19, 2010 at 10:29pm by aaron
12th grade-Advanced functions
Determine the vertical asymptotes of the graph of graph of y=secx/logx for x≤2pi
Friday, July 30, 2010 at 12:35pm by aaron
advanced functions. gr.12
2^2x+ 3(2^x) - 10 = 0 6^2x-2(6^x)-15=0 3^4^5^x =336
Tuesday, August 11, 2009 at 3:25pm by ashley
Check my CALCULUS answers please!
Any short explanation for things I got wrong would be great, too, if possible! Thanks in advanced! :) 8. Which of the following functions grows the fastest? ***b(t)=t^4-3t+9 f(t)=2^t-t^3 h(t)=5^t+t^5
c(t)=sqrt(t^2-5t) d(t)=(1.1)^t 9. Which of the following functions grows the ...
Monday, October 7, 2013 at 12:08pm by Samantha
Advanced Functions
Determine the value of Tan2x when Sinx = -12/13 and 3pi/2 < x < 2pi.
Wednesday, January 14, 2009 at 8:14pm by Lisa
advanced functions
x is in a 3-4-5 triangle y is in a 12-13-5 triangle
Sunday, November 1, 2009 at 6:08pm by bobpursley
Advanced Functions math
Could someone help me with this question? 5^(2x) - 4^(5^x) = 12 I got up to log (5^x)= log(-4) but it isnt correct since it can't be negative.
Monday, January 18, 2010 at 9:58pm by Diana
math-advanced functions
on my exam review, i have this question for composition of functions Given f(x)=3x^2+x-1, g(x)=2cos(x), determine the values of x when f(g(x))=1 for 0≤x≤2π.
Thursday, January 23, 2014 at 9:32am by Liz
12th grade Advanced Functions
A generator produces electrical power, P, in watts, according to the function: P(R)= 120/ (0.4 + R)^2 where R is the resistance, in ohms. Determine the intervals on which the power is increasing.
Sunday, October 25, 2009 at 7:50pm by Anonymous
Grade 12 Advanced Functions Math
Hello! Can someone please help me out for this question? The loudness, L, of a sound in decibels can be calculated using the formula L= 10 log(I/ F) where I is the intensity of the sound in watts per
square metre and F= 10^ -12 . A singer is performing to a crowd. Determine ...
Tuesday, January 19, 2010 at 12:06am by Joan
Will I get a conditional offer to McMaster in the first round? The following is the admission for Commerce I to McMaster: "Grade 12 U/M course requirements: * English U * Advanced Functions 4U * One
of: Calculus & Vectors 4U or Mathematics of Data Management 4U (additional ...
Thursday, January 31, 2008 at 5:36pm by Anonymous
Math (Grade 12)
I need help with this problem (composite functions): If f(x) = 4x-3 and h(x )= 4x^2-21, find a function g such that f(g(x)) = h. I don't understand how we're supposed to combine the functions. Oh,
and the answer is supposed to be g(x) = x^2 - (9/2). Thanks!
Wednesday, September 17, 2008 at 5:50pm by Lucy
Advanced Functions - Grade 12
The maximum time, T, in minutes, a scuba diver can rise without stopping for decompression on the way to the surface is modeled by the equation T (d) = 525/d-10 , d > 10, where d is the depth of the
dive in metres. a) sketch a graph of this relationship. b) use the graph to...
Tuesday, July 20, 2010 at 12:24am by Nish
12th grade - Advanced Functions
the height, h, in meters, of a golf ball t seconds after it is hit can be modelled by the function: h(t) = -4.9t^2 + 32t + 0.2. When is the height of the ball 15 m?
Monday, July 19, 2010 at 9:54pm by Nish
Advanced Functions
Tan2x is Sin2x/cos2x but Sin2x is 2sinxcosx and Cos2x=1-2sin^2x you know sinx, and you can find cos x from pyth theorem (sides of triangle are 13, 12, 5)
Wednesday, January 14, 2009 at 8:14pm by bobpursley
thank you!
Monday, June 10, 2013 at 6:57pm by Anonymous
advanced functions/precalculus
1. Given f(x)=x^2 and g(x)=x+2, determine in standard form: a)g(f(x)) b)when f(g(x))=g(f(x)) for #1 would the answer for a be x^2+2? 2. Given the functions f(x)=5x+7 and g(x)=√x state the domain of g
(x) and g(f(x)) 3. Why is point (a,a) a point on f-1(f(x)) if (a,b) is a...
Monday, January 13, 2014 at 9:35pm by Diane
Advanced Functions/Precalculus
11π/12 is π/12 short of π, so tan 11π/12 = -tan π/12. I know tan 2π/12 = tan π/6 = 1/√3, and tan 2π/12 = 2tan π/12/(1 - tan^2 π/12). Let x = tan π/12 for easier typing, so 1/√3 = 2x/(1 - x^2), 2√3x = 1 - x^2, x^2 + ...
Wednesday, December 4, 2013 at 2:13pm by Reiny
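A quick numerical check of Reiny's working above (a sketch added in editing, not part of the original thread). Completing the truncated quadratic, 2√3x = 1 - x^2 rearranges to x^2 + 2√3x - 1 = 0, whose positive root is 2 - √3 = tan(π/12):

```python
import math

# Completing the truncated working: 2*sqrt(3)*x = 1 - x^2
# rearranges to x^2 + 2*sqrt(3)*x - 1 = 0; take the positive root.
a, b, c = 1.0, 2.0 * math.sqrt(3.0), -1.0
x = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # = tan(pi/12) = 2 - sqrt(3)

assert math.isclose(x, math.tan(math.pi / 12), rel_tol=1e-12)
# 11pi/12 is pi/12 short of pi, so tan(11pi/12) = -tan(pi/12)
assert math.isclose(math.tan(11 * math.pi / 12), -x, rel_tol=1e-9)
print(round(math.tan(11 * math.pi / 12), 5))   # -0.26795
```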
Advanced Functions
I guess so
Tuesday, November 19, 2013 at 8:49pm by Sam
advanced functions
where do we get the 570 from?
Thursday, February 20, 2014 at 5:13pm by collard
Advanced Functions/Precalculus
2. 7sin^2x - 4sin2x/cosx = -1 7sin^2 x - 4(2sinxcosx)/cosx = -1 7sin^2x - 8sinx + 1 = 0 (7sinx - 1)(sinx - 1) = 0 sinx = 1/7 or sinx = 1 if sinx = 1 x = 90° or π/2 if sinx = 1/7, then x = aprr 8.2°
or 171.8° 3. cos 2Ø = cos^2 Ø - sin^2 Ø given: tan Ø = 3/4 (I recognize ...
Wednesday, December 4, 2013 at 11:15pm by Reiny
advanced functions HELP!!!
Wednesday, December 10, 2008 at 8:55pm by Dana
Advanced functions
Solve: a) 1/(2^x) = 1/(x+2) b) 1/(2^x) > 1/(x^2)
Sunday, October 25, 2009 at 12:30am by Anonymous
Advanced Functions
Solve: (1/2^x) > (1/x^2)
Sunday, October 25, 2009 at 10:23pm by Anonymous
Advanced Functions
R u sure thats right?
Monday, November 14, 2011 at 10:17am by Kailla
advanced functions grd 12
im not sure where to start and how to solve this problem. Solve: cosX(2sinX - √3) = 0, 0 ≤ X ≤ 2π
Tuesday, May 8, 2012 at 7:03pm by kerry
advanced functions-logarithms
Wednesday, March 7, 2007 at 8:36pm by Karman Stevens
math(advanced functions)
describe it, I will critique.
Friday, November 4, 2011 at 4:01pm by bobpursley
advanced functions
Solve for x, x ϵ R a) (x + 1)(x – 2)(x – 4)^2 > 0
Friday, July 5, 2013 at 2:07pm by FUNCTIONS
Advanced Functions
They were the most obvious ones.
Tuesday, November 19, 2013 at 8:49pm by Reiny
Advanced Functions/ Precalculus Log
Thanks a lot!
Thursday, December 19, 2013 at 11:42pm by Jessie
Advanced Functions
So we want h=7 when t=0 or 7 = 10sin((pi/15)(-d)) + 12 -.5 = sin((pi/15)(-d)) the reference angle is pi/6 .... ( sinpi/6 = 0.5) so (-pi/15)d = pi + pi/6 or 2pi - pi/6 or -pi/6 if we let (-pi/15)d =
-pi/6 d = 5/2 so h = 10 sin ((pi/15) (t-5/2)) + 12 check: when t = 0, h = 10sin...
Friday, November 20, 2009 at 8:38pm by Reiny
Grade 12- Advanced Funtions
what about this question: Determine the roots algebraically by factoring. a) x^3-8x^2-3x+90=0
Thursday, October 13, 2011 at 6:09pm by Confused
Advanced Functions
1. V = C - rtC, Eq: V = C(1 - rt). V = value. C = cost. r = rate expressed as a decimal. t = time in years. 2. V = 23000(1 - 0.12*2), V = 23000 * 0.76 = 17480. 3. V = 23000(1 - 0.12t) = 2300, 23000(1 - 0.12t) = 2300, Divide both sides by 23000: 1 - 0.12t = 0.1, -0.12t = 0.1 - 1...
Wednesday, January 12, 2011 at 4:04pm by Henry
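A short numeric check of Henry's depreciation answer above (added in editing, not part of the original post); it also finishes the truncated last step, -0.12t = -0.9, which gives t = 7.5 years:

```python
import math

# Straight-line depreciation model from the post: V = C * (1 - r*t)
C, r = 23000.0, 0.12

V2 = C * (1 - r * 2)                 # value after 2 years
assert math.isclose(V2, 17480.0)     # matches the answer above

# Completing the truncated last step: solve C*(1 - r*t) = 2300 for t
t = (1 - 2300.0 / C) / r
assert math.isclose(t, 7.5)
print(round(V2, 2), round(t, 2))     # 17480.0 7.5
```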
Advanced Functions
How do you write -1/9 x^-1 with a positive exponent? also, -5 x^-2 ??
Monday, November 1, 2010 at 10:23am by Ali
math(advanced functions)
what are the key features of x/(x^2-3x+2)?
Friday, November 4, 2011 at 4:01pm by hi
math(advanced functions)
what are the key features of x/(x^2-3x+2)?
Friday, November 4, 2011 at 4:01pm by hi
math(advanced functions)
yes and domain and range pls.
Friday, November 4, 2011 at 4:01pm by hi
advanced functions
Solve for x, x ϵ R: –x^3 + 25x ≤ 0
Friday, July 5, 2013 at 2:08pm by FUNCTIONS
advanced functions
its *h(t)=-2t^3+3t^2+149t+410,
Thursday, February 20, 2014 at 5:13pm by collard
12th grade-Advanced functions
Vertical asymptotes occur when the denominaotrs of your function equal zero. y = secx/logx is the same as y = 1/(cosx logx) so you would have those asymptotes when cosx = 0 or logx = 0 x = pi/2 or
3pi/2 or x = 1
Friday, July 30, 2010 at 12:35pm by Reiny
I got the quartic by moving the sin to the right, squaring both sides, then multiplying everything by sin^2 and then changing the cos to sin. My friend gave me this question, and he got it from his
virtual school for gr 12 advanced functions lol.
Friday, May 2, 2008 at 12:10am by william
advanced functions
nvm on this question, the symbol is supposed to be pie
Monday, January 21, 2008 at 9:05pm by Mel
advanced functions
Prove that cscx / cosx = tanx + cotx
Saturday, November 7, 2009 at 10:15pm by ariza
advanced functions
Solve 2tan^2x + 1 = 0 Interval : [0, 2 pi]
Sunday, November 15, 2009 at 12:25am by -
math(advanced functions)
how would describe it at x=0, and x=inf, and any asymptotes?
Friday, November 4, 2011 at 4:01pm by bobpursley
Advanced Functions math
ah, but now you changed the question. In the original you had 4 raised to the (5 raised to the x) in the middle term Now you have 4(5^x) This last version makes it easy, the first version would be a
nightmare. If it is (5^x)^2 - 4(5^x) =12 then let's let y = 5^x our equation ...
Monday, January 18, 2010 at 9:58pm by Reiny
7th grade advanced math
I do. Have you considered pasting that problem into the Google Search window, and pressing enter? I am wondering why this is called advanced math.
Wednesday, September 10, 2008 at 3:44pm by bobpursley
advanced functions
The revenue and cost functions for the housing developer are: C(n) = 8 + 0.065n R(n) = 1.6 √n Suppose that the developer found a way to reduce her variable cost to $58 000 per house. How would this
affect: i) the minimum and maximum number of houses she could build? ii) ...
Monday, January 11, 2010 at 4:11pm by Anonymous
Advanced Functions math
I don't know if this will show up properly, but do you mean 5^(2x) - 4^(5^x) = 12 ? How did you get log(5^x) = log(-4) ??
Monday, January 18, 2010 at 9:58pm by Reiny
Advanced functions
Solve x^3 -3x^2 +5x +4 = 0 using Tartaglia's method.
Sunday, October 25, 2009 at 6:39pm by josh
Advanced functions
Solve x^3 -3x^2 +5x +4 = 0 using Tartaglia's method.
Sunday, October 25, 2009 at 6:41pm by josh
Advanced Functions 2
are the exponents (a-b) and (a+2b) You have to use grouping symbols.
Monday, November 1, 2010 at 10:26am by bobpursley
math advanced functions
angle=5/2.5=2 radians= 2*180/PI degrees
Tuesday, July 24, 2012 at 6:23pm by bobpursley
Ms, Sue help me in science
Your welcome. Im advanced in some work for my age. Im a 12 year old poet, and in advanced reading and math. Thank you. :)
Tuesday, April 10, 2012 at 8:24pm by ♥rebel•stupidity=kathy♥
Advanced Functions/Precalculus
Trigonometry Questions 1.) Find the exaqct value of tan(11π/12) 2.) A linear trig equation involving cosx has a solution of π/6. Name three other possible solutions 3) Solve 10cosx=-7 where 0≤x≤2π
Wednesday, December 4, 2013 at 2:13pm by john
Graph and label the following two functions: f(x)=(x^2+7x+12)/(x+4) g(x)=(-x^2+3x+9)/(x-1) 1. Describe the domain and range for each of these functions. 2. Determine the equation(s) of any asymptotes
found in the graphs of these functions, showing all work. 3. Discuss the ...
Monday, May 2, 2011 at 11:22am by Debra
Advanced Functions 2
I'm sorry for not using grouping symbols! You are correct, the exponents are (a-b) and (a-2b)
Monday, November 1, 2010 at 10:26am by Ali
Advanced Functions Math
y = 2^(-x) for negative x stretch up by 5 y = 5 * 2^(-x) go five right y = 5 *2^(-x-1) y = -5 + 5 * 2^-(x+1)
Saturday, January 29, 2011 at 3:01pm by Damon
Advanced Functions
Oh wow, we got teh same answer reiny cool
Tuesday, November 19, 2013 at 8:49pm by Sam
7th grade advanced math
Not off the top of my head, I don't. I'm in 7th grade advanced math and we get to use calculators. Can you use a calculator? If so, you can enter this. I have a scientific calculator, it's really
nice to have aroud for probblems like that!
Wednesday, September 10, 2008 at 3:44pm by Cecilia
advanced functions
Find an equation with the given form in each case below. y = sin through (π/2, 0)
Monday, January 21, 2008 at 9:05pm by Mel
advanced functions
I would take the first derivative, and make it >0 and solve for the intervals.
Sunday, October 25, 2009 at 9:27pm by bobpursley
Advanced Functions math
The original question was what i needed to solve, but with the way i changed it, is it correct?
Monday, January 18, 2010 at 9:58pm by Mai
Advanced Functions math
My answer, if substituted into the second version of the equation, satisfies it.
Monday, January 18, 2010 at 9:58pm by Reiny
advanced functions
Solve the inequality 12x^3 + 8x^2 <= 3x + 2. Justify your answer.
Sunday, October 20, 2013 at 8:57am by Rinchan
advanced functions
Remember your algebra I? Subtract 980 from both sides. You want to have f(x) = 0 to solve for x.
Thursday, February 20, 2014 at 5:13pm by Steve
Are multi cellular organisma more advanced that unicellular organisma. Why or why not Mutlicellular organisms have cells that have specialized in various functions. Would you call that more advanced?
Since this is not my area of expertise, I searched Google under the key words...
Wednesday, October 4, 2006 at 11:16am by sharon
Advanced Functions
Identify the point of intersection of these two curves: P(t)=300(1.05)^t F(t)=1000(0.92)^t
Monday, January 4, 2010 at 4:17pm by Anonymous
Advanced functions/precalculus
1. Given f(x)=x^2 and g(x)=x+2, determine in standard form: a) g(f(x)) b) when f(g(x))=g(f(x)) 2. If f = {(-10, 1), (-1, -1), (10, 0), (11, 7)} and g = {(-1, -1), (0, 10), (1, -10), (7, 11)} for how
many values of x is f x g defined? Explain 3. Given f(x) = -2x + 3 and g(x) = ...
Wednesday, January 15, 2014 at 2:24am by Diane
The beginning tennis class has twice as many students as the advanced class. The intermediate class has three more students than the advanced class. How many students are in the advanced class if the
total enrollment for the three tennis classes is 39? A. 10 B. 15 C. 9 D. 12 ...
Friday, March 22, 2013 at 2:00pm by Amy
| {"url":"http://www.jiskha.com/search/index.cgi?query=Advanced+Functions+-+Grade+12","timestamp":"2014-04-20T16:41:07Z","content_type":null,"content_length":"33476","record_id":"<urn:uuid:cbacdd7d-77f6-4ae6-96a3-794f8b33d383>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Find Mean/Variance from probability mass function
September 12th 2009, 10:22 PM #1
Sep 2009
Find Mean/Variance from probability mass function
I'm not sure if this is actually Advanced, but it is from a 3000-level University course. My apologies if I am misposting.
A random variable X has the probability mass function
$f(x) = p * 0.6^{x-1}$ , for x = 1, 2 , 3, …
Find p, the mean and the variance of X.
Ok... So that seems pretty clear to me that p = .4 (based on the logic that q=.6 = 1-p).
But I have no idea where to go from here. If I go to take the mean and variance, according to the text and notes, the formulae I'd be using are:
$\mu = \sum x f(x) = \sum_{x=1}^{\infty} x (0.4 \cdot 0.6^{x-1})$
$\sigma^2 = \sum (x-\mu)^2 f(x)$
Maybe my calculus-fu has weakened (it's been a long time), but I can't for the life of me figure out how to do those sums since they don't appear to resemble any of the series I studied in Calc
If anyone could give me a pointer, or tell me where I did something wrong, I'd appreciate it. Thanks...
That is the right value for $p$, but your explanation cannot be followed. You should state that for f to be a pmf:
$\sum_{x=1}^{\infty} p (0.6)^{x-1}=1$
or state some other obvious reason why this is so.
$g(x)=0.4 \; \sum_{x=1}^{\infty} 0.6^{x}$
Now differentiate $g(x)$ (and the series term by term)
I was just putting that there to illustrate my thought process in case that's where my error was, but I'll keep it in mind. Thank you.
I'm a little lost there. Is that a rewriting of the function I had, or was mine wrong?
I assume it'll be summed the same way you described for the mean above?
Thanks again.
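As a numerical sanity check on the thread above (an editorial addition, not one of the original replies): f(x) = p(0.6)^(x-1) with p = 0.4 is the pmf of a geometric distribution, whose known closed forms are E[X] = 1/p = 2.5 and Var(X) = q/p^2 = 3.75. Truncating the infinite sums reproduces both:

```python
p, q = 0.4, 0.6

# Truncate the infinite sums; q**n vanishes fast, so 200 terms is plenty.
mean = sum(x * p * q ** (x - 1) for x in range(1, 201))
ex2  = sum(x * x * p * q ** (x - 1) for x in range(1, 201))
var  = ex2 - mean ** 2

assert abs(mean - 1 / p) < 1e-9          # E[X] = 1/p = 2.5
assert abs(var - q / p ** 2) < 1e-9      # Var(X) = q/p^2 = 3.75
print(round(mean, 4), round(var, 4))     # 2.5 3.75
```

The term-by-term differentiation trick suggested in the replies yields these closed forms exactly.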
September 12th 2009, 11:15 PM #2
Grand Panjandrum
Nov 2005
September 12th 2009, 11:19 PM #3
Grand Panjandrum
Nov 2005
September 12th 2009, 11:20 PM #4
Grand Panjandrum
Nov 2005
September 12th 2009, 11:27 PM #5
Sep 2009 | {"url":"http://mathhelpforum.com/advanced-statistics/101986-find-mean-variance-probability-mass-function.html","timestamp":"2014-04-19T13:48:50Z","content_type":null,"content_length":"50524","record_id":"<urn:uuid:5f452ca0-982e-444a-b6de-9dcf9f5cdcf6>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00622-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigonometric Delights
Trigonometric Delights:
Excerpts from book:
This book is neither a textbook of trigonometry—of which there are many—nor a comprehensive history of the subject, of which there is almost none. It is an attempt to present selected topics in
trigonometry from a historic point of view and to show their relevance to other sciences.
Trigonometry has always been the black sheep of mathematics. It has a reputation as a dry and difficult subject, "a glorified form of geometry complicated by tedious computation". In this book,
author Eli Maor tries to dispel that view. Rejecting the usual descriptions of sine, cosine, and their trigonometric relatives, he brings the subject to life in a blend of history, biography, and
mathematics. He presents both a survey of the main elements of trigonometry and an account of its vital contribution to science and social development.
This book begins by examining the "proto-trigonometry" of the Egyptian pyramid builders. It shows how Greek astronomers developed the first true trigonometry. It traces the slow emergence of modern,
analytical trigonometry, recounting its origins in Renaissance Europe's quest for more accurate artillery, more precise clocks, and more pleasing musical instruments. Along the way, readers will see
trigonometry at work in, for example, the struggle of the mapmaker Gerardus Mercator to represent the curved earth on a flat sheet of paper; how M. C. Escher used geometric progressions in his art;
and how the toy Spirograph uses epicycles and hypocycles.
This book also sketches the lives of some of the figures who have shaped four thousand years of trigonometric history. Readers will meet, for instance, the Renaissance scholar Regiomontanus, who is
rumored to have been poisoned for insulting a colleague, and Maria Agnesi, an eighteenth-century Italian genius who gave up mathematics to work with the poor -- but not before she investigated a
special curve that, due to mistranslation, bears the unfortunate name "the witch of Agnesi." The book is richly illustrated, including rare prints from the author's own collection.
The first nine chapters require only basic algebra and trigonometry; the remaining chapters rely on some knowledge of calculus (no higher than Calculus II). Much of the material should thus be
accessible to high school and college students. Aiming for this audience, this book limited the discussion to plane trigonometry, avoiding spherical trigonometry altogether (although historically it
was the latter that dominated the subject at first). Some additional historical material -- often biographical in nature -- is included in eight "sidebars" that can be read independently of the main text. | {"url":"http://www.bookgoldmine.com/math/trigonometry/trigonometric-delights/27","timestamp":"2014-04-18T20:43:42Z","content_type":null,"content_length":"12500","record_id":"<urn:uuid:07155797-b6b2-4d05-89e4-d32b7c99bf97>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Areas and Perimeters of CirclesAlgebraLAB: Lessons
Introduction: Area
is a measure of the amount of surface contained inside a closed figure. Perimeter is a measure of the distance around a closed figure. In this lesson we will examine these concepts for a circle. In particular, the perimeter of a circle is called the circumference.
The Lesson:
The area and circumference of a circle involve the radius and/or the diameter of the circle. Sometimes we have information about a figure inside the circle or other segments in the circle which we use to calculate the radius or diameter. First, however, we need formulas for the area and circumference of a circle.
To derive a formula for the area of a circle, we will examine a diagram of a (partial) regular polygon with sides of length s and number of sides n. A diagram is shown below.
Assume that O is the center of the regular polygon. r, the distance from the center to a vertex, is called the radius of the polygon. This means r is the radius of a circle and O is the center of a circle into which a regular polygon can be inscribed. The perimeter of the polygon is the sum of the n sides of length s and is given by P = ns. As the number of sides in the polygon increases, the polygon fits more exactly into the circle (becomes closer to the actual shape and size of the circle). We will derive a formula for the area of the polygon in terms of the radius r and use this to calculate the area of a circle.
The Polygon:
Triangle AOB is a central triangle. There are n such triangles in this polygon, one for each side. Let M be the midpoint of AB, so each central triangle has area (1/2)(s)(OM). Since there are n such triangles in our polygon: A = n(1/2)(s)(OM). Since AB is a chord of measure s, MB has a measure of s/2, and from right triangle OMB, MB = r sin(180°/n) and OM = r cos(180°/n), so s = 2r sin(180°/n). Therefore:
A = n(1/2)(2r sin(180°/n))(r cos(180°/n))
A = n r^2 sin(180°/n) cos(180°/n)
The Circle:
Below we show two tables of values for n. As you can infer from the diagram below, the more sides we use in the polygon, the closer the polygon will get to matching the area of the circle. Here n is the number of sides, and it happens that n sin(180°/n) cos(180°/n) approaches π as n gets larger, allowing us to conclude that our formula for the total area of the circle is A = πr^2.
We now have the following two formulas:
□ The area of a circle is given by A = πr^2 where r is the length of the radius of the circle.
□ The circumference of a circle is given by C = 2πr where r is the length of the radius of the circle. Notice that the circumference could also be written as C = πd where d is the length of the diameter since d = 2r.
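The limiting argument behind the area formula is easy to see numerically. The sketch below is an editorial illustration, not part of the original lesson; it uses the inscribed n-gon area n r^2 sin(180°/n) cos(180°/n), rewritten with the double-angle identity as (n/2) r^2 sin(360°/n):

```python
import math

def ngon_area(n, r):
    """Area of a regular n-gon inscribed in a circle of radius r:
    n central triangles, each of area (1/2) * r^2 * sin(2*pi/n)."""
    return 0.5 * n * r * r * math.sin(2.0 * math.pi / n)

r = 1.0
for n in (6, 12, 96, 10000):
    print(n, ngon_area(n, r))

# The polygon areas climb toward the circle's area, pi * r^2.
assert abs(ngon_area(10000, r) - math.pi * r * r) < 1e-6
```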
Let's Practice:
i. A circle has a diameter of 8 feet. What are the area and circumference of this circle?
Notice that we are given a diameter of 8 which means that the radius is 4.
A = πr^2 = π(4)^2 = 16π square feet
C = 2πr = 2π(4) = 8π feet
It is typical to leave answers in terms of π unless there is a need for a decimal approximation of the answer.
ii. A right triangle is inscribed in a circle of radius 6 inches as shown in the diagram. What is the area of the shaded region?
The strategy is to find the area of the triangle and subtract it from the area of the circle. The area of the circle is π(6)^2 = 36π, and the area of the triangle, from the measurements given in the diagram, is 30. We have the area of the shaded region as 36π – 30 ≈ 83.1 square inches.
iii. A square is inscribed in a circle of circumference 10π. What is the area of the shaded region? A diagram is given below.
The strategy is to subtract the area of the square from the area of the circle. Since C = πd and C = 10π, we have πd = 10π, so d = 10.
In our diagram, AC is a diameter of the circle. Therefore, AC = 10. We will now use this information along with the Pythagorean Theorem to find the length x of a side of the square, which we have labeled in the diagram.
x^2 + x^2 = 10^2
2x^2 = 100
x^2 = 50
The area of the square is x^2 = 50 and the area of the circle is π(5)^2 = 25π, so the area of the shaded region is 25π – 50 ≈ 28.5.
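The two shaded-region answers in the worked examples round off as claimed (an editorial check, not part of the original lesson):

```python
import math

# Inscribed right triangle example: circle radius 6, triangle area 30
shaded_triangle = 36 * math.pi - 30
assert abs(shaded_triangle - 83.1) < 0.05

# Inscribed square example: circle radius 5, square area 50
shaded_square = 25 * math.pi - 50
assert abs(shaded_square - 28.5) < 0.05
print(round(shaded_triangle, 1), round(shaded_square, 1))   # 83.1 28.5
```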
A circle has an area of 40π. What are the radius and the circumference of this circle?
A right triangle having an area of 16 and an altitude of 3 is inscribed in a circle. What are the area and circumference of this circle? | {"url":"http://algebralab.org/lessons/lesson.aspx?file=Geometry_AreaPerCricle.xml","timestamp":"2014-04-25T01:50:53Z","content_type":null,"content_length":"44381","record_id":"<urn:uuid:1d289771-9fff-40b1-b90f-e35bfa99e4fd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Course Website Locator: epi207-01
Harvard School of Public Health
The following course websites match your request:
Fall 1 2009
Dr. J. Robins, Dr. M. Hernan
2.5 credits
Lectures. Two 2-hour sessions and one 2-hour lab each week.
Provides an in-depth investigation of statistical methods for drawing causal inferences from observational studies. Informal epidemiologic concepts such as confounding, selection bias, overall
effects, direct effects, and intermediate variables will be formally defined within the context of a counterfactual causal model and with the help of causal diagrams. Methods for the analysis of the
causal effects of time-varying exposures in the presence of time dependent covariates that are simultaneously confounders and intermediate variables will be emphasized. These methods include
g-computation algorithm estimators, inverse probability weighted estimators of marginal structural models, g-estimation of structural nested models. As a practicum, students will reanalyze data sets
using the above methods.
Course Activities: Class discussion, homework, practicum and final examination.
Course Note: EPI204, BIO210 and EPI289, or BIO233, or signature of instructor required; familiarity with logistic regression and survival analysis is expected; lab time will be announced at first
meeting. (5.06)
Course evaluations are an important method for feedback on the quality of course offerings. The submission of a course evaluation is a requirement for this course. Your grade for the course will be
made available only after you have submitted responses to at least the first three questions of the on-line evaluation for this course.
Fall 1 2008
Dr. J. Robins, Dr. M. Hernan
2.5 credits
Fall 1 2007
Dr. J. Robins, Dr. M. Hernan
2.5 credits
Fall 2006
Dr. J. Robins, Dr. M. Hernan
2.5 credits
| {"url":"http://my.hsph.harvard.edu/course/hsph-epi207-01","timestamp":"2014-04-19T12:36:58Z","content_type":null,"content_length":"11005","record_id":"<urn:uuid:ec680e1c-4466-49ec-9f39-f8ad56fbea02>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
A Simple Algebraic Proof Of Farkas's Lemma And Related Theorems
A Simple Algebraic Proof Of Farkas's Lemma And Related Theorems (1998)
author = {C. G. Broyden},
title = {A Simple Algebraic Proof Of Farkas's Lemma And Related Theorems},
year = {1998}
In this paper we have given an alternative proof of Farkas's lemma, a proof that is based on a theorem, the main theorem, that relates to the eigenvectors of certain orthogonal matrices. This theorem is
believed to be new, and the author is not aware of any similar theorem concerning orthogonal matrices although he recently proved the weak form of the theorem using Tucker's theorem (see [5]). His
proof of the theorem is "completely elementary" (a referee) and requires little more than a knowledge of matrix algebra for its understanding. Once the theorem is established, Tucker's theorem (via
the Cayley transform), Farkas's lemma and many other theorems of the alternative follow trivially. Thus the paper establishes a connection between the eigenvectors of orthogonal matrices, duality in
linear programming and theorems of the alternative that is not generally appreciated, and this may be of some theoretical interest.
356 Lectures on Polytopes - Ziegler - 1995
193 eds, ‘Nonlinear Programming - Mangasarian, Ritter - 1970
171 Linear Programming and Network Flows - Bazaraa, Jarvis, et al. - 1977
133 The Theory of Linear Economic Models - Gale
34 Finite algorithms in optimization and data analysis - Osborne - 1985
22 die theorie der einfachen ungleichungen. Journal fur die Reine und Angewandte Mathematik 124 - FARKAS - 1902
19 Dual systems of homogeneous linear relations - Tucker - 1956
18 A combinatorial abstraction of linear programming - Bland - 1977
6 On the Development of Optimization Theory - Prékopa - 1980
5 An elementary proof of Farkas’ lemma - Dax - 1997
5 Practical Methods of Optimization, Volume 2 - Fletcher - 1981
3 Uber die Anwendungen des mechanischen Princips von Fourier, Mathematische und Naturwissenschaftliche Berichte aus Ungarn - Farkas
3 Die algebraische Grundlage der Anwendungen des mechanischen Princips von Fourier, Mathematische und Naturwissenschaftliche Berichte aus Ungarn - Farkas
2 Some LP algorithms using orthogonal matrices. Calcolo - Broyden, Spaletta - 1995
2 1998]: Private communication - Burdakov
2 The relationship between theorems of the alternative, least norm problems, steepest descent directions and degeneracy: A review - Dax - 1993
2 Die algebraischen Grundlagen der Anwendungen des Fourierschen Princips in der Mechanik, Mathematische und Naturwissenschaftliche Berichte aus Ungarn - Farkas
2 A Real Linear Algebra - Fekete - 1985
2 Systems of linear relations - Good - 1959
1 Skew-symmetric matrices, staircase functions and theorems of the alternative - Broyden - 1986
1 Some LP algorithms. Technical Report 1/175, Consiglio Nazionale di Ricerca - Broyden - 1993
1 A further look at theorems of the alternative - Dax - 1994
1 On theorems of the alternative and duality - Dax, Sreedharan - 1997 | {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.9534","timestamp":"2014-04-20T00:15:06Z","content_type":null,"content_length":"25481","record_id":"<urn:uuid:f726097c-98d6-4789-b56f-86a138a462b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
Now that you know something about the properties of the two main types of waves (Lesson 43), we need to make sure that you can look at individual characteristics that waves can have.
• Not all waves are created equal!
• You need to be able to see the specific “faces” that each wave can have, based on three important characteristics: frequency, wavelength, and amplitude.
When we first started looking at SHM we defined period as the amount of time it takes for one cycle to complete... seconds per cycle
• Frequency is the same sort of idea, except we’re just going to flip things around.
• Frequency is a measurement of how many cycles can happen in a certain amount of time… cycles per second.
• If a motor is running so that it completes 50 revolutions in one second, I would say that it has a frequency of 50 Hertz.
• Hertz is the unit of frequency, and just means how many cycles per second.
□ It is abbreviated as Hz.
□ It is named after Heinrich Hertz, one member of the Hertz family that made many important contributions to physics.
• In formulas frequency appears as an "f".
Since frequency and period are exact inverses of each other, there is a very basic pair of formulas you can use to calculate one if you know the other:

f = 1 / T and T = 1 / f

It is very easy to do these calculations on calculators using the x^-1 button.
Example 1: The period of a pendulum is 4.5s. Determine the frequency of this pendulum.
The period means that it will take 4.5 seconds for the pendulum to swing back and forth once. So, I expect that my frequency will be a decimal, since it will complete a fraction of a swing per second. f = 1 / T = 1 / 4.5 s ≈ 0.22 Hz.
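For readers who like to check with a few lines of code, the inverse relationship looks like this (a Python sketch; the function names are mine, not the lesson's):

```python
# Frequency and period are inverses of each other: f = 1/T, T = 1/f.
def frequency(period_s):
    """Cycles per second (Hz) from seconds per cycle."""
    return 1.0 / period_s

def period(frequency_hz):
    """Seconds per cycle from cycles per second (Hz)."""
    return 1.0 / frequency_hz

# Example 1: a pendulum with a 4.5 s period completes only a
# fraction of a swing each second, so f is a decimal (~0.22 Hz).
f_pendulum = frequency(4.5)
```

Applying one function after the other gets you back where you started, which is exactly what the x^-1 calculator trick exploits.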
Wavelength is a property of a wave that most people (once they know what to look for) can spot quickly and easily, and use it as a way of telling waves apart. Look at the following diagram...
• Any of the parts of the wave that are pointing up like mountains are called crests. Any part that is sloping down like a valley is a trough.
• Wavelength is defined as the distance from a particular height on the wave to the next spot on the wave where it is at the same height and going in the same direction.
□ Usually it is measured in metres, just like any length.
• There isn’t a special spot you have to start on a wave to measure wavelength, just make sure you are back to the same height going in the same direction. Most people do like to measure from one
crest to the next crest (or trough to trough), just because they are easy to spot.
On a longitudinal wave, the wavelength is measured as the distance between the middles of two compressions, or the middles of two expansions.
This leads us to one of the most important formulas you will use when studying waves.
• Frequency tells us how many waves are passing a point per second, the inverse of time.
• Wavelength tells us the length of those waves in metres, almost like a displacement.
• If we multiply these two together, we are really multiplying 1/s and m… which gives us m/s, the velocity of the wave!
v = f λ

v = velocity of the wave (m/s)
f = frequency (Hz)
λ = wavelength (m)
Example 2: A wave is measured to have a frequency of 60Hz. If its wavelength is 24cm, determine how fast it is moving.

v = f λ = (60 Hz)(0.24 m) = 14.4 m/s
Example 3: The speed of light is always 3.00e8 m/s. Determine the frequency of red light which has a wavelength of 700nm.
Be careful when changing the 700nm into metres. Some people get really caught up with changing it into regular scientific notation with only one digit before the decimal. Why bother? It's only being used in a calculation. You'll probably just make a mistake changing the power of 10, so just substitute in the power for the prefix and leave everything else alone: 700 nm = 700 x 10^-9 m, since "nano" is 10^-9. Then f = v / λ = (3.00 x 10^8 m/s) ÷ (700 x 10^-9 m) ≈ 4.29 x 10^14 Hz.
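Both examples can be verified with a short script (a Python sketch; the function names are illustrative):

```python
# v = f * wavelength: wave speed from frequency (Hz) and wavelength (m).
def wave_speed(f_hz, wavelength_m):
    return f_hz * wavelength_m

def frequency_of(v_ms, wavelength_m):
    return v_ms / wavelength_m

v = wave_speed(60, 0.24)              # Example 2: 24 cm = 0.24 m -> 14.4 m/s
f_red = frequency_of(3.00e8, 700e-9)  # Example 3: red light -> ~4.29e14 Hz
```

Note the unit conversions happen before the call: 24 cm becomes 0.24 m, and 700 nm becomes 700e-9 m, exactly as the caution above recommends.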
Amplitude is a measure of how big the wave is.
• Imagine a wave in the ocean. It could be a little ripple or a giant tsunami.
□ What you are actually seeing are waves with different amplitudes.
□ They might have the exact same frequency and wavelength, but the amplitudes of the waves can be very different.
The amplitude of a wave is measured as:
1. the height from the equilibrium point to the highest point of a crest or
2. the depth from the equilibrium point to the lowest point of a trough
When you measure the amplitude of a wave, you are really looking at the energy of the wave.
• It takes more energy to make a bigger amplitude wave.
• Anytime you need to remember this, just think of a home stereo’s amplifier… it makes the amplitude of the waves bigger by using more electrical energy. | {"url":"http://www.studyphysics.ca/newnotes/20/unit03_mechanicalwaves/chp141516_waves/lesson44.htm","timestamp":"2014-04-17T16:04:09Z","content_type":null,"content_length":"9154","record_id":"<urn:uuid:d11df127-e5f0-44a3-9cef-e199b5f21fba>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
Verizon check - snopes.com
Originally Posted by
I had this forwarded to me recently as "Never piss off an engineer." Given the use of Euler's Formula and a geometric series, I'd say it was most likely cooked up by an electrical engineer. A
mathematician could probably come up with some harder formulas.
As an electrical engineer, I find that comment moderately offensive.
I could probably come up with some harder formulas just from one specific field (antennas), never mind some of the other stuff that I work with more regularly.
I also wonder why you chose to disparage only *electrical* engineers?
By the way, the infinite sum is indeed a geometric series of the form Σ a·r^k, with a = 1, r = 1/2, and with k starting at 1 and not 0. Since 1/(2^0) = 1/1, we can just do the sum via the formula and subtract one.

To wit: Σ (k=0 to inf) a·r^k = a / (1 - r)

For a = 1, r = 1/2, the sum is 2.

So, Σ (k=1 to inf) 1/(2^k) = 2 - 1 = 1, as explained above.
Euler's identity is e^(i*pi) + 1 = 0, so e^(i*pi) = -1.
So the check is for $0.002.
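The arithmetic is easy to verify in Python, assuming the commonly quoted form of the check, 0.002 + e^(i*pi) + Σ (k=1 to inf) 1/2^k dollars:

```python
import cmath
import math

euler = cmath.exp(1j * math.pi)                  # e^{i*pi} = -1 (up to tiny rounding)
geometric = sum(1 / 2**k for k in range(1, 60))  # partial sum; converges to 1
amount = 0.002 + euler.real + geometric          # dollars: -1 and +1 cancel
```

The complex exponential contributes -1, the geometric series contributes +1, and the check nets out to two-tenths of a cent.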
Next time solve the problem, and don't insult people. | {"url":"http://message.snopes.com/showthread.php?t=4059","timestamp":"2014-04-20T20:56:10Z","content_type":null,"content_length":"116390","record_id":"<urn:uuid:06a277ae-7628-424b-91e7-6935e9275a91>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
Closed Subsets
March 1st 2012, 11:20 AM #1
Closed Subsets
Are the following subsets closed?
1) $S=(0,1]$ in $X=\mathbb{R}$
2) $S=\{x=(x_1,x_2) \in \mathbb{R}^2 : x_1^2+x_2^2 \le 1 \}$ in $X=\mathbb{R}^2$
For 1) $S=(0,1]$ in $X=\mathbb{R}$ is not closed,
for if $(S_n)$ is a convergent sequence with $0 < S_n \le 1 \ \forall n \in \mathbb{N}$, then
$S_n \le 1 \ \forall n \implies \lim_{n \to \infty} S_n \le \lim_{n \to \infty} 1 = 1$
I dont know if this is right or how to continue.....?
Re: Closed Subsets
Re: Closed Subsets
Let $\displaystyle x_n=1/n$ be a sequence $\in S \forall n \in Z^+$
ie $\displaystyle 0 < 1/n \le 1 \ \forall n \in Z^+$ but the $\lim_{n \rightarrow \infty} 1/n = 0 \implies x_n \notin S \implies S=(0,1]$ is not closed...
Re: Closed Subsets
NO indeed.
The whole point is that $\forall n,~x_n\in S$ but $\displaystyle\lim_{n\to\infty}x_n \notin S$.
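Not a proof, of course, but the same point can be illustrated numerically (a Python sketch):

```python
def in_S(x):
    """Membership test for S = (0, 1]."""
    return 0 < x <= 1

terms = [1 / n for n in range(1, 10001)]  # x_n = 1/n: every term lies in S
limit = 0.0                               # lim x_n = 0

every_term_in_S = all(in_S(x) for x in terms)
limit_in_S = in_S(limit)                  # False: the limit escapes S
```

Every term of the sequence sits inside S, yet the limit does not, which is exactly the failure of closedness.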
Re: Closed Subsets
Re: Closed Subsets
Re: Closed Subsets
Definition: If S is a subset of an n.l.s, we say S is a closed set if for every
convergent sequence $(S_n)$ of points $S_n \in S$, the
$\displaystyle \lim_{n\to\infty}S_n \in S$
in our case $\implies \lim_{n\to \infty} S_n otin S$
Therefore the set is not closed...?
Re: Closed Subsets
Yes, the set is not closed.
We need to show $||x|| \le 1$
By Triangle inequality we have $||x|| \le ||x-x_n||+||x_n|| \forall n \in N$
but we know $\lim_{n\to\infty}x_n=x$
so $\forall \epsilon > 0, \ \exists n_0 \in \mathbb{N}$ s.t. $||x-x_{n_0}|| < \epsilon$
In particular, taking $n=n_0$:
$||x|| \le ||x-x_{n_0}||+||x_{n_0}|| < \epsilon + 1$
I dont understand the above line, why is there a '+1' on the RHS?
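One likely answer to that question: each $x_{n_0}$ lies in $S$, the closed unit ball, so $\|x_{n_0}\| \le 1$, and that bound is where the +1 comes from:

```latex
\|x\| \le \|x - x_{n_0}\| + \|x_{n_0}\| < \epsilon + \|x_{n_0}\| \le \epsilon + 1
\quad \text{for every } \epsilon > 0, \text{ hence } \|x\| \le 1.
```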
March 1st 2012, 12:46 PM #2
March 1st 2012, 01:52 PM #3
March 1st 2012, 02:16 PM #4
March 1st 2012, 02:26 PM #5
March 1st 2012, 02:32 PM #6
March 2nd 2012, 05:05 AM #7
March 9th 2012, 09:59 AM #8 | {"url":"http://mathhelpforum.com/differential-geometry/195537-closed-subsets.html","timestamp":"2014-04-20T13:02:29Z","content_type":null,"content_length":"64758","record_id":"<urn:uuid:87d820fe-4d21-475a-83e9-fada64d03433>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
Prove that if $\mathbf{A}\mathbf{B}=\mathbf{B}\mathbf{A}$ then $(\mathbf{A}+\mathbf{B})^n=\binom n0\mathbf{A}^n+\binom n1\mathbf{A}^{n-1}\mathbf{B}+\dots + \binom n{n-1} \mathbf{A}\mathbf{B}^{n-1}+\
binom nn\mathbf{B}^n$.
Hi

You can first prove by induction that $\mathbf{B}^k\mathbf{A}=\mathbf{A}\mathbf{B}^k$, and use this to prove by induction that $\left(\mathbf{A}+\mathbf{B}\right)^n=\sum_{k=0}^{n} \binom nk\mathbf{A}^k\mathbf{B}^{n-k}$:

$\left(\mathbf{A}+\mathbf{B}\right)^{n+1}=\left(\sum_{k=0}^{n} \binom nk\mathbf{A}^k\mathbf{B}^{n-k}\right)\left(\mathbf{A}+\mathbf{B}\right)$

$\left(\mathbf{A}+\mathbf{B}\right)^{n+1}=\sum_{k=0}^{n} \binom nk\mathbf{A}^k\mathbf{B}^{n-k}\mathbf{A}+\sum_{k=0}^{n} \binom nk\mathbf{A}^k\mathbf{B}^{n+1-k}$

$\left(\mathbf{A}+\mathbf{B}\right)^{n+1}=\sum_{k=0}^{n} \binom nk\
mathbf{A}^{k+1}\mathbf{B}^{n-k}+\sum_{k=0}^{n} \binom nk\mathbf{A}^k\mathbf{B}^{n+1-k}$ | {"url":"http://mathhelpforum.com/advanced-algebra/82336-matrix.html","timestamp":"2014-04-17T05:06:36Z","content_type":null,"content_length":"35237","record_id":"<urn:uuid:8d4f53b8-4a95-4ee1-af55-e0cf5d37bde5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
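The commuting-matrices binomial identity above is easy to spot-check numerically for a concrete pair (a sketch assuming numpy; B = A² commutes with A by construction):

```python
import numpy as np
from math import comb

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = A @ A                    # any polynomial in A commutes with A
n = 4

# Left side: (A + B)^n computed directly.
lhs = np.linalg.matrix_power(A + B, n)

# Right side: the binomial expansion, valid because AB = BA.
rhs = sum(
    comb(n, k) * np.linalg.matrix_power(A, n - k) @ np.linalg.matrix_power(B, k)
    for k in range(n + 1)
)
```

For non-commuting A and B the two sides would differ, since the rearrangement step B^{n-k} A → A B^{n-k} in the induction would fail.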
Is QEG's asymptotic safe point an example of self criticality?
According to wikipedia:
"In physics, self-organized criticality (SOC) is a property of (classes of) dynamical systems which have a critical point as an attractor. Their macroscopic behaviour thus displays the spatial and/or
temporal scale-invariance characteristic of the critical point of a phase transition, but without the need to tune control parameters to precise values."
The asymptotic safe point pretty much fits this description, apparently.
In relation to Quantum Gravity, the only thing I could find was this:
http://arxiv.org/abs/hep-th/0412307 Self-organized criticality in quantum gravity
Mohammad H. Ansari, Lee Smolin
(Submitted on 27 Dec 2004 (v1), last revised 18 May 2005 (this version, v5))
We study a simple model of spin network evolution motivated by the hypothesis that the emergence of classical space-time from a discrete microscopic dynamics may be a self-organized critical process.
Self organized critical systems are statistical systems that naturally evolve without fine tuning to critical states in which correlation functions are scale invariant. We study several rules for
evolution of frozen spin networks in which the spins labelling the edges evolve on a fixed graph. We find evidence for a set of rules which behaves analogously to sand pile models in which a critical
state emerges without fine tuning, in which some correlation functions become scale invariant.
Perhaps this is a clue that AS is really related to spin networks? | {"url":"http://www.physicsforums.com/showthread.php?p=3801939","timestamp":"2014-04-17T01:07:05Z","content_type":null,"content_length":"42232","record_id":"<urn:uuid:683719a6-d64d-440c-a3de-4458d5945113>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Period of sums/multiplied sines/cosines
July 10th 2011, 01:10 AM
Period of sums/multiplied sines/cosines
I am trying to revise some rules regarding the period of sines (or cosines) when (a) they are added , and (b) they are multiplied. I haven't had too much luck Googling this topic.
For example, what would be the fundamental period of:
y=cos(2.pi.2x) + cos(2.pi.3x)
Or, if we multiply:

y=cos(2.pi.2x) * cos(2.pi.3x)
People have suggested to me that I use trig. identities (or complex exponentials) but I am sure there is a more intuitive way such as considering the phases between both parts.
Any ideas appreciated !
July 10th 2011, 06:37 AM
Re: Period of sums/multiplied sines/cosines
Consider $f(x) = cos(2\pi .2x) + cos(2\pi .3x)$ if $f(x)$ is periodic then $f(x+T) = f(x)$ where $T$ is a period.
Now we have $cos(2\pi .2(x+T)) + cos(2\pi .3(x+T)) = cos(2\pi .2x) + cos(2\pi .3x)$. As cosine is a function with periodicity $2\pi$, we must have that $2\pi(2T)$ and $2\pi(3T)$ are both of the form $2m\pi$, where $m$ is an integer, not necessarily the same in both cases. In the first case it is enough if $T = \frac{m}{2}$, and in the second case if $T = \frac{n}{3}$, where $m,n \in Z$ and $Z$ is the set of integers. The smallest positive $T$ satisfying both conditions is $T=1$, so that is the fundamental period.
Multiplication of functions can be handled on the same lines.
Now try solving the following problem instead
$f(x) = cos(2\pi .\frac{x}{2}) + cos(2\pi .\frac{x}{3})$
July 10th 2011, 10:05 AM
Re: Period of sums/multiplied sines/cosines
Thank you Kaylan
So it looks like the period will be the LCM (lowest common multiple) of the individual periods?
I think the answer to your question is SIX
Do you have any good ideas how to find the period when the cosines are multiplied?
Thanks again
July 10th 2011, 10:25 AM
Re: Period of sums/multiplied sines/cosines
So it looks like the period will be the LCM (lowest common multiple) of the individual periods?
I think the answer to your question is SIX
That's true, the period in the case of sums is the LCM of the individual periods.
Do you have any good ideas how to find the period when the cosines are multiplied?
Now the case when cosines are multiplied, or for that matter any powers of sines or cosines, can be handled by transforming them to sums of sines and cosines.
July 10th 2011, 10:53 AM
Re: Period of sums/multiplied sines/cosines
Here are a few examples
$f(x) = 2cos(2\pi . \frac{x}{2}).cos(2\pi . \frac{x}{3}) \Rightarrow f(x) = cos(2\pi . \frac{5x}{6}) + cos(2\pi . \frac{x}{6})$ as you can see since the LCM has already been obtained during the
sum and difference of angles the approach does not change drastically. However in the following example the approach varies
$f(x) = cos^3 (2\pi . \frac{x}{12}) \Rightarrow f(x) = \frac{1}{4} (3cos(2\pi x) - cos(2\pi . \frac{x}{4}))$ , $T = 4$.
July 10th 2011, 11:30 AM
Re: Period of sums/multiplied sines/cosines
your question is:
For example, what would be the fundamental period of:
y=cos(2.pi.2x) + cos(2.pi.3x)
Or, if we multiply:

y=cos(2.pi.2x) * cos(2.pi.3x)

my answer is:
in both cases the total period is
that is the least common multiple of the two periods.
in fact the 1st period is x=(0;1/2)
and the 2nd period is x=(0;1/3)
therefore the range x=(0;1) is the least common multiple of the two single ranges.
July 10th 2011, 09:59 PM
Re: Period of sums/multiplied sines/cosines
Thanks everybody
Mike, you seem to be saying that this LCM method is also true for products of sines (or cosines), not just when they are added. So is this generally true?
Kalyan, in your cos-cubed example, after you transform it, we have periods of 1 and 4. So again it is the LCM.
Thanks again
July 11th 2011, 02:43 AM
Re: Period of sums/multiplied sines/cosines
Hi Matt,
Mike, you seem to be saying that this LCM method is also true for products of sines (or cosines), not just when they are added. So is this generally true?
Kalyan, in your cos-cubed example, after you transform it, we have periods of 1 and 4. So again it is the LCM.
The idea of taking an LCM is valid for the case of sum or multiplication of functions. All I was suggesting is that you have to consider taking the transformations when multiplication of functions is involved, as in the cases of $cos^2 \theta, cos^3 \theta$.
July 11th 2011, 10:53 AM
Re: Period of sums/multiplied sines/cosines
Hello again
Thank you for all your help so far Kalyan
I am glad you agree about the LCM idea !
However, I have just been trying an online graphing program.
Firstly I tried cos(2pi*(x/2))*cos(2pi*(x/3)). As expected I can see that the pattern repeats every 6 units.
Then I tried cos(2pi*x)*cos(2pi*(x/3)). Maybe I am wrong, but it looks like a period of 1.5. There is ALSO a period of 3. But the lowest common multiple (LCM) idea does not suggest that !
Does this mean that perhaps the LCM idea is not the way to go and really we should always use trig identies to get the period when our sines or cosines are multiplied?
July 11th 2011, 11:04 AM
Re: Period of sums/multiplied sines/cosines
Hi Gbber808,
Then I tried cos(2pi*x)*cos(2pi*(x/3)). Maybe I am wrong, but it looks like a period of 1.5. There is ALSO a period of 3. But the lowest common multiple (LCM) idea does not suggest that !
Well, in all our previous examples we dealt with simple cases of the type $\frac{x}{m} , \frac{x}{n}$, where we took the $lcm(m,n)$. In the case $f(x) = cos(2\pi x).cos(2 \pi \frac{x}{3}) = \frac{1}{2}(cos(2\pi \frac{4x}{3}) + cos(2 \pi \frac{2x}{3}))$ we see that it is enough to take $T = \frac{3}{2}$ for both $cos(2\pi \frac{4x}{3})$ and $cos(2\pi \frac{2x}{3})$ to be periodic.
July 11th 2011, 11:12 AM
Re: Period of sums/multiplied sines/cosines
Another way to look at it: writing the transformed sum as $cos(2\pi \frac{4x}{3}) + cos(2\pi \frac{2x}{3})$, take the LCM of the denominators, $lcm(3,3)=3$, and divide it by the HCF of the numerators, $hcf(4,2)=2$:
T = 3/2.
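These period claims are easy to spot-check numerically (a Python sketch assuming numpy; `looks_periodic` is a sampling test of f(x+T) = f(x), not a proof):

```python
import numpy as np

def looks_periodic(f, T, n=2000, tol=1e-9):
    """Numerically test f(x + T) == f(x) on a grid of sample points."""
    x = np.linspace(0.0, 12.0, n)
    return bool(np.allclose(f(x + T), f(x), atol=tol))

def product(x):
    return np.cos(2 * np.pi * x) * np.cos(2 * np.pi * x / 3)

def wave_sum(x):
    return np.cos(2 * np.pi * 2 * x) + np.cos(2 * np.pi * 3 * x)

product_has_T_3_2 = looks_periodic(product, 1.5)  # True: T = 3/2
product_has_T_1 = looks_periodic(product, 1.0)    # False: T = 1 fails
sum_has_T_1 = looks_periodic(wave_sum, 1.0)       # True: LCM of 1/2 and 1/3
```

The product of cos(2πx) and cos(2πx/3) repeats every 3/2 but not every 1, matching the discussion in this thread, while the sum from the opening post repeats every 1.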
July 13th 2011, 08:53 AM
Re: Period of sums/multiplied sines/cosines
Sorry for the late reply. Apart from lots of work I have also been revising my trig identities. It was a long ago I learnt those. Thanks again for all of your help Kalyanram. | {"url":"http://mathhelpforum.com/trigonometry/184356-period-sums-multiplied-sines-cosines-print.html","timestamp":"2014-04-16T05:38:22Z","content_type":null,"content_length":"18878","record_id":"<urn:uuid:99d981b9-b230-4764-b451-299ebab00168>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
New London, PA Math Tutor
Find a New London, PA Math Tutor
...British Lit. from Chaucer through Shakespeare, Milton, Austen, Dickens, Wordsworth, Shaw et al. I am also conversant with American Lit., including essayists, poets, novelists and playwrights
(Emerson, Thoreau, Whitman, Dickinson, Frost, Poe, Fitzgerald, Hemingway, Updike, Saroyan, T. Williams, A.
32 Subjects: including algebra 1, algebra 2, American history, biology
I am certified as an Elementary and ESOL teacher in DE, MD, and PA. I have a Bachelor's Degree in Elementary Education and ESOL, and a Master's Degree in Teaching English as a Second Language. I
was a classroom teacher for 12 years; I taught 4th grade for 8 years, and 5th grade for four years.
29 Subjects: including geometry, prealgebra, reading, writing
...Unfortunately, they moved out of the area and we could not finish the next level. I understand what is is like to learn a foreign language. I am fluent in German and have studied various
European languages and have a strong interest in learning about other cultures, but also to share with peop...
41 Subjects: including trigonometry, SAT math, linear algebra, English
...During my college career, I tutored college students in biology, chemistry, and algebra. I also tutored high school algebra, chemistry, and biology while I pursued my undergraduate education.
After I received my Bachelor's degree, I worked for one year as a high school algebra, chemistry, physics, and physical science teacher for students in 8th-12th grade.
51 Subjects: including algebra 2, English, SAT math, algebra 1
...This will be evident in each lesson. I expect myself to challenge each student and I expect each student to give his or her best effort. Reflection is essential for growth and therefore, I will
give feedback to parents and students and welcome feedback in return.
21 Subjects: including algebra 1, chemistry, statistics, reading | {"url":"http://www.purplemath.com/New_London_PA_Math_tutors.php","timestamp":"2014-04-17T13:20:53Z","content_type":null,"content_length":"23961","record_id":"<urn:uuid:af9a275f-ef3b-486e-93c1-84a10b4f0b5e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cartesian graph of density of water - temperature, Mathematics
Cartesian Graph of Density of Water - Temperature:
Example: The density of water was measured over a range of temperatures. Plot the subsequent recorded data on a Cartesian coordinate graph.
Temperature (°C) Density (g/ml)
40° 0.992
50° 0.988
60° 0.983
70° 0.978
80° 0.972
90° 0.965
100° 0.958
To plot the data, the first step is to label the x-axis and the y-axis. Let the x-axis be temperature in °C and the y-axis be density in g/ml.
The next step is to establish the units of measurement along each axis. The x-axis must range from approximately 40 to 100 and the y-axis from 0.95 to 1.00.
The points are then plotted one by one. Below figure shows the resulting Cartesian coordinate graph.
Figure: Cartesian Coordinate Graph of Density of Water vs. Temperature
Graphs are convenient since, at a single glance, the main features of the relationship between the two plotted physical quantities can be seen. Further, if some prior knowledge of the physical system under consideration is available, the plotted value pairs can be connected by a straight line or a smooth curve, and values at points not specifically measured or calculated can then be read off. In the figures, the data points have been connected by a straight line and a smooth curve, respectively. For instance, using the figure, the density of water at 65°C can be determined to be 0.98 g/ml. Because 65°C is within the scope of the available data, it is known as an interpolated value. Also using the figure, the water density at 101°C can be estimated to be 0.956 g/ml. Because 101°C is outside the scope of the available data, it is known as an extrapolated value. While the value of 0.956 g/ml appears reasonable, a significant physical fact is absent and not predictable from the data given: water boils at 100°C at atmospheric pressure. At temperatures above 100°C it is not a liquid, but a gas. Thus, the value of 0.956 g/ml is of no significance except when the pressure is above atmospheric.

This illustrates the relative ease of interpolating and extrapolating using graphs. It also points out the precautions that must be taken: extrapolation and interpolation should be done only if there is some prior knowledge of the system. That is particularly true for extrapolation, where the available data is being extended into a region where unknown physical changes may take place.
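The interpolation and extrapolation described above can be reproduced with a short script (a sketch assuming numpy; note that `np.interp` clamps at the endpoints, so the extrapolated point is computed by hand from the last two data points):

```python
import numpy as np

temps = np.array([40, 50, 60, 70, 80, 90, 100], dtype=float)  # deg C
dens = np.array([0.992, 0.988, 0.983, 0.978, 0.972, 0.965, 0.958])  # g/ml

# Interpolation at 65 C (inside the data range): ~0.9805 g/ml, i.e. 0.98.
rho_65 = np.interp(65.0, temps, dens)

# Naive linear extrapolation from the last two points (physically valid
# only if water stayed liquid above 100 C, which it does not at 1 atm).
slope = (dens[-1] - dens[-2]) / (temps[-1] - temps[-2])
rho_101 = dens[-1] + slope * (101.0 - temps[-1])  # ~0.957 g/ml
```

A straight-line extrapolation lands near the text's smooth-curve estimate of 0.956 g/ml, but the code cannot know about the phase change at 100°C, which is exactly the caution the lesson raises.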
Posted Date: 2/9/2013 5:32:45 AM | Location : United States
| {"url":"http://www.expertsmind.com/questions/cartesian-graph-of-density-of-water-temperature-30132596.aspx","timestamp":"2014-04-19T19:33:42Z","content_type":null,"content_length":"33049","record_id":"<urn:uuid:73893ae2-b4e5-4fa1-b8d6-6b7072a4eb12>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Monday, April 20th, 2009
24th April 2009, 1400 MC206
Dr. Florica Cirstea
University of Sydney
will speak on
On Classification of Isolated Singularities of Solutions Associated with the Hardy-Sobolev Operator
In this talk we present a complete classification of the isolated singularities of stationary solutions for nonlinear Schrodinger-type equations. We reveal a trichotomy of positive singular solutions
in the case of superlinear nonlinearities with subcritical growth, where the fundamental solutions associated with the Hardy-Sobolev operator play a crucial role.
This is joint work with N. Chaudhuri (University of Wollongong). | {"url":"http://blog.une.edu.au/mathseminars/2009/04/","timestamp":"2014-04-16T04:37:44Z","content_type":null,"content_length":"19827","record_id":"<urn:uuid:a637fbdc-602c-4797-b168-6443945be84d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find r(dot)
Can anyone show me how to find r(dot) of the following vector? r = <x,y> + (lm)/(m+n)<cos(theta),sin(theta)>
Sorry about that. Here is the complete question: Two point masses $m_{1}$ and $m_{2}$ are joined together by a rigid light rod of length $\ell$. If the rod moves on a vertical plane under the action of the earth's gravitational field only, show that the path of the center of mass is a parabola and the rod rotates about the center of mass at a uniform angular velocity. The given information I have is: $\vec{r_{1}} = <x,y> + \displaystyle{\frac{\ell m_{1}}{m_{1}+m_{2}}} <\cos (\theta),\sin (\theta)>$ and $\vec{r_{2}} = <x,y> - \displaystyle{\frac{\ell m_{2}}{m_{1}+m_{2}}} <\cos (\theta),\sin (\theta)>$ Prove that $\theta$ is cyclic. To do this, the first step is to find $\dot{\vec{r_{1}}}$ and $\dot{\vec{r_{2}}}$ and use them to find the Euler-Lagrange equation. I hope that's a little more specific. I just need help finding $\dot{\vec{r_{1}}}$. Thanks!
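For what it's worth, differentiating the posted $\vec{r_{1}}$ term by term with respect to time (with $x$, $y$, $\theta$ all functions of $t$, and the mass coefficient constant) gives, by the chain rule:

```latex
\dot{\vec{r_{1}}} = \langle \dot{x}, \dot{y} \rangle
  + \frac{\ell m_{1}}{m_{1}+m_{2}}\,\dot{\theta}\,
    \langle -\sin\theta, \cos\theta \rangle
```

since $\frac{d}{dt}\langle\cos\theta,\sin\theta\rangle = \dot{\theta}\langle-\sin\theta,\cos\theta\rangle$; $\dot{\vec{r_{2}}}$ follows the same way with the opposite sign on its second term.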
Last edited by Hartlw; October 20th 2011 at 06:23 PM. | {"url":"http://mathhelpforum.com/advanced-applied-math/189863-find-r-dot.html","timestamp":"2014-04-16T14:21:42Z","content_type":null,"content_length":"45306","record_id":"<urn:uuid:67eed7c0-6ccc-4e87-9b1a-f2fe019cb12d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Measuring the Density of Solids
Name of Corresponding Unit Plan: Archaeological Documentation
Grade Level: 5-8
Common Core Standards:
RS9-10. 3. Follow precisely a complex multistep procedure when carrying out experiments, taking measurements, or performing technical tasks, attending to special cases or exceptions defined in the
Math G-MG.2. Apply concepts of density based on area and volume in modeling situations (e.g., persons per square mile, BTUs per cubic foot).
Content Areas: Math, Science
Recommended Length/Duration: 45-60 minute period
Learning Goals: Students will understand the meaning of density and how to calculate its value.
1. Discuss how different materials have different qualities. They may be solid, liquid, or gaseous. They may be compact and heavy or expansive and light. One of the measurable characteristics of a
material is its density.
2. Define density as the relationship between a material’s volume and its mass. A material that is very massive but small, has a high density (e.g. iron, rock, mercury). A material that is less
massive for its size would have a low density (e.g. wood, Styrofoam). The standard for density is water because standard metric units derive from water. 1 ml of water = 1 gram, 1l of water = 1
kilogram. Therefore, the density of water is 1/1 g/ml or 1/1 kg/l. We say the density of water = 1. Materials more dense than water will have a value greater than 1. Materials less dense than
water will have a fractional value less than 1.
3. Describe that to find the density of a material you need to measure its volume and mass. The volume of a regular object can be determined by measuring its dimensions and using the appropriate
formula. However, most objects do not have regular shapes, so their volume must be determined by displacement.
4. To measure volume by displacement, an object can be submerged in water and measure the amount of water it displaces. Demonstrate how an object submerged will raise the level of the water in a
graduated cylinder or measuring cup.
5. The mass of an object can be determined by weighing it on a scale.
6. Density is calculated by dividing the mass by the volume. Work an example together.
7. Organize students into small work groups to find the density of the various objects provided.
8. After students have completed their measurements and calculations, compare findings and clarify any disagreements or questions.
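Steps 4-6 reduce to a one-line calculation once the two measurements are in hand (a Python sketch with hypothetical readings):

```python
def density(mass_g, volume_ml):
    """Step 6: density is mass divided by volume (g/ml)."""
    return mass_g / volume_ml

# Step 4, displacement: volume = water level with object - level without.
volume = 70.0 - 50.0          # ml of water displaced (hypothetical readings)
rho = density(54.0, volume)   # hypothetical 54 g sample -> 2.7 g/ml
sinks = rho > 1               # denser than water, so it would sink
```

Because water's density is 1 g/ml, the comparison in the last line tells students at a glance whether the material would sink or float.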
Assessments: Check worksheets for accuracy.
Materials/Resources: Density of Solids Worksheets (pdf), Scales, Calculators, Collection of various liquids and granular samples
Special Considerations: Weaker math students might be grouped with stronger students.
Scientific measurements are generally easier to take and calculate in metric than in English units. Additional work may be needed if students are not familiar with metric measurements, or if the
teacher chooses to use English units.
Extensions: Students might want to find the density of additional materials at home.
Students might want to find the average density of their body. | {"url":"http://lcmm.org/education/resource/stud/measuring-density-of-solids.html","timestamp":"2014-04-18T08:02:42Z","content_type":null,"content_length":"7326","record_id":"<urn:uuid:66975035-1a8b-4e4c-90cc-3f2864e9445d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quartile deviation for Grouped Data
Posted by mbalectures | Posted in Descriptive statistics | 14,961 views | Posted on 15-06-2010 |
Quartile deviation or semi-interquartile range is the dispersion which shows the degree of spread around the middle of a set of data. Since the difference between the third and first quartiles is called the interquartile range, half of the interquartile range is called the semi-interquartile range, also known as the quartile deviation. For both grouped and ungrouped data, quartile deviation can be calculated by using the formula:

Quartile Deviation (QD) = (Q3 - Q1) / 2
Coefficient of Quartile Deviation:
Coefficient of Quartile Deviation is used to compare the variation in two data sets. Since quartile deviation is not affected by extreme values, it is widely used for data containing extreme values. Coefficient of Quartile Deviation can be calculated by using the formula:

Coefficient of Quartile Deviation = (Q3 - Q1) / (Q3 + Q1)
The concept of quartile deviation and coefficient of quartile deviation can be explained with the help of simple problems for grouped data.
For Grouped Data
Problem: Following are the observations showing the age of 50 employees working in a wholesale center. Find the quartile deviation and coefficient of quartile deviation.
In case of frequency distribution, quartiles can be calculated by using the formula:

Qk = l + (h/f)(kN/4 - C), for k = 1, 2, 3

where l is the lower boundary of the class containing Qk, h is the class width, f is the frequency of that class, N is the total frequency, and C is the cumulative frequency of the preceding class.
First Quartile (Q1)
In case of frequency distribution the first quartile can be calculated by using the formula given below:

Q1 = l + (h/f)(N/4 - C)
Third Quartile (Q3)
Like the first and second quartile, the third quartile can be calculated by using the formula:

Q3 = l + (h/f)(3N/4 - C)
By putting the values into the formulas of quartile deviation and coefficient of quartile deviation we get:
See also calculation of quartile deviation for ungrouped data | {"url":"http://mba-lectures.com/statistics/descriptive-statistics/302/quartile-deviation-or-semi-interquartile-range-grouped-data.html","timestamp":"2014-04-17T21:22:55Z","content_type":null,"content_length":"39110","record_id":"<urn:uuid:50ad8a17-15fd-422a-a911-51a3e57701a3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
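As a worked illustration of the grouped-data procedure above, here it is applied to a hypothetical frequency table of 50 ages (a Python sketch; the class boundaries and frequencies are invented for illustration, not the article's original data):

```python
# Hypothetical grouped ages: (lower boundary, upper boundary, frequency).
classes = [(20, 30, 6), (30, 40, 18), (40, 50, 14), (50, 60, 8), (60, 70, 4)]
N = sum(f for *_, f in classes)  # total frequency: 50

def quartile(k):
    """Q_k = l + (h/f) * (k*N/4 - C) for grouped data."""
    target = k * N / 4
    C = 0  # cumulative frequency of preceding classes
    for lo, hi, f in classes:
        if C + f >= target:
            return lo + (hi - lo) / f * (target - C)
        C += f
    raise ValueError("target beyond the table")

q1, q3 = quartile(1), quartile(3)
qd = (q3 - q1) / 2                 # quartile deviation
coeff = (q3 - q1) / (q3 + q1)      # coefficient of quartile deviation
```

For this made-up table, Q1 ≈ 33.61, Q3 ≈ 49.64, so QD ≈ 8.02 years and the coefficient ≈ 0.19.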
Dorothy Wrinch was a mathematician who made contributions to the areas of mathematics, philosophy, physics, and biochemistry. She was born in Rosario, Argentina, where her British parents were
temporarily located while her father, an engineer, worked for a British firm. Dorothy was raised in England, and in 1913 began her studies in mathematics and philosophy at Girton College, Cambridge
University. She was the only Girton woman Wrangler in the Mathematical Tripos in 1916, earning a First Class degree. In 1917 she took Part II of the Moral Sciences (Philosophy) Tripos so that she
could study symbolic logic with Bertrand Russell whom she had met during her first year. She remained at Girton as a research scholar during the academic year 1917-1918, continuing to correspond with
Russell who had moved to London. In 1918 she won Girton's prestigious Gamble Prize (given to distinguished alumnae) for her work on transfinite numbers. The prize had been awarded to Grace Chisholm
Young three years earlier.
In 1918 Wrinch began teaching algebra, trigonometry, calculus, and solid geometry to first- and second-year mathematical honor students at University College, London. While teaching at University
College she also earned M.S. (1920) and D.Sc. degrees (1922). She returned to Girton College in 1921 with a research fellowship. The next year she married John William Nicholson, the director of
studies in mathematics and physics at Oxford College. Wrinch remained at Cambridge during the first year of their marriage, but in 1923 she moved to Oxford to become a part-time mathematics tutor and
lecturer at one of Oxford's women's colleges, Lady Margaret Hall. She also gave lectures at the other women's colleges on a per-term basis. Her status changed in 1927, however, when she received a
three-year appointment from Lady Margaret Hall as a lecturer in mathematics. She also became the first woman to qualify for a university lectureship in mathematics at Oxford which meant that her
lectures were open to male students.
Russell's influence had led Wrinch towards a mathematical interest in logic and epistemology, but neither of these subjects were popular among the general population of English mathematicians and
philosophers, especially after Russell abandoned his academic pursuits in 1921. By 1929 she had published 42 papers in pure mathematics, mathematical physics, and philosophy of science, including six
papers in probability theory and the theory of the scientific method written with Harold Jeffreys between 1919 and 1923. One of Wrinch's early papers in 1923 covered the topic of mediate cardinals
from Russell's Principia Mathematica [Abstract]. Wrinch wrote two papers with her husband, including a 1925 paper on Laplace's equation and surfaces of revolution [Abstract]. An example of one of her
papers in complex analysis is the 1928 paper on the asymptotic evaluation of functions defined by contour integrals [Abstract]. Fifteen of her papers were submitted in 1929 as a basis for a D.Sc.
degree from Oxford, the first such degree to be awarded to a woman by that university. All of her papers were published under her maiden name. As Abir-Am writes, "Her insistence on her own
professional identity stemmed both from her Girton education, which promoted female scholarly independence, and from the socialist and feminist beliefs she had absorbed in Russell's circles" [1].
Wrinch and Nicholson had a daughter in 1928. By 1930, however, her husband's excessive drinking had caused a nervous breakdown and the two separated that year (the marriage was dissolved in 1938).
Wrinch also published a book in 1930, using the pseudonym of Jean Ayling, called The Retreat from Parenthood that tried to illustrate the difficulties of professional women in combining careers with
parenthood, using examples no doubt based on her own life and those of her friends. Her utopian ideals of childcare stressed the need for professional women to have freedom from both domestic and
financial anxieties through the establishment of Child Rearing Services.
In 1931 Wrinch began shifting her research focus to molecular biology. With a leave from Oxford and with the help of several research fellowships, she spent the years 1931 to 1934 in Vienna, Paris,
London, and Berlin, visiting various universities and laboratories and studying the biological applications of the mathematical theory of potential theory in explaining the mechanics of chromosomes
and the structure of proteins. During the summer of 1932 she became a founding member of the Theoretical Biology Club, a group of scientists who shared the vision that philosophy, mathematics,
physics, chemistry and biology could all contribute to the understanding and investigation of the complexity of living organisms. She published her first paper on proteins, "Chromosome behavior in
terms of protein pattern," in Nature in 1934. In 1935 Wrinch received a five-year research fellowship from the Rockefeller Foundation to support her work in mathematical applications to biological
problems. It was during this time that she developed her controversial model of the structure of proteins. She presented the first architectural plan of protein molecules at the 1937 meeting of the
British Association for the Advancement of Science. Abir-Am describes her theory [2]:
This theory combined certain ideas of mathematical symmetry with the notion of a relatively rare type of chemical bond, called the cyclol bond. It suggested that the two-dimensional cyclol bond
was the main link between the proteins' building blocks, the amino acids. In Wrinch's theory, the spatial structure of proteins (known to be the source of their functional versatility) was built
of fabrics instead of the chains that then current chemical theory assumed to exist on the basis of inferences from the results of analytic protein chemistry.
Wrinch's theory, promoted in part by a lecturing tour in the United States in 1937, created great interest among those scientists concerned with the molecular viewpoint of proteins as it could
explain qualitatively much of what was currently known about the behavior of proteins. Niels Bohr wrote in 1939: "Dr. Wrinch's work is indeed a most striking illustration of the fruitfulness of the
application of mathematical argumentation to problems of natural science" [8]. The New York Times reported on her talk at the spring 1940 meeting of the American Philosophical Society:
Dr. Wrinch presented a model of the building blocks of living things in the form of a hollow cage, a truncated tetrahedron in shape.
The metal model, two four-sided pyramids base to base, was stamped with holes showing the fundamental hexagonal ring structure of proteins. The patterns, running over the surface of the cage,
comprised many of these six-sided rings, all interconnected.
Dr. Wrinch took diagrams of the structures of related chemical and biological substances and fitted them into the "template" of living matter; that is, the geometric surface pattern of the atoms
in her ultimate life unit. These included the sterols, from which bile acids can be built up; hormones, vitamins, cancer-causing substances and heart-stimulating drugs.
Offering an explanation of how viruses reproduce, she said that a protein unit gave birth to another by having a second layer form on its surface in exactly the same pattern, with the first
splitting and flattening.
Although protein cages are "hollow" they are filled with substances the nature of which, it was stated, might determine differences between proteins which are similar in their molecular
Wrinch's cyclol bond theory also came under attack, however, by a group of British protein X-ray crystallographers who argued that her model was not supported by X-ray data, despite her claims to the
contrary. In the United States, Linus Pauling calculated that the cyclol bond was too thermodynamically unstable to exist in nature or the laboratory, leading to an ongoing confrontation between the
two strong-willed scientists carried out publicly and in the pages of the Journal of the American Chemical Society. The dispute even led Wrinch's 13-year old daughter, Pamela, to write to Pauling
complaining that "Your attacks on my mother have been made rather too frequently. If you both think each other is wrong, it is best to prove it instead of writing disagreeable things about each other
in papers. I think it would be best to have it out and see which one of you is really right" [7]. Actually, in the end, neither was. Pauling's claims and calculations were refuted in 1952 when the
cyclol bond was indeed discovered by a Swiss chemist in the ergot (a parasitic fungus) alkaloids. By that time, however, improved experimental techniques had led most scientists to believe proteins
did not have the cyclol structure and molecular biologists had shifted their attention to the study of DNA rather than proteins as the "secret of life." Nevertheless, Wrinch's hypothesis produced so
much interest in the structure of proteins that biologists eventually discovered over 100 such structures by 1980.
Partially because of Pauling's attacks, Wrinch had a difficult time finding a full-time position after emigrating to America in 1939. After a one-year visiting position in the chemistry department at
Johns Hopkins University where she taught a course on the mathematics connected with organic structural chemistry, she accepted in 1941 a joint visiting research professorship at Amherst, Smith, and
Mount Holyoke Colleges. This arrangement was engineered by Otto Glaser, a biologist and vice-president of Amherst College, who had been corresponding with Wrinch for three years and was a strong
supporter of her cyclol theory. The two married that same year in the unusual setting of the Marine Biological Laboratory at Woods Hole, Massachusetts. An announcement of the wedding in the New York
Times mentioned that "her work on protein molecules started a controversy among the leading chemists, physicists and biologists of this country and Europe." The next year she was appointed as a
special research professor of physics at Smith College where she worked with graduate students, lectured, and continued her research. She and her husband spent their summers at Woods Hole. Wrinch
became an United States citizen in 1943. Upon her husband's death in February 1951 she moved to a residence on the Smith College campus, remaining at Smith until her retirement in 1971.
Wrinch's research during the 1940s focused on mathematical techniques for interpreting X-ray data of complicated crystal structures. The basic steps in finding a structure of proteins by X-ray
crystallography are to form a high quality crystal from a sample of protein, place the crystal in an X-ray beam and measure the intensities of the diffraction spots, then compute the structure from
the diffraction intensities using Fourier analysis. In 1946 Wrinch published a monograph on Fourier Transforms and Structure Factors that was an important contribution to this use of Fourier series
in representing and determining the periodic structure of crystals. When the cyclol bond was found in nature and then synthesized in a laboratory during the 1950s, Wrinch felt vindicated and spent
much of her remaining professional life working on her mathematical theory of protein structure. Her two books, Chemical Aspects of the Structure of Small Peptides, An Introduction published in 1960,
and the sequel, Chemical Aspects of Polypeptide Chain Structures and the Cyclol Theory published in 1965, were meant to be the culmination of her thirty years of developing and defending her study of
proteins. Wrinch's list of publications eventually included 192 papers and books. A complete bibliography is given in [7]. In the same volume, Carolyn Cohen, professor of biology at Brandeis
University, describes Wrinch's influence in molecular biology:
...Dorothy Wrinch began her creative work as a mathematician and philosopher, and in the 1930s turned her attention to Biology. She became a member of a small remarkable group at Cambridge
University—the Theoretical Biology Club. Joseph Needham's classic book, Order and Life (1936), was, to a large extent, generated from their discussions. It is dedicated to members of that club;
among them are J.D. Bernal, J.H. Woodger, C.H. Waddington; Dorothy Needham and Dorothy Wrinch. The full story of this group is not yet known, but they were primarily concerned with the analysis
of biological form–both its philosophical and physical basis. And their common belief was in the vital importance of proteins as the key structures in Biology. Dorothy Wrinch's life's work
centered on this problem, and she influenced many, including Joseph Needham in England and, in America, Ross Harrison, the great embryologist at Yale, and Irving Langmuir, the physical chemist. I
believe that her influence has been vastly underestimated.
Wrinch moved to Woods Hole after her retirement from Smith in 1971. Her daughter, Pamela, had become one of the first women to earn a Ph.D. in international relations when she received her degree
from Yale University in 1954. Pamela died tragically in a fire in November 1975. Dorothy Wrinch died 10 weeks later. In [7], Marjorie Senechal writes:
During her years with us [at Smith], she was an inspired teacher, a severe critic, an example of dedication and courage. We will always be grateful to her for the guidance and encouragement she
showed to students and junior colleagues, and for the uncompromisingly high standards she set for herself and for others. We share her concern that the great questions of science be studied in
their whole as well as in their parts.
1. Abir-Am, Pnina G. "Synergy or Clash: Disciplinary and Marital Strategies in the Career of Mathematical Biologist Dorothy Wrinch," in Uneasy Careers and Intimate Lives: Women in Science 1789-1979,
Pnina G. Abir-Am and Dorinda Outram, Editors, Rutgers University Press, 1987, 239-280.
2. Abir-Am, Pnina G. "Dorothy Maud Wrinch (1894-1976)," in Women in Chemistry and Physics: A Biobibliographic Sourcebook, edited by Louise S. Grinstein, Rose K. Rose, and Miriam H. Rafailovich,
Greenwood Press, 1993, 605-612.
3. Carey, Charles W. "Dorothy Maud Wrinch," in American National Biography, Vol. 24, Oxford University Press, 1999, 69-71.
4. "Dorothy Maud Wrinch". A to Z of Women in Science and Math, Lisa Yount (Editor), Facts on File, Inc., 1999.
5. Julian, Maureen M. "Women in Crystallography," in Women of Science-Righting the Record, G. Kass-Simon and Patricia Farnes, Editors, Indiana University Press, 1990, 364-368.
6. Hodgkin, Dorothy Crowfoot and Harold Jeffreys. "Obituary - Dorothy Wrinch," Nature, Vol. 260 (April 8, 1976), 564.
7. Structures of Matter and Patterns in Science, inspired by the work and life of Dorothy Wrinch, 1894-1976," The Proceedings of a Symposium held at Smith College, Northampton, Massachusetts
September 28-30, 1977, and Selected papers of Dorothy Wrinch, from the Sophia Smith Collection. Marjorie Senechal, editor. Schenkman Publishing Company, 1980.
8. Senechal, Marjorie. "A Prophet without Honor: Dorothy Wrinch, Scientist, 1894-1976," Smith Alumnae Quarterly, Vol. 68 (1977), 18-23.
9. Senechal, Marjorie. "Hardy as Mentor," Mathematical Intelligencer, Vol. 29, No. 1 (2007), 16-23. Describes the time from when Wrinch entered Girton College through her early years as a
professional mathematician, and the role G. H. Hardy played as her mentor.
10. Howie, David. Interpreting Probability, Controversies and Developments in the Early Twentieth Century, Cambridge University Press, 2002. [Contains a description of Dorothy Wrinch's work with
Harold Jeffries in probability theory and the scientific method.]
11. "Protein Units Put in Graphic 'Cage'," New York Times, April 19, 1940, p14.
12. "Dr. O. C. Glaser Weds Dr. Dorothy Wrinch," New York Times, August 21, 1941, p20.
13. "Waffle-Iron Theory of Proteins," New York Times, February 2, 1947, pE9.
14. Obituary, New York Times, February 15, 1976, p67.
15. Biography at the MacTutor History of Mathematics Archive.
Note: The papers, letters, diaries, and notebooks of Dorothy Wrinch are part of the Sophia Smith Collection at Smith College. | {"url":"http://agnesscott.edu/lriddle/women/wrinch.htm","timestamp":"2014-04-17T00:49:56Z","content_type":null,"content_length":"19849","record_id":"<urn:uuid:f7975817-4419-45d8-879c-09447053d474>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Forces with angles, Find the acceleration
1. The problem statement, all variables and given/known data
Two forces act on a 4.00 kg mass which is sitting at rest on a horizontal frictionless surface. One force is 6.50 N directed 55° West of South, the other is 7.00 N directed 25° North of West. What
acceleration does the mass receive?
2. Relevant equations
sin = o/h
cos = a/h
tan = o/a
a = F/m
3. The attempt at a solution
I drew forces (not accurately) on paint (attachment) to help me solve this question then did the following calculations:
sin = o/h sin35° = o/6.5N o = 3.7N [S]
sin = o/h sin25° = o/7N o = 3N [N]
Fapx = Fapx1 + Fapx2
Fapx = 3.7N - 3N
Fapx = 0.7N
cos = a/h cos35° = a/6.5N a = 5.3N [W]
cos = a/h cos25° = a/7N a = 6.3N [W]
Fapy = Fapy1 + Fapy2
Fapy = 5.3N + 6.3N
Fapy = 11.6N
tan = o/a tan = 11.6/0.7 tan = 86.5°
sin = o/h sin86.5° = 11.6/h h = 11.622N
a = F/m a = 11.622N/4.00kg a = 2.9 m/s^2
_______________________________________________________________________ ______
I was wondering if I did the question right. Did I get the right answer?
Firstly, you have not helped yourself with your diagram.
"Your" 55° angle is smaller than your 25° angle, so when you got two x-components of 3N and 3.7N you did not recognise that at least one of them was wrong, as the x-component of the 6.5N
force is way less than the x-component of the 7N force. The y components don't look good either. You seem to have added the y components and subtracted the x ?? | {"url":"http://www.physicsforums.com/showpost.php?p=3673839&postcount=2","timestamp":"2014-04-20T21:24:46Z","content_type":null,"content_length":"9773","record_id":"<urn:uuid:7a99b2f7-18db-44a2-8557-6d50a4af5b8a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
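For reference, the vector sum in this thread can be checked numerically. Axes are chosen here as x = East, y = North (an assumption for the sketch; the magnitude is the same whichever labels are used):

```python
import math

rad = math.radians
# 6.50 N at 55° West of South: start due South, rotate 55° toward West
f1 = (-6.50 * math.sin(rad(55)), -6.50 * math.cos(rad(55)))
# 7.00 N at 25° North of West: start due West, rotate 25° toward North
f2 = (-7.00 * math.cos(rad(25)), 7.00 * math.sin(rad(25)))

fx, fy = f1[0] + f2[0], f1[1] + f2[1]
net = math.hypot(fx, fy)     # ≈ 11.69 N (slightly above 11.62 N because
a = net / 4.00               # no intermediate rounding); a ≈ 2.9 m/s²
```

So the final answer of about 2.9 m/s² is right; the critique above concerns the mislabelled axes and the rounded intermediate components, not the magnitude.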
What is 202 pounds in kilograms?
You asked:
What is 202 pounds in kilograms?
91.62565874 kilograms
the mass 91.62565874 kilograms
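The figure follows from the exact international definition of the pound (1 lb = 0.45359237 kg exactly):

```python
LB_TO_KG = 0.45359237        # exact by definition of the avoirdupois pound
kg = 202 * LB_TO_KG          # ≈ 91.62565874 kg, matching the answer above
```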
| {"url":"http://www.evi.com/q/what_is_202_pounds_in_kilograms","timestamp":"2014-04-21T15:02:40Z","content_type":null,"content_length":"53311","record_id":"<urn:uuid:5051ee4d-da25-4da2-80c9-e7ad518cfd14>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
The Encyclopedia of Integer Sequences
The Encyclopedia of Integer Sequences, N. J. A. Sloane and S. Plouffe, Academic Press, San Diego, 1995, 587 pp. ISBN 0-12-558630-2.
Reviews, etc
• Favorite quotation from a reader of the 1973 book: "There's the Old Testament, the New Testament, and The Handbook of Integer Sequences".
• The review by Richard K. Guy in The American Mathematical Monthly (Volume 104, Number 2, Feb. 1997, pp. 180-184) begins: "John Conway calls The Encyclopedia `The best present I've had in
years'... "
• A Question of Numbers - Fascinating review by Brian Hayes in American Scientist.
• Equally fascinating review by J. M. Borwein and R. M. Corless in SIAM Review.
• One of the many reviews from the Amazon.com web page: Rating: "***** My God!"
Reviewer: You Seng Peng from Taipei, Taiwan. December 12, 1999.
"Since combinatorics is my major, this book fulfills my dream. It contains over 5000 sequences, from famous Fibonacci to notorious 1,3,6,11,17,25,... (perfect ruler, general term still
unknown), to nonsense 1,11,21,1211,111221,.. (every term describe the former term). Nearly every important integer sequence in mathematics get a line here, with references. This is a dream
book for combinatorics specialists, a must for high-school teachers while doing some short essays with gifted students, a fun book for mathematics fans, especially those like mathematical
• Even though the sequence table is now accessible through the Web and via email, you still need the book because:
□ it contains a convenient selection of the best 5487 sequences (the on-line version now has well over 50,000 sequences, far too many to print);
□ for the introductory chapters that describe (among other things) techniques for analyzing sequences;
□ for the illustrations, that describe a selection of the most interesting sequences;
□ because it is nice to see all the sequences together; and
□ as many readers have mentioned, it makes excellent bedtime reading!
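The "every term describes the former term" sequence quoted in the reviews above (1, 11, 21, 1211, 111221, …) is easy to generate; a short sketch:

```python
from itertools import groupby

def look_and_say(term):
    # "1211" -> "one 1, one 2, two 1s" -> "111221"
    return "".join(str(len(list(g))) + digit for digit, g in groupby(term))

seq = ["1"]
for _ in range(4):
    seq.append(look_and_say(seq[-1]))
print(seq)   # ['1', '11', '21', '1211', '111221']
```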
Ordering information
The book may be ordered:
• from the publisher, Academic Press:
Details: ISBN 0-12-558630-2, $44.95 (as of Feb 1997), 587pp., 5487 sequences. Academic Press: phone (800) 321 5068 in the US, or email to ap@acad.com. To order in Europe, contact Academic Press
Inc. (London), 24-28 Oval Road, London NW1 7DX, Great Britain, phone: 01-81-300-33-22.
• or from Amazon.com
Further information
• The publisher unfortunately omitted the list of figures. This is available as either a plain text or postscript file. It can be pasted over the (presently blank) page vi.
• (Very) partial listing of errors in book | {"url":"https://oeis.org/book.html","timestamp":"2014-04-17T07:46:17Z","content_type":null,"content_length":"7941","record_id":"<urn:uuid:da766e00-0017-41c4-bd75-7d9cb0550b8f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Kinematics] Faster way?
June 1st 2010, 10:36 PM
[Kinematics] Faster way?
I know how to do it, but it's ridiculously long.
How do I do this the fast way, assuming there is one.
I found the area of all and removed the negative areas to get the distance.
And correctly got A.
June 1st 2010, 10:46 PM
I don't think there's a faster way with the information you've been given. You just have to be fast with finding areas of triangles and rectangles (and trapezoid/trapezium) and seeing when
certain things cancel each other out. :(
Now, if you'd been given the start position (call it a) and end position (call it b), then there would be an easier way. It would be (1/14)(b-a). (By the fundamental theorem of calculus.)
Edit: You could however eliminate B immediately, since there's no way a denominator of 14 could get simplified to 16. You could eliminate E by eyeballing it. But I don't see easy ways to
eliminate C and D.
June 1st 2010, 11:03 PM
I don't think there's a faster way with the information you've been given. You just have to be fast with finding areas of triangles and rectangles (and trapezoid/trapezium) and seeing when
certain things cancel each other out. :(
Now, if you'd been given the start position (call it a) and end position (call it b), then there would be an easier way. It would be (1/14)(b-a). (By the fundamental theorem of calculus.)
Edit: You could however eliminate B immediately, since there's no way a denominator of 14 could get simplified to 16. You could eliminate E by eyeballing it. But I don't see easy ways to
eliminate C and D.
Could you show me how you would do it?
June 1st 2010, 11:14 PM
No problem.
So going from point A to B, the area is 0 because the positive and negative contributions are equal.
Then from B to C it's -18.
Then from C to the place where it crosses the x-axis, it's -6, making overall total of -24. Then from that point to D it's 24, making overall total 0.
Then from D to E it's 2 * (average of 8 and 12) = 20. So the answer is 20/14 = 10/7.
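The running tally in the reply above can be written out compactly (the five signed areas under the v–t graph, in order):

```python
# Signed areas quoted in the reply above:
areas = [0,      # A to B: positive and negative parts cancel
         -18,    # B to C
         -6,     # C to the axis crossing (running total -24)
         24,     # axis crossing to D (running total back to 0)
         20]     # D to E: 2 * (average of 8 and 12)
displacement = sum(areas)          # 20
avg_velocity = displacement / 14   # 10/7 ≈ 1.43
```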
June 1st 2010, 11:44 PM
Thanks, that's really helpful. I wasn't aware of the first part being that they would cancel out. | {"url":"http://mathhelpforum.com/pre-calculus/147422-kinematics-faster-way-print.html","timestamp":"2014-04-20T00:31:19Z","content_type":null,"content_length":"8236","record_id":"<urn:uuid:18fd09f5-cc88-43e2-9cb5-973ee7438895>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Additional Mathematics Syllabus
New Syllabus Additional Mathematics is specially written for students preparing for the GCE 'O' level examinations, but will be useful for any students studying in ...
O Level Additional Mathematics Syllabus Secondary Three/Four 33 7 O LEVEL ADDITIONAL MATHEMATICS Knowledge of the content of O Level Mathematics ...
Contents Cambridge IGCSE Additional Mathematics Syllabus code 0606 1. Introduction .....2
additional mathematics gce ordinary level (syllabus 4038) contents page gce ordinary level mathematics 4038 1 mathematical formulae 8 mathematical notation 9
A list of resources for students studying O Level Additional Maths (4037). ... CIE syllabus material and other resources are listed in our Publications Catalogue.
CONTENTS Notes page 2 GCE Ordinary Level and School Certificate Syllabuses Mathematics (Syllabus D) 4024 3 Additional Mathematics 4037 11 Statistics 4040* 16
Topics that are covered in the Additional Mathematics syllabus include functions, quadratic equations, differentiation and integration Additional Mathematics in Hong Kong
- Effective for Examinations from May/June 2012 - Includes Mark Scheme and Specimen Paper -
additional mathematics gce ordinary level (syllabus 4018) contents page notes 1 gce ordinary level additional mathematics 4018 2 mathematical notation 7
Mathematics Syllabus for secondary schools by ... schools, Mathematics and Additional Mathematics for secondary schools. The Malaysian school mathematics curriculum aims ...
The parts in italics are for the Additional Mathematics syllabus only. THEMES AND TOPICS LEARNING OBJECTIVES Learners will: BASIC COMPETENTCIES
Purple Math - Practical Algebra Lesson Huraian Sukatan Pelajaran Add Maths (Syllabus Book)- Form 4 Huraian Sukatan Pelajaran Add Maths (Syllabus Book)- Form 5
MATHEMATICS. Syllabus; Curriculum Specifications; Scheme Of Work; Teaching Tools ... Additional Mathematics; Mathematics SPM; Mathematics PMR; Exam papers. SPM Mathematics | {"url":"http://m.webtopicture.com/additional/additional-mathematics-syllabus.html","timestamp":"2014-04-20T05:44:46Z","content_type":null,"content_length":"27246","record_id":"<urn:uuid:2bf5ca2d-f611-4488-8819-b9f84dcb7841>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
The equation can be explained like this: Geraldo has $40 already saved, and he needs more money to reach the $129 for the CD. Let's say y is the additional amount of money that he needs, so
$40 + y must equal $129. If he saves x dollars every week for four months (about 16 weeks), then y = 16x. So we have $40 + y = $129, i.e. $40 + 16x = $129. To find out how much Geraldo should save each week, the equation needs to be solved for
x: 16x = 129 - 40, so 16x = 89 and x = 5.5625, or about $5.57. Saving $5.5625 each week gives 16 × 5.5625 = $89 over the four months, which summed with the $40 equals the $129 for the CD. | {"url":"http://mathhelpforum.com/algebra/8027-modeling-print.html","timestamp":"2014-04-16T14:03:52Z","content_type":null,"content_length":"4622","record_id":"<urn:uuid:8f12564a-9065-44b9-a628-d38188f1a8a1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
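The arithmetic in this savings problem can be double-checked with exact rationals:

```python
from fractions import Fraction

saved, price, weeks = 40, 129, 16      # 4 months ≈ 16 weeks
x = Fraction(price - saved, weeks)     # weekly saving needed
assert saved + weeks * x == price      # 40 + 16x = 129 holds exactly
print(float(x))                        # 5.5625, i.e. about $5.57 per week
```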
HKUST Institutional Repository: Item 1783.1/587
Please use this identifier to cite or link to this item: http://hdl.handle.net/1783.1/587
Title: Dynamics of transcendental entire functions
Authors: Wang, Xiaoling
Issue Date: 2003
Abstract: Let f(z) denote a transcendental entire function, and let F(f) and J(f) be the Fatou set and Julia set of f respectively. In this thesis, we shall mainly investigate the dynamical
properties of transcendental entire functions. In complex dynamics there are some basic and important problems: the first (and classical) one is to study the behavior of points in C under
iteration of f(z); another is to study the analytic and geometric properties of the sets F(f) and J(f). Two of the prevailing research topics in complex dynamics are: (i) The dynamics of two
permutable transcendental entire functions. Julia proved in the 1920s that if f and g are two permutable rational functions then J(f) = J(g). I.N. Baker (1960s) extended this to a certain
class of transcendental entire functions. (ii) Geometric properties of Julia sets and Fatou sets; that is, are there buried points or buried components in Julia sets? Are all the Fatou
components bounded? In the first chapter, we give a brief introduction to Nevanlinna's value distribution theory (the main tool in our investigations), factorization of meromorphic
functions, minimum modulus, maximum modulus, Poincare metric theory, classical function theory, the definitions from the dynamical theory of transcendental entire functions, and
some key lemmas that will be used throughout this thesis. Then in Chapter 2, we recall four different kinds of sets on which f^n(z) (the nth iterate of f) goes to ∞ in four different
ways, and these sets have close relationships with the Julia set J(f) and Fatou set F(f). We will show some important dynamical properties by studying these four sets, and we get the
relationships between the above four sets and the Fatou components of f. In Chapter 3, as an extension of Baker’s result, we will show that if f and g are two permutable transcendental
entire functions with q(g) = aq(f) + b for some nonconstant polynomial q(z) and two numbers a (≠ 0) and b, then the above four sets, Julia sets and Fatou sets of f and g
correspondingly are the same. In Chapter 4, we study the boundedness, connectivity and boundary of a Fatou component. It is well-known that if F(f) contains multiply connected components,
then all components are bounded. Furthermore, in Chapter 5 we prove that under some weak conditions, F(f) contains only bounded components. These subjects have attracted much interests
among complex analysts and many related results were obtained earlier by Anderson, I.N. Baker, A. Hinkkanen, X.H. Hua, G.S. stallard, Y.F. Wang, C.C. Yang, J.H. Zheng and etc. Many
results as well as some conjectures of this thesis have been published in, for example, Indian J. Pure Appl. Math., J. Math. Anal. Appl. and Inter. J. Bifur. Chaos.
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2003
Description: viii, 105 leaves ; 30 cm
HKUST Call Number: Thesis MATH 2003 Wang
URI: http://hdl.handle.net/1783.1/587
Appears in MATH Doctoral Theses
All items in this Repository are protected by copyright, with all rights reserved. | {"url":"http://repository.ust.hk/dspace/handle/1783.1/587","timestamp":"2014-04-17T12:35:16Z","content_type":null,"content_length":"19868","record_id":"<urn:uuid:709fec75-b9ee-4585-b2ae-3d518e82813f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
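The "behavior of points under iteration" question in the abstract can be explored numerically. Below is a minimal escape-time sketch (an illustration, not code from the thesis) for the exponential family f(z) = c·e^z, a standard transcendental entire example; points whose orbits blow up belong to the escaping set, which the thesis relates to J(f):

```python
import cmath

def escape_time(z, c=1.0, max_iter=60, bailout=1e8):
    # Iterate f(z) = c*exp(z); report the first n with |f^n(z)| > bailout
    # (the orbit is heading to infinity), or None if it stays bounded.
    for n in range(1, max_iter + 1):
        if z.real > 700.0:        # exp would overflow: certainly escaping
            return n
        z = c * cmath.exp(z)
        if abs(z) > bailout:
            return n
    return None

# a real start far to the right escapes almost immediately ...
assert escape_time(5 + 0j) == 2
# ... while for c = 0.3 < 1/e the real orbit of -1 converges to a fixed point
assert escape_time(-1 + 0j, c=0.3) is None
```

Sampling `escape_time` over a grid of starting points gives the usual picture of the intricate boundary between bounded and escaping orbits for exponential maps.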
infinite solutions
March 13th 2011, 10:37 PM #1
Nov 2010
infinite solutions
Hello, we have been solving linear equations in three variables and one of them has the final two equations as:
which means there is an infinite number of solutions possible.
My question is, is it acceptable to simply write "infinite number of solutions possible" in an exam, or should I express that some other way ?
Thanks for any help.
Yes. You can add that this is because the two equations are identical (what happens when you multiply both sides of the first equation by -1?)
If you wanted to really impress (and/or shock) your teacher, you could find a general expression for all of those "infinite" number of solutions. Of course, that would depend upon what the first
equation was.
If the problem was, say, x + y + z = 1, -x - 3z = -1, x + 3z = 1, you could rewrite the last equation as x = 1 - 3z and put that into the first equation to get 1 - 3z + y + z = 1 + y - 2z = 1, so that y = 2z. Now you
can write that (x, y, z) = (1 - 3z, 2z, z) is a solution for any number z
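HallsofIvy's parametric family can be sanity-checked numerically; the sketch below (the helper name is mine, not from the thread) plugs a few values of the free parameter z into all three example equations:

```python
# Verify that (x, y, z) = (1 - 3z, 2z, z) satisfies
# x + y + z = 1, -x - 3z = -1, x + 3z = 1 for any z.
def solution(z):
    return (1 - 3 * z, 2 * z, z)

for z in (0, 1, -2.5):
    x, y, w = solution(z)
    assert x + y + w == 1
    assert -x - 3 * w == -1
    assert x + 3 * w == 1
```

Every choice of z gives a different triple, which is exactly what "an infinite number of solutions" means here.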
Apr 2005 | {"url":"http://mathhelpforum.com/algebra/174520-infinite-solutions.html","timestamp":"2014-04-19T15:07:39Z","content_type":null,"content_length":"36351","record_id":"<urn:uuid:63b9e297-41f0-40db-89f9-9f869f1965da>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
9:50 Puzzle
Move 1 number to make this true.
You have to move the 1 from the instructions to the beginning of the calculation!
1103 + 104 = 1207."
Move two items in this number sentence to make it correct.
Answer 0.5 = 1/2."
The solution to sinx = nx is x = 6, can you work out how?
The answer is to simply divide both sides by n:
six = x
A little maths joke I hope you agree!"
what do you think?"
Move one number
A: 13-2=11."
A. 1+1 =2."
I rode into town on Friday, stayed for 3 days, and rode back out on Friday, how did I do it?"
Move one Digit in this sum to make it correct...
103 - 102 = 3
103 - 10^2 = 3.
Great idea Paul. It has been created as an animation below. Transum"
Answer 545+5=550
Many others similar of course!"
Can you make this statement true
by adding just one small line?
Can you make up a puzzle like this? An equation or identity which is not true but can be made true by adding a line or a point. Your puzzle can be featured on this page. Enter your puzzle here.
Here is the URL which will take them to another lateral thinking puzzle. | {"url":"http://www.transum.org/Software/sw/Starter_of_the_day/Starter_May30.asp","timestamp":"2014-04-21T15:45:10Z","content_type":null,"content_length":"18731","record_id":"<urn:uuid:c87b3980-42a9-4d8f-b377-9022e1068102>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
SAS-L archives -- October 2009, week 5 (#54) LISTSERV at the University of Georgia
Date: Thu, 29 Oct 2009 06:55:06 -0700
Reply-To: bigD <diaphanosoma@GMAIL.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: bigD <diaphanosoma@GMAIL.COM>
Organization: http://groups.google.com
Subject: Re: calculating number of people present given their arrival and
departure time
Comments: To: sas-l@uga.edu
Content-Type: text/plain; charset=ISO-8859-1
On Oct 28, 8:32 pm, "Lou" <lpog...@hotmail.com> wrote:
> "bigD" <diaphanos...@gmail.com> wrote in message
> news:08fa14e0-5632-4600-a970-2b9df177d5d6@j9g2000vbp.googlegroups.com...
> > Hi,
> > I have a data set that contains the time a person entered the
> > emergency room and the time they left.
> > I would like to know how many people were in the emergency department
> > for each hour of the day on a weekly basis.
> The problem statement could be a little stronger. For instance, just for
> purposes of illustration, let's suppose a day is two hours long - 120
> minutes. How many hourly intervals are you concerned with? Two, starting
> from time 0 and ending at time 120? Or maybe it's 60, the first interval
> starting at time 0 and ending at time 60, the second starting at time 1 and
> ending at time 61, etc.? Or maybe it's only 1, starting at time 30 and
> ending at time 90?.
> I ask because recently I had to calculate the average number of episodes a
> patient had per 28 days over a 84 day interval, and there are 56 periods of
> 28 days in an 84 day interval.
> Whatever the answers to the above, and whatever temporal resolution you need
> (hourly intervals by the hour, minute, or second) I probably wouldn't bother
> with arrays. Instead, I'd generate a record for each interval between the
> the time a person entered the emergency roon and the time s/he left (one
> record an hour, or one record a minute, or one record a second, or whatever)
> and then count the total number of records for each timepoint using PROC
> MEANS.
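Lou's record-per-interval idea is easy to prototype outside SAS; a hedged Python sketch (the function name and the hour-level resolution are my choices, not from the thread) expands each stay into the hours it covers and then counts records per hour, playing the role of the PROC MEANS step:

```python
# Python sketch of the record-per-interval approach (the thread itself
# uses SAS DATA steps and PROC MEANS).
from collections import Counter

def hourly_census(stays):
    """stays: (arrival_hour, departure_hour) pairs, departure exclusive."""
    census = Counter()
    for arrive, depart in stays:
        for hour in range(arrive, depart):  # one "record" per occupied hour
            census[hour] += 1               # the counting step
    return census

census = hourly_census([(0, 3), (1, 2), (2, 5)])
# census[1] == 2: two patients were present during hour 1
```

The data set this generates is one row per patient-hour, which is exactly why bigD worries about size below: for long stays it grows quickly compared with an array-per-hour layout.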
Its always hard to describe one's problem on these lists because after
working on the problem for a while, you just assume everyone knows
what you are thinking. I also try to make the post as general as
possible so that more people will read the post.
I'm only interested in getting an hourly "census" of the ED
department. The problem with not using arrays in this case is that the
data set would become fairly large. Also I think its more intuitive
to see the data across time (columns) in this case, but that's just | {"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0910e&L=sas-l&F=&S=&P=6901","timestamp":"2014-04-20T18:25:10Z","content_type":null,"content_length":"11294","record_id":"<urn:uuid:9e52c682-e8af-47c0-8706-8420fc33e8a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
What operation is necessary to solve the equation that represents the statement: one point eight is what number times 9? Explain your answer.
The question translates to 9x = 1.8. Since we solve for x, we can isolate it by dividing both sides of the equation by 9, so division is the operation that will get the answer.
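The division can be double-checked with exact rational arithmetic (a throwaway sketch, not part of the original answer):

```python
# Solve 9x = 1.8 by dividing both sides by 9, using exact fractions
# so no floating-point rounding gets in the way.
from fractions import Fraction

x = Fraction("1.8") / 9
# x == Fraction(1, 5), i.e. 0.2
```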
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4e2d96820b8b3d38d3baba60","timestamp":"2014-04-18T08:33:36Z","content_type":null,"content_length":"30254","record_id":"<urn:uuid:bf36e9d3-ed8d-4bc8-ba96-5e2b040a59ad>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the topological structure of a finitely generated semigroup of matrices, Semigroup Forum 37
, 2002
Cited by 72 (20 self)
We define general algebraic frameworks for shortest-distance problems based on the structure of semirings. We give a generic algorithm for finding single-source shortest distances in a weighted
directed graph when the weights satisfy the conditions of our general semiring framework. The same algorithm can be used to solve efficiently classical shortest paths problems or to find the
k-shortest distances in a directed graph. It can be used to solve single-source shortest-distance problems in weighted directed acyclic graphs over any semiring. We examine several semirings and
describe some specific instances of our generic algorithms to illustrate their use and compare them with existing methods and algorithms. The proof of the soundness of all algorithms is given in
detail, including their pseudocode and a full analysis of their running time complexity.
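The generic framework described in the abstract above can be illustrated with a simplified queue-based relaxation loop (this is my sketch, not the paper's actual generic algorithm, and it assumes the iteration terminates, e.g. an idempotent semiring such as (min, +) with no negative cycles); passing in different semiring operations reuses the same loop:

```python
def shortest_distance(graph, source, plus, times, zero, one):
    """Single-source distances over a semiring (plus, times, zero, one).

    graph: {node: [(neighbor, weight), ...]} with every node as a key.
    """
    dist = {u: zero for u in graph}
    dist[source] = one
    queue = [source]
    while queue:
        u = queue.pop(0)
        for v, w in graph[u]:
            candidate = plus(dist[v], times(dist[u], w))
            if candidate != dist[v]:
                dist[v] = candidate
                queue.append(v)
    return dist

# Tropical semiring (min, +): ordinary shortest paths.
graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
dist = shortest_distance(graph, "s", min, lambda x, y: x + y, float("inf"), 0)
# dist["b"] == 3 via s -> a -> b
```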
, 1994
Cited by 32 (0 self)
The tropical semiring M consists of the set of natural numbers extended with infinity, equipped with the operations of taking minimums (as semiring addition) and addition (as semiring
multiplication). We use factorization forests to prove finiteness results related to semigroups of matrices over M. Our method is used to recover results of Hashiguchi, Leung and the author in a
unified combinatorial framework.
Cited by 24 (0 self)
this paper is to present other semirings that occur in theoretical computer science. These semirings were baptized tropical semirings by Dominique Perrin in honour of the pioneering work of our
brazilian colleague and friend Imre Simon, but are also commonly known as (min; +)-semirings
, 1996
Cited by 11 (2 self)
We show that the answer to the Burnside problem is positive for semigroups of matrices with entries in the (max,+)-algebra (that is, the semiring (R ∪ {−∞}, max, +)), and also for semigroups of (max,+)-linear projective maps with rational entries. An application to the estimation of the Lyapunov exponent of certain products of random matrices is also discussed. 1. Introduction The "(max,+)-algebra" is a traditional name for the semiring (R ∪ {−∞}, max, +), denoted Rmax in the sequel. This is a particular example of an idempotent semiring (that is, a semiring whose additive law satisfies a ⊕ a = a), also known as a dioid [17, 18, 2]. This algebraic structure has been popularized by its applications to Graph Theory and Operations Research [17, 8]. Linear operators in this algebra are central in Hamilton-Jacobi theory and in the study of exponential asymptotics [33]. The study of automata and semigroups of matrices over the analogous "tropical" semiring (N ∪ {+∞}, min, +) has been ...
, 2002
Cited by 7 (0 self)
Hashiguchi has studied the limitedness problem of distance automata (DA) in a series of papers ([3], [6] and [7]). The distance of a DA can be limited or unbounded. Given that the distance of a DA is limited, Hashiguchi has proved in [7] that the distance of the automaton is bounded by 2^(4n^3 + n lg(n+2) + n), where n is the number of states. In this paper, we study again Hashiguchi's solution to the limitedness problem. We have made a number of simplifications and improvements on Hashiguchi's method. We are able to improve the upper bound to...
- In Proceedings Mathematical Foundations of Computer Science , 1998
Cited by 4 (1 self)
We develop a new algorithm for determining if a given nondeterministic finite automaton is limited in nondeterminism. From this, we show that the number of nondeterministic moves of a finite
automaton, if limited, is bounded by 2 2 where n is the number of states. If the finite automaton is over a one-letter alphabet, using Gohon's result the number of nondeterministic moves, if limited,
is less than n . In both cases, we present families of finite automata demonstrating that the upper bounds obtained are almost tight. We also show that the limitedness problem of the number of
nondeterministic moves of finite automata is PSPACE-hard. Since the problem is already known to be in PSPACE, it is therefore PSPACE-complete. 1
, 1994
Cited by 2 (1 self)
We show that the answer to the Burnside problem is positive for semigroups of matrices with entries in the (max,+)-algebra (that is, the semiring (R ∪ {−∞}, max, +)), and also for semigroups of (max,+)-linear projective maps with rational entries. An application to the estimation of the Lyapunov exponent of certain products of random matrices is also discussed.
ABSTRACT We define general algebraic frameworks for shortest-distance problems based on the structure of semirings. We give a generic algorithm for finding single-source shortest distances in a
weighted directed graph when the weights satisfy the conditions of our general semiring framework. The same algorithm can be used to solve efficiently classical shortest paths problems or to find the
k-shortest distances in a directed graph. It can be used to solve single-source shortest-distance problems in weighted directed acyclic graphs over any semiring. We examine several semirings and
describe some specific instances of our generic algorithms to illustrate their use and compare them with existing methods and algorithms. The proof of the soundness of all algorithms is given in
detail, including their pseudocode and a full analysis of their running time complexity.
Summary. One of the challenges of computer science is to manipulate objects from an infinite set using finitary means. All data processing problems have an infinite number of potential input data.
All but the simplest specifications of computer systems talk about an infinite set of possible behaviors, be it, for example, as input/output relation or as infinite sequences of possible actions. Of
course mathematics is well accustomed to deal with infinite sets. But it is computer science that brings a completely new dimension to the picture, namely that of effectiveness. One of the central
concepts that have emerged from computer science in response to this challenge is that of recognizability, whose combination with logic and automata has proved incredibly fruitful. Both logic and
automata theory have then seen their areas of applications extend far beyond what could be imagined at their creation. One can for example refer to an essay “On the Unusual Effectiveness of Logic in
Computer Science ” [HHI01] whose title appropriately summarizes this phenomenon and draws a comparison with the role of mathematics in physics. The theory of automata and recognizability has
developed in two main directions: as an ever more sophisticated and efficient tool to handle finite, sequential and discrete behaviors (languages of finite words); and through a number of extensions
of the theory aiming at the analysis of more complex, | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1451326","timestamp":"2014-04-21T06:14:44Z","content_type":null,"content_length":"32768","record_id":"<urn:uuid:42f54de8-6251-45fc-a4b5-76ada9948166>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cited by 51 (35 self)
gperf is a "software-tool generating-tool" designed to automate the generation of perfect hash functions. This paper describes the features, algorithms, and object-oriented design and implementation strategies incorporated in gperf. It also presents the results from an empirical comparison between gperf-generated recognizers and other popular techniques for reserved word lookup. gperf is distributed with the GNU libg++ library and is used to generate the keyword recognizers for the GNU C and GNU C++ compilers. 1 Introduction Perfect hash functions are a time and space efficient implementation of static search sets, which are ADTs with operations like initialize, insert, and retrieve. Static search sets are common in system software applications. Typical static search sets include compiler and interpreter reserved words, assembler instruction mnemonics, and shell interpreter builtin commands. Search set elements are called keywords. Keywords are inserted into the set once, usually at c...
, 2002
... this paper is to precisely analyse the behaviour of one such extremely simple heuristic which is known to give modest compression in practice. For the heuristic we prove that the expected asymptotic space requirement is, at worst, a(k)n + b(k)x and that although its dependency on n is inherent, it can be made arbitrarily small. Here k is a parameter and a(k) and b(k) are, respectively, monotonically decreasing and increasing functions. Thus k allows a trade-off between dependency on n and x; for example, pairs (a(k), b(k)) can be (0.1, 3.26), (0.03, 5.57) and (6×10⁻⁴, 33). We also show that for some applications the dependency of the space requirement on n can be made sublinear. The heuristic allows constant time access to any element. Our analyses are over two different models for the uniform probability distribution and we derive exact formulae for the expected space used. We prove that the heuristic gives the same asymptotic performance in both models
Ratio test question
February 16th 2010, 12:21 AM #1
Feb 2009
Ratio test question
hi everyone
im stuck with this question, need help to clarify..
Determine this series is absolutely convergent or diverges by using ratio test.
$\sum^\infty_{n=1} \frac{(-1)^n e^\frac{1}{n}}{n^3}$
i manage to get this
$\frac{\frac{(-1)^{n+1} e^\frac{1}{n+1}}{(n+1)^3}}{\frac{(-1)^n e^\frac{1}{n}}{n^3}}$
how do i complete this working, out of ideas.need help.
really appreciate all your help & guidance, thank you in advance for all your help & support.
hi everyone
im stuck with this question, need help to clarify..
Determine this series is absolutely convergent or diverges by using ratio test.
$\sum^\infty_{n=1} \frac{(-1)^n e^\frac{1}{n}}{n^3}$
i manage to get this
$\frac{\frac{(-1)^{n+1} e^\frac{1}{n+1}}{(n+1)^3}}{\frac{(-1)^n e^\frac{1}{n}}{n^3}}$
how do i complete this working, out of ideas.need help.
really appreciate all your help & guidance, thank you in advance for all your help & support.
You actually need to find $\lim_{n \to \infty}\left|\frac{a_{n + 1}}{a_n}\right|$
$= \lim_{n \to \infty}\left|\frac{\frac{(-1)^{n + 1}e^{\frac{1}{n + 1}}}{(n + 1)^3}}{\frac{(-1)^n e^\frac{1}{n}}{n^3}}\right|$
$= \lim_{n \to \infty}\frac{\frac{e^{\frac{1}{n + 1}}}{(n + 1)^3}}{\frac{e^{\frac{1}{n}}}{n^3}}$
$= \lim_{n \to \infty}\frac{n^3e^{\frac{1}{n + 1}}}{(n + 1)^3e^{\frac{1}{n}}}$
$= \lim_{n \to \infty}\frac{n^3e^{\frac{1}{n + 1} - \frac{1}{n}}}{(n + 1)^3}$
$= \lim_{n \to \infty}\frac{n^3e^{-\frac{1}{n(n + 1)}}}{(n + 1)^3}$
$= \lim_{n \to \infty}\frac{n^3}{(n + 1)^3e^{\frac{1}{n(n + 1)}}}$
Now you might want to try using L'Hospital's Rule to simplify...
thank you for guiding. got no idea how to use L Hospital rule to simplify this question, can some one help me.
thank you for all your help & guidance.
There is no need to use L'Hopital's rule. Separate it into two parts:
$= \frac{n^3}{(n + 1)^3}e^{-\frac{1}{n(n + 1)}}$
Now, $\lim_{n\to\infty}\left(\frac{n}{n+1}\right)^3$ should be easy. As n goes to infinity, $\frac{1}{n(n+1)}$ goes to 0. I think you have a problem applying the ratio test here!
thank you for replying.
$\lim_{n\to\infty}\left(\frac{n}{n+1}\right)^3 = 1$
is that correct? then how do i simplify the other limit , $e^{-\frac{1}{n(n + 1)}}$
need some guide & help, thank you in advance for all help & support.
There is no need to use L'Hopital's rule. Separate it into two parts:
$= \frac{n^3}{(n + 1)^3}e^{-\frac{1}{n(n + 1)}}$
Now, $\lim_{n\to\infty}\left(\frac{n}{n+1}\right)^3$ should be easy. As n goes to infinity, $\frac{1}{n(n+1)}$ goes to 0. I think you have a problem applying the ratio test here!
Wouldn't that mean that $e^{-\frac{1}{n(n + 1)}}$ tends to $e^0 = 1$?
So really, all you have to worry about is the $\left(\frac{n}{n + 1}\right)^3 = \left(1 - \frac{1}{n + 1}\right)^3$ which tends to $1$.
Ohhh, so it all tends to $1$ - so I agree with you that the ratio test fails...
Why not try LAST instead?
did you mean
$e^{-\frac{\frac{1}{n^2}}{\frac{n^2}{n^2}+\frac{n}{n^2}}}$
$e^0$ = 1 ?
thank you for confirming, sorry to trouble everyone...
thank you in advance for all your help.
Surely if $-\frac{1}{n(n + 1)}$ tends to $0$ then $e^{-\frac{1}{n(n + 1)}}$ tends to $e^0 = 1$...
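A side note not from the thread: since the ratio limit is $1$, the ratio test really is inconclusive here, but absolute convergence still follows by a direct comparison, because $e^{\frac{1}{n}} \le e$ for all $n \ge 1$:

$\sum^\infty_{n=1} \left|\frac{(-1)^n e^{\frac{1}{n}}}{n^3}\right| = \sum^\infty_{n=1} \frac{e^{\frac{1}{n}}}{n^3} \le e\sum^\infty_{n=1} \frac{1}{n^3}$

and the last sum is a convergent $p$-series with $p = 3 > 1$, so the original series is absolutely convergent.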
February 16th 2010, 01:41 AM #8 | {"url":"http://mathhelpforum.com/calculus/129045-ratio-test-question.html","timestamp":"2014-04-16T04:39:42Z","content_type":null,"content_length":"61893","record_id":"<urn:uuid:285b7cfc-6607-4130-8283-abcc779ba084>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
While completing Eigen2 AltiVec support (should be almost complete now), I noticed that the 32-bit integer multiplication didn't work correctly all of the time. As AltiVec does not really include any
instruction to do 32-bit integer multiplication, I used Apple's routine from the Apple Developer's site. But this didn't work and some results were totally off. With some debugging, I found out that
this routine works for unsigned 32-bit integers, where Eigen2 uses signed integers! So, I had to search more, and to my surprise, I found no reference of any similar work. So I had 2 choices: a)
ditch AltiVec integer vectorisation from Eigen2 (not acceptable!) b) implement my own method! It is obvious which choice I followed :)
UPDATE: Thanks to Matt Sealey, who noticed I could have used vec_abs() instead of vec_sub() and vec_max(). Duh! :D | {"url":"http://freevec.org/","timestamp":"2014-04-19T09:23:33Z","content_type":null,"content_length":"30953","record_id":"<urn:uuid:598194c8-d689-4f70-a682-7d3cdc0c0019>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
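As a scalar illustration of what the vector routine has to achieve (this is my sketch, not Apple's routine or the Eigen2 fix): a 32-bit low-half product assembled from the 16×16 partial products that SIMD ISAs like AltiVec do provide, which — thanks to two's complement — also yields the right low 32 bits for signed operands:

```python
# Assemble a 32-bit product from 16-bit halves:
# (ah*2^16 + al) * (bh*2^16 + bl) mod 2^32
MASK32 = 0xFFFFFFFF

def mul32(a, b):
    al, ah = a & 0xFFFF, (a >> 16) & 0xFFFF
    bl, bh = b & 0xFFFF, (b >> 16) & 0xFFFF
    # ah*bh only affects bits >= 32, so it is dropped entirely.
    return (al * bl + ((ah * bl + al * bh) << 16)) & MASK32

def to_s32(u):
    # Reinterpret the low 32 bits as a two's-complement signed value.
    return u - (1 << 32) if u & 0x80000000 else u

# to_s32(mul32(0xFFFFFFFC, 3)) == -12, i.e. (-4) * 3: modulo 2**32 the
# low 32 bits of the product do not depend on signedness.
```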
On 10/04/12 at 22:54, Christopher Kormanyos wrote:
>>> The recent discussion on the MultiplePrecision Arithmetic library has shown that some people have their own fixed-point library.
>>> Is there an interest in a Boost library having as base
>>> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html?
>> Vicente
> Yes! Absolutely interested!
> I use fixed-point extensively with hard real-time microcontroller systems.
> I am particularly interested in stuff like a 7.8 and 15.16 signed split.
What does 7.8 stand for? 7 bits for the integer part and 8 for the
fractional part?
I guess you prefer a specific class for signed fixed_points and not a
template parameter.
fp<int16_t, 7,8>
Do you mind if the library propose some meta-function to specify
different formats, e.g.
format_i_f<7,8>::type and format_i_f<15,16>::type
An alternative design is to have a format parameter, so for example the
user could use specific formats
fp<i_f<7,8>> a;
fp<r_r<7,-8>> b;
where i_f has as parameters the number of integral and fractional bits
and r_r has as parameters the range and resolution. Currently my
prototype and the C++ proposal uses this r_r format. Others use the
total number of bits (width including the sign) and the fractional w_f
fp<w_f<16,8>> b;
The advantage of the meta-function is that the library is open to
unknown formats. The liability is the need for ::type (not really an issue with
template aliases)
The liability of the format template parameter is that while
fp<i_f<7,8>> and fp<r_r<7,-8>> are equivalent they are not the same type
and again ::type would be needed.
My preference is of course the implicit r_r format and the use of
meta-functions for the other formats.
> I always use signed fixed-point and always use a split right
> down the middle of the representation. It keeps the math simple
> and fast.
Do you mind if the rounding strategy is a template parameter? An
enumeration or an open policy?
Do you have any preferences for the namespace and class names?
> Good luck with this project. Looking forward to it.
> I will look into the prelim code later.
Any comment is welcome.
Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | {"url":"http://lists.boost.org/Archives/boost/2012/04/192139.php","timestamp":"2014-04-17T21:23:56Z","content_type":null,"content_length":"13971","record_id":"<urn:uuid:3a441c1b-cb0c-47a1-831d-b2ee63636514>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
ventonegro dcoutts: nevermind, solved :-)
sioraiocht (-4) :: Word8
> (-4) :: Word8
lambdabot 252
dons dpiponi: nope, but if you find out something, do let us know.
dpiponi Will do. I'd like to pott the code for the Make Controller. Should be almost trivial judging by the C code it generates.
syntaxfree it's a wonderful world where people like me can be in the same chatroom as people like sigfpe.
but hey, is there a decimal float type in Haskell?
ddarius There is a library referenced on haskell.org
syntaxfree cool.
sioraiocht @type liftIO . char
lambdabot Couldn't match expected type `IO a' against inferred type `Doc'
In the second argument of `(.)', namely `char'
sioraiocht @type liftIO . getChar
lambdabot Couldn't match expected type `a -> IO a1'
against inferred type `IO Char'
johanatan @type
lambdabot [1 of 2] Compiling ShowQ ( scripts/ShowQ.hs, interpreted )
[2 of 2] Compiling L ( L.hs, interpreted )
johanatan @type map (\((n,t):xs) -> (n:map fst xs,t)) . groupBy (\x y -> snd x == snd y)
lambdabot forall a b. (Eq b) => [(a, b)] -> [([a], b)]
hmm.. i'm getting 'parse error on input '\' for the statement above
johanatan i'm assuming it's the first '\'
@type map (\((n,t):xs) -> (n:map fst xs,t)) . groupBy (\x y -> snd x == snd y)
lambdabot forall a b. (Eq b) => [(a, b)] -> [([a], b)]
dolio Is that the whole expression?
johanatan @type map (\(([n],t):xs) -> (n:map fst xs,t)) . groupBy (\x y -> snd x == snd y)
lambdabot Occurs check: cannot construct the infinite type: a = [a]
Expected type: [(a, b)]
johanatan yea
dolio Maybe you can paste some more context.
lambdabot Haskell pastebin: http://hpaste.org/new
johanatan well, that's it... someone gave that earlier.. but, i can paste what i'm trying to do with that statement
dolio It's possible that could help. I don't see anything wrong with it in itself.
Obviously lambdabot doesn't either. :)
johanatan true, but ghci does... i'll try it in an empty file by itself
LoganCapaldo could you be having layout issues?
oh not in ghci I guess | {"url":"http://ircarchive.info/haskell/2007/5/5/1.html","timestamp":"2014-04-20T18:56:07Z","content_type":null,"content_length":"7151","record_id":"<urn:uuid:0f8be9e1-c327-44a4-8b6b-25765d26fd59>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
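The pointfree pipeline johanatan was testing — group adjacent pairs with equal second components, then collect the first components — has a direct analogue with itertools.groupby (a sketch of that one expression only, not of his surrounding code):

```python
# Python analogue of:
#   map (\((n,t):xs) -> (n:map fst xs,t)) . groupBy (\x y -> snd x == snd y)
from itertools import groupby

def collapse(pairs):
    """[(n, t)] -> [([n, ...], t)], grouping runs of equal t."""
    return [([n for n, _ in grp], t)
            for t, grp in groupby(pairs, key=lambda p: p[1])]

collapse([("a", 1), ("b", 1), ("c", 2)])
# -> [(["a", "b"], 1), (["c"], 2)]
```

Like Haskell's groupBy, itertools.groupby only merges adjacent runs, so the input must already be sorted (or ordered) by the grouping key.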
Steerable filters example
2nd derivative of Gaussian
Example of steerable filters. Top line: three rotations of the second derivative of a Gaussian. That filter has three (complex) frequencies in polar angle, so a linear combination of three copies of
the filter are sufficient to synthesize all rotations of the filter (see references). Middle: Zone plate test image. Bottom: By the linearity of convolution, the output, filtered to any orientation,
can be synthesized as a lineaer combination of the outputs of the basis filters.
Synthesized filter and output
Rotated version of 2nd derivative of Gaussian, obtained as a linear combination of the basis filters above. The output of the zone plate to that filter was obtained as the same linear combination of
the outputs to the basis filters.
Architecture for applying steerable filters
Steerable quadrature pair
A steerable quadrature pair allows for continuous control of the filter's phase and orientation, useful for contour analysis and enhancement. Seven basis filters span the space of all orientations
and phases of this filter.
W. T. Freeman and E. H. Adelson, The design and use of steerable filters, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891 - 906, September, 1991. MIT Vision and
Modeling Group TR 126.
G. H. Granlund and H. Knutsson, Signal processing for computer vision, Kluwer Academic Publishers, 1995.
P. Perona, Deformable kernels for early vision, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 17, no. 5, pp. 488-499, May, 1995.
E. P. Simoncelli and H. Farid, Steerable wedge filters for local orientation analysis, IEEE Trans. Image Processing, vol. 5, no. 9, pp. 1377-1382, 1996.
E. P. Simoncelli and W. T. Freeman, The steerable pyramid: a flexible architecture for multi-scale derivative computation, 2nd Annual IEEE Intl. Conference on Image Processing, Washington, DC.
October, 1995. MERL-TR95-15.
P. C. Teo and Y. Hel-Or, A Computational Group-Theoretic Approach to Steerable Functions, STAN-CS-TN-96-33, Department of Computer Science, Stanford University, April 1996.
J. W. Zweck and L. R. Williams Euclidean group invariant computation of stochastic completion fields using shiftable-twistable functions, December, 1999. | {"url":"http://people.csail.mit.edu/billf/steer.html","timestamp":"2014-04-20T08:16:21Z","content_type":null,"content_length":"8340","record_id":"<urn:uuid:0368e72d-b342-4a4e-aa68-227eec44dd39>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
A counter example in obstruction theory
Let $K$ denote a simplicial complex and $Y$ a path-connected topological space. Let us also denote by $K^n$ the $n$-skeleton of $K$. I would like to have an example for the following situation or a
proof of its impossibility:
A map $f^1:K^1\to Y$ that can be extended to $f^2:K^2\to Y$ and yet no such extension can be further extended to $f^3:K^3\to Y$.
The idea is that there is an obstruction to the existence of $f^3$ already on the one-dimensional level, but not by obstructing the existence of $f^2$. It is written in Hilton and Wylie's book that a
bit more general phenomenon of this type is possible:
There is a complex $K$, a subcomplex $L\subseteq K$ and a map $f^0:L\cup K^0\to Y$, such that there is an extension $f^1:L\cup K^1\to Y$ which has an extension over $L\cup K^2$, but not over $L\cup K^3$, while $f^0$ has an extension over $L\cup K^3$.
In words, when trying to extend a given map $f:L\to Y$ over $K$, inductively through $f^n:L\cup K^n$, it is possible to get stuck with an $f^2$ that not only does not have an extension over $L\cup K^3$, it is even impossible to fix it by revising the last step, and yet by revising the last two steps it is possible to extend the chosen $f^0$ over $K^3$. I could not find an example for this either so it would also be appreciated.
at.algebraic-topology examples
Take K the 3-ball and $Y=S^2$ and $f$ identity on the boundary. – Misha Apr 4 '12 at 12:02
2 @Misha: If you read the question more carefully you will see that your example is not an example. – Neil Strickland Apr 4 '12 at 12:16
1 Answer
Here is an example with CW complexes rather than simplicial complexes. I doubt that there is an important difference, although the simplicial case will require more bookkeeping.
Take $K=\mathbb{R}P^3$ and $Y=\mathbb{R}P^2$. We can give $K$ a CW structure with skeleta $\mathbb{R}P^k$ for $0\leq k\leq 3$. Let $f^1:\mathbb{R}P^1\to Y$ be the evident inclusion.
Clearly this extends over $K^2$. Now suppose we have an extension $f^3:K^3=K\to Y$ of $f^1$. This will then give a graded ring homomorphism $(f^3)^*:H^*(Y;\mathbb{Z}/2)\to H^*(K;\mathbb{Z}/2)$, or in other words $(f^3)^*:(\mathbb{Z}/2)[y]/y^3\to (\mathbb{Z}/2)[x]/x^4$. Because $f^3$ extends $f^1$ we must have $(f^3)^*(y)=x$. This gives a contradiction because $y^3=0$ but $x^3\neq 0$.
I don't see how the last line gives a contradiction. You have a non-zero element in $H^3(K)$, which is mapped to zero in $H^3(Y)$, what's wrong with that? – KotelKanim Apr 4 '12 at
3 Cohomology is contravariant. – Neil Strickland Apr 4 '12 at 13:05
Oops... that was silly. Thanks for the answer. – KotelKanim Apr 4 '12 at 13:22
add comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology examples or ask your own question. | {"url":"http://mathoverflow.net/questions/93098/a-counter-example-in-obstruction-theory","timestamp":"2014-04-19T02:10:26Z","content_type":null,"content_length":"57267","record_id":"<urn:uuid:0e645c8b-c9bb-4a3b-b066-178e895bdb8a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the coarse moduli space of a stack
Consider a stack $\mathcal{X}$ over $\mathbb{C}$ as a category fibred in groupoids over the category of schemes. Let $\mathcal{X}^s$ be the $\pi_0$ of this category, i.e. objects of $\mathcal{X}^s$
are the objects in $\mathcal{X}$ and morphisms of $\mathcal{X}^s$ are the morphisms in $\mathcal{X}$ modulo automorphisms of objects. It "kills" the groupoid structure, so I think it is possible to
consider $\mathcal{X}^s$ as a category fibred in sets over the category of schemes. Assume $\mathcal{X}^s$ is represented by a scheme. Should it be the coarse moduli space for $\mathcal{X}$?
stacks ag.algebraic-geometry coarse-moduli-spaces
2 This should be relevant: mathoverflow.net/questions/70520/… – Mattia Talpo Feb 2 '13 at 14:21
1 Answer
Yes, this would imply that $\newcommand{\X}{\mathcal X}\X^s$ is the coarse moduli space, but I don't think this is the "right" question to ask -- I believe that $\X^s$ will not even form a
sheaf unless $\X$ happens to be a scheme/algebraic space to begin with.
Anyway, any morphism from a groupoid to a set factors through $\pi_0$ of the groupoid. This implies in particular that any morphism from $\X$ to an algebraic space factors through the
presheaf $\X^s$. And the map $\X \to \X^s$ is a bijection on geometric points because it's in fact a bijection on $S$-points for any scheme $S$. So if $\X^s$ is a scheme/algebraic space
then it is the coarse moduli space.
Addendum. I think you are confused about some basic issues. Let us see why $BG^s$ is not the coarse moduli space of $BG$. Let $G$ be a nontrivial finite group, say.
Consider for simplicity the topological setting, so we have a topological space $X$ and an open cover $\{U_i\}$. If we have a $G$-torsor on $X$ then we can restrict to a $G$-torsor on each
$U_i$, and on each overlap $U_i \cap U_j$ we have isomorphisms between the restrictions from $U_i$ and from $U_j$. These isomorphisms satisfy cocycle relation. Conversely, if we have
$G$-torsors on each $U_i$ and isomorphisms satisfying the cocycle relation, we can reconstruct a $G$-torsor on the whole of $X$, unique up to canonical isomorphism. What this paragraph
says is exactly that the functor $BG$ which sends a space to the groupoid of $G$-torsors over it is a sheaf of groupoids, that is, a stack. (In the usual Grothendieck topology on the
category of topological spaces, where open covers are, well, open covers. And when I call $BG$ a "functor" I should say "pseudofunctor" or "fibered category".)
On the other hand we can consider $BG^s$, which is now a priori just a presheaf of sets, mapping a space to the set of isomorphism classes of $G$-torsors over it. If we have an isomorphism class of $G$-torsor on $X$ then we get well defined isomorphism classes of $G$-torsors on each $U_i$ with compatible restrictions to each $U_i \cap U_j$. But it is NOT true that if we have
an isomorphism class of $G$-torsor on each $U_i$ which agree on double overlaps, then we can reconstruct a unique isomorphism class on all of $X$: consider the case when $G$ is nontrivial
on $X$ and $\{U_i\}$ is a trivializing cover! What this says is that $BG^s$ is in fact only a presheaf - it is NEVER a sheaf of sets. Put simply, one can not glue together isomorphism classes.
What this shows is in fact that if we sheafify $BG^s$, then we get a point. If we only remember isomorphism classes of torsors then every $G$-torsor becomes equivalent to the trivial
torsor on some open covering of your space, which means that these torsors are identified under sheafification.
The same arguments work verbatim in algebraic geometry, since every $G$-torsor is locally trivial in the étale topology.
In any case, this is why I said above that this is not the "right" question to ask: it is not natural to expect $\X^s$ to be a sheaf in the first place. I would suggest reading Heinloth or
Fantechi's notes on stacks (they are somewhere online) and thinking over just what question it is you want to ask.
And what if $\mathcal{X}$ is the classifying stack BG or, for example, the quotient stack $[\mathbb{A}/\mathbb{Z}_n]$? Isn't $\mathcal{X}^s$ the coarse moduli for $\mathcal{X}$ in these
cases? – Nullstellensatz Feb 2 '13 at 17:10
The short answer is "no" - see the addendum above. – Dan Petersen Feb 3 '13 at 10:58
Thanks for the detailed answer, Dan. So, do I understand correctly that, if the sheafification of $\mathcal{X}^s$ is represented by a scheme, then it is a coarse moduli for $\mathcal{X}$? – Nullstellensatz Feb 3 '13 at 13:50
And is it right that the sheafification of $\mathcal{X}^s$ is a scheme in the case of the quotient stack $[\mathbb{A}^1/\mathbb{Z}_n]$? – Nullstellensatz Feb 3 '13 at 14:00
@Scott: Thanks for the correction. @Nullstellensatz: You are correct that when the sheafification of $\X^s$ is representable, then that will be the coarse moduli space. For $\mathbb A^1$ divided by the $n$th roots of unity you have to be careful, the étale sheafification is not a scheme but the fppf sheafification is (and coincides with the scheme quotient). – Dan Petersen Feb 4 '13 at 8:08
Not the answer you're looking for? Browse other questions tagged stacks ag.algebraic-geometry coarse-moduli-spaces or ask your own question. | {"url":"http://mathoverflow.net/questions/120567/on-the-coarse-moduli-space-of-a-stack","timestamp":"2014-04-16T19:54:50Z","content_type":null,"content_length":"60923","record_id":"<urn:uuid:8fb94a9f-252c-46f4-a492-b2ffa1d3a5e0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mantua, NJ Math Tutor
Find a Mantua, NJ Math Tutor
...Furthermore, I have experience tutoring students taking math classes up to and including calculus. I would be happy to share that knowledge and experience to help someone pass the math section of
the ACT test. I have a bachelor's degree in mathematics.
16 Subjects: including trigonometry, algebra 1, algebra 2, calculus
...I have a Certificate II in Special Education and also in Middle Years English, and am also rated Highly Qualified to teach High School English, Math Science and Social Studies. I am also a
Board Certified Behavior Analyst, and have extensive experience treating children with AD/HD, ODD, Asperge...
31 Subjects: including prealgebra, algebra 1, English, reading
...As a tutor with a primary focus in math and science, I not only tutor algebra frequently, but also encounter this fundamental math subject every day in my professional life. I conduct research
at UPenn and West Chester University on colloidal crystals and hydrodynamic damping. Students I tutor are mostly college-age, but range from middle school to adult.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...For the SAT, I implement a results driven and rigorous 7 week strategy. PLEASE NOTE: I only take serious SAT students who have time, the drive, and a strong personal interest in learning the
tools and tricks to boost their score. Background: I graduated from UCLA, considered a New Ivy, with a B.S. in Integrative Biology and Physiology with an emphasis in physiology and human anatomy.
26 Subjects: including precalculus, SAT math, linear algebra, algebra 1
...I have worked with Microsoft Excel for over 10 years. I use it for budgets, graphing, printing labels among other things. I have helped many people from the most basic computer skill level up
to quite advanced and can tailor my teaching style to your needs.
19 Subjects: including algebra 1, algebra 2, calculus, grammar
Related Mantua, NJ Tutors
Mantua, NJ Accounting Tutors
Mantua, NJ ACT Tutors
Mantua, NJ Algebra Tutors
Mantua, NJ Algebra 2 Tutors
Mantua, NJ Calculus Tutors
Mantua, NJ Geometry Tutors
Mantua, NJ Math Tutors
Mantua, NJ Prealgebra Tutors
Mantua, NJ Precalculus Tutors
Mantua, NJ SAT Tutors
Mantua, NJ SAT Math Tutors
Mantua, NJ Science Tutors
Mantua, NJ Statistics Tutors
Mantua, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Mantua_NJ_Math_tutors.php","timestamp":"2014-04-21T05:19:34Z","content_type":null,"content_length":"23920","record_id":"<urn:uuid:67a06b3b-731a-4c3f-a55e-4f4dfa365866>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
Madison Heights, MI
West Bloomfield, MI 48322
Master Certified Coach for Exam Prep, Mathematics, & Physics
...I look forward to speaking with you and to establishing a mutually beneficial arrangement in the near future! Best Regards, Brandon S.
Algebra 1 covers topics such as linear equations, systems of linear equations, polynomials, factoring, quadratic equations,...
Offering 10+ subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Madison_Heights_MI_Algebra_tutors.aspx","timestamp":"2014-04-20T16:24:03Z","content_type":null,"content_length":"59600","record_id":"<urn:uuid:52d247bb-2434-4fa6-80f6-08a4d52642d2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
need help to write code how to convert binary string to decimal
Newbie Poster
7 posts since May 2011
Reputation Points: -1 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
I am a beginner in MIPS and we're given this assignment to write a program in MIPS to convert a binary string to decimal.
I did some research and I found some helpful information:
TO convert a binary number to decimal
Let X be a binary number, n digits in length, composed of bits Xn-1 ... Xo
Let D be a decimal number
Let i be a counter
1. Let D = 0
2. Let i = 0
3. While i < n do:
If Xi == 1 (i.e if bit i in X is 1), then set D = (D + 2 to the power of i)
Set i = (i + 1)
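In a higher-level language the algorithm above is only a few lines. Here is a Python sketch of the same loop, just to check the logic before doing it in assembly:

```python
def bin_to_dec(x):
    """Convert a binary string like '1011' to decimal, following
    the pseudocode: D starts at 0, and for each bit i that is 1
    we add 2**i."""
    n = len(x)
    d = 0
    for i in range(n):
        # In the pseudocode X is written X(n-1) ... X0, so bit i
        # is the i-th character counting from the RIGHT end.
        if x[n - 1 - i] == '1':
            d = d + 2 ** i
    return d

print(bin_to_dec('1011'))      # 11
print(bin_to_dec('00000110'))  # 6
```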
I am trying to write these in assembly language and really need help. Here is what I have started; I need help to guide me to finish this :( it's very hard for me
.align 2
move $zero, $r3 # assume that $r3 = 0, represent a decimal num
addi $r4, $r4, 0 # i = 0, store in $r4
addi $r5, $r5, 8 #8 digits in length
li $v0, 4 #code for print string
la $a0, prompt #load address of prompt into $a0
syscall #print the prompt message
li $v0, 8 #code for read strings
la $a0, binary #addr of buffer (binary)
li $a1, 9 #size of buffer (1 byte)
syscall #
Loop: slt $r1, $r4, $r5 #while i < n
beg # am stuck here
prompt: .asciiz "Insert Binary String: \n"
output: .asciiz "The decimal number is: "
I am stuck here:
If Xi == 1 (i.e if bit i in X is 1), then set D = (D + 2 to the power of i)
Set i = (i + 1)
How do i write the Xi( bit i in X) in mips and plus look at my code above please | {"url":"http://www.daniweb.com/software-development/assembly/threads/365823/need-help-to-write-code-how-to-convert-binary-string-to-decimal","timestamp":"2014-04-20T08:33:25Z","content_type":null,"content_length":"31388","record_id":"<urn:uuid:b4a0c934-d2c6-4cf3-8184-1672622226af>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: The de Bruijn-Erdos Theorem for Hypergraphs
Noga Alon
Keith E. Mellinger
Dhruv Mubayi
Jacques Verstraëte
May 17, 2011
Fix integers $n \ge r \ge 2$. A clique partition of $\binom{[n]}{r}$ is a collection of proper subsets $A_1, A_2, \ldots, A_t \subset [n]$ such that $\bigcup_i \binom{A_i}{r}$ is a partition of $\binom{[n]}{r}$. Let $cp(n, r)$ denote the minimum size of a clique partition of $\binom{[n]}{r}$. A classical theorem of de Bruijn and Erdos states that $cp(n, 2) = n$. In this paper we study $cp(n, r)$, and show in general that for each fixed $r \ge 3$,
$$cp(n, r) \ge (1 + o(1))\, n^{r/2}$$
as $n \to \infty$. | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/848/1705932.html","timestamp":"2014-04-21T02:08:44Z","content_type":null,"content_length":"7597","record_id":"<urn:uuid:819a0e0a-d779-4b85-9c2f-d54f38f26dbc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
The curse of dimensionality: How to define outliers in high-dimensional data?
After my post on detecting outliers in multivariate data in SAS by using the MCD method, Peter Flom commented "when there are a bunch of dimensions, every data point is an outlier" and remarked on
the curse of dimensionality. What he meant is that most points in a high-dimensional cloud of points are far away from the center of the cloud.
Distances and outliers in high dimensions
You can demonstrate this fact with a simulation. Suppose that you simulate 1,000 observations from a multivariate normal distribution (denoted MVN(0,Σ)) in d-dimensional space. Because the density of
the distribution is highest near the mean (in this case, the origin), most points are "close" to the mean. But how close is "close"? You might extrapolate from your knowledge of the univariate normal
distribution and try to define an "outlier" to be any point whose distance from the origin is more than some constant, such as five standardized units.
That sounds good, right? In one dimension, an observation from a normal distribution that is more than 5 standard deviations away from the mean is an extreme outlier. Let's see what happens for
high-dimensional data. The following SAS/IML program does the following:
1. Simulates a random sample from the MVN(0,Σ) distribution.
2. Uses the Mahalanobis module to compute the Mahalanobis distance between each point and the origin. The Mahalanobis distance is a standardized distance that takes into account correlations between
the variables.
3. Computes the distance of the closest point to the origin.
proc iml;
/* Helper function: return correlation matrix with "compound symmetry" structure:
{v+v1 v1 v1,
v1 v+v1 v1,
v1 v1 v+v1 }; */
start CompSym(N, v, v1);
return( j(N,N,v1) + diag( j(N,1,v) ) );
finish;
load module=Mahalanobis; /* or insert definition of module here */
call randseed(12345);
N = 1000; /* sample size */
rho = 0.6; /* rho = corr(x_i, x_j) for i^=j */
dim = T(do(5,200,5)); /* dim=5,10,15,...,200 */
MinDist = j(nrow(dim),1); /* minimum distance to center */
do i = 1 to nrow(dim);
d = dim[i];
mu = j(d,1,0);
Sigma = CompSym(d,1-rho,rho); /* get (d x d) correlation matrix */
X = randnormal(N, mu, Sigma); /* X ~ MVN(mu, Sigma) */
dist = Mahalanobis(X, mu, Sigma);
MinDist[i] = min(dist); /* minimum distance to mu */
end;
The following graph shows the distance of the closest point to the origin for various dimensions.
The graph shows that the minimum distance to the origin is a function of the dimension. In 50 dimensions, every point of the multivariate normal distribution is more than 5 standardized units away
from the origin. In 150 dimensions, every point is more than 10 standardized units away! Consequently, you cannot define outliers a priori to be observations that are more than 5 units away from the
mean. If you do, you will, as Peter said, conclude that in 50 dimensions every point is an outlier.
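For readers without SAS/IML, here is a rough NumPy equivalent of the simulation above (a sketch, not a line-by-line translation; it checks only a few dimensions, but it shows the same effect — the minimum Mahalanobis distance grows with the dimension):

```python
import numpy as np

rng = np.random.default_rng(12345)
N, rho = 1000, 0.6
min_dist = {}
for d in (5, 50, 150):
    # compound-symmetry correlation matrix: 1 on the diagonal, rho elsewhere
    Sigma = np.full((d, d), rho) + (1.0 - rho) * np.eye(d)
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=N)
    # squared Mahalanobis distance of each row from the origin
    d2 = np.einsum('ij,ij->i', X @ np.linalg.inv(Sigma), X)
    min_dist[d] = float(np.sqrt(d2.min()))
    print(d, round(min_dist[d], 2))
```

In a typical run the minimum distance is well under 1 for $d=5$ but around 10 for $d=150$: no fixed cutoff works across dimensions.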
How to define an outlier cutoff in high dimensions
The resolution to this dilemma is to incorporate the number of dimensions into the definition of a cutoff value. For multivariate normal data in d dimensions, you can show that the squared
Mahalanobis distances are distributed like a chi-square distribution with d degrees of freedom. (This is discussed in the article "Testing data for multivariate normality.") Therefore, you can use
quantiles of the chi-square distribution to define outliers. A standard technique (which is used by the ROBUSTREG procedure to classify outliers) is to define an outlier to be an observation whose
distance to the mean exceeds the 97.5th percentile. The following graph shows a the 97.5th percentile as a function of the dimension d. The graph shows that the cutoff distance is greater than the
minimum distance and that the two distances increase in tandem.
cutoff = sqrt(quantile("chisquare", 0.975, dim)); /* 97.5th pctl as function of dimension */
If you use the chi-square cutoff values, about 2.5% of the observations will be classified as outliers when the data is truly multivariate normal. (If the data are contaminated by some other
distribution, the percentage could be higher.) You can add a few lines to the previous program in order to compute the percentage of outliers when this chi-square criterion is used:
/* put outside DO loop */
PctOutliers = j(nrow(dim),1);/* pct outliers for chi-square d cutoff */
/* put inside DO loop */
cutoff = sqrt( quantile("chisquare", 0.975, d) ); /* dist^2 ~ chi-square */
PctOutliers[i] = sum(dist>cutoff)/N; /* dist > statistical cutoff */
The following graph shows the percentage of simulated observations that are classified as outliers when you use this scheme. Notice that the percentage of outliers is close to 2.5% independent of the
dimension of the problem! By knowing the distribution of distances in high-dimensional MVN data, we are able to define a cutoff value that does not classify every point as an outlier.
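The whole rule fits in a few lines outside SAS as well. This NumPy sketch (an illustration, not the blog's code; it assumes SciPy is available for the chi-square quantile) simulates MVN data, flags points whose squared distance exceeds the chi-square 97.5th percentile, and reports the flagged fraction, which stays near 2.5% across dimensions:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2012)

def pct_outliers(d, N=1000, rho=0.6):
    """Fraction of MVN(0, Sigma) points flagged by the chi-square rule."""
    Sigma = np.full((d, d), rho) + (1.0 - rho) * np.eye(d)
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=N)
    dist2 = np.einsum('ij,ij->i', X @ np.linalg.inv(Sigma), X)
    cutoff2 = chi2.ppf(0.975, df=d)   # squared-distance cutoff
    return float(np.mean(dist2 > cutoff2))

for d in (5, 50, 150):
    print(d, pct_outliers(d))   # each close to 0.025
```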
To conclude, in high dimensions every data point is far away from the mean. If you use a constant cutoff value then you will erroneously classify every data point as an outlier. However, you can use
statistical theory to define a reliable rule to detect outliers, regardless of the dimension of the data.
You can download the SAS program used in this article, including the code to create the graphs.
4 Comments
1. Thanks a lot -- I learned again. I wonder if you can extend this discussion to the linear model. I feel many people are interested in detecting outliers/influential points in the linear model setting (with a dependent variable).
□ Of course. This is essentially what the ROBUSTREG procedure does to detect high leverage points, but it uses robust estimates of location and scale instead of the classical mean and covariance matrix.
2. Hi Rick
Cool. You can certainly do what you say, and the IML implementation is nice. I don't have time to play with it right now, but will do so ASAP (I hope this weekend).
But, while this definitely shows how to find points that are odder than expected on a given metric (Mahalanobis distance, in this case; clearly you could do something similar for other metrics) if the data are MVN (and, clearly, you could use some other distribution), does this really get at "outlier"?
(Here I am just sort of thinking in public.... these thoughts are not fully developed, as will be obvious)
If you have (say) 1000 data points on data with 1000 variables, with some pairs highly correlated and others hardly at all (roughly a problem that I had where I used to work) then you expect 25
or so to be above the cutoff (and, of course, you could pick a different cutoff). But, then, that is what you would *expect*. This is similar to a one-dimensional case - if you sample a LOT of
people, then some will be odd; if you sample 100,000 people you might expect to find a 7 foot tall person).
Further (and this could be investigated - or it may be known to theory) how does Mahalanobis distance work when only 1 pair of variables is weird, but neither alone is weird, and the other
variables are fine? For example, if you reported that 1 % of people in a general population in the USA were widowed, that wouldn't be weird. And if you said 10 % were under 12 years old, that
also wouldn't be weird. But if 0.1 % were widowed *and* under 12, that would be weird! In fact, even in a data set with 100,000 people, you might not expect a single widow under 12 years old.
OK, I'll stop babbling here. :-)
□ Briefly, there are multivariate outliers and there are univariate (in general, lower-dimensional) outliers. You can have a MV outlier that is not a univariate outlier in any coordinate, such
as your 12-year-old widow. You can also have univariate outliers that are not MV outliers.
For highly correlated data, the data will mostly lie in a lower-dimensional subspace. You can use principal components to reduce the dimensionality to a linear subspace. Then you can classify
outliers that are within the subspace but away from the center, or you can have outliers that are near the center but are off the subspace. Or both. The geometry of the MV case is very rich.
Post a Comment | {"url":"http://blogs.sas.com/content/iml/2012/03/23/the-curse-of-dimensionality/","timestamp":"2014-04-19T14:29:41Z","content_type":null,"content_length":"69395","record_id":"<urn:uuid:07efc806-1ad5-49e8-951b-673c89fdeb83>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question not found.
Weegy: Ingredients and Equipment 2 cups whole milk 1 cup sugar 1/4 cup fat-free powdered milk 8 eggs (yolks only needed) 1 cup heavy whipping cream (OR half-and-half OR light cream for lighter more
ice cream, [ more like gelato) 1 teaspoon vanilla extract 3 cups of prepared fruit (strawberries, peaches, raspberries, mangoes, or whatever you have! See step 7 for details. 1 ice cream maker 1
large pot 1 wooden or plastic ... | {"url":"http://www.weegy.com/?ConversationId=EE50F412","timestamp":"2014-04-19T22:15:32Z","content_type":null,"content_length":"35170","record_id":"<urn:uuid:326e2995-b683-493a-9c30-e21dff2b26d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
truncation in an exact 2-category
In a suitably exact 2-category, we can construct truncations as quotients of suitable congruences.
This case is easy and just like for 1-categories.
Define the support $\mathrm{supp}(A) = A_{\le -1}$ of an object $A$ to be the image of the unique morphism $A\to 1$. That is, $A\to \mathrm{supp}(A)\to 1$ is an eso-ff factorization. Since $\mathrm{supp}(A)\to 1$ is ff, $\mathrm{supp}(A)$ is subterminal, and since esos are orthogonal to ffs, it is a reflection into $\mathrm{Sub}(1)$.
Perhaps surprisingly, the next easiest case is the posetal reflection.
In any (1,2)-exact 2-category $K$ the inclusion $\mathrm{pos}(K) \hookrightarrow K$ of the posetal objects has a left adjoint called the (0,1)-truncation.
Given $A$, define $A_1$ to be the (ff) image of $A^2 \to A\times A$. Since esos are stable under pullback, $A_1 \rightrightarrows A$ is a homwise-discrete category, and it clearly has a functor from $\mathrm{ker}(A)$, so it is a (1,2)-congruence. Let $A\to P$ be its quotient. By the classification of congruences, $P$ is posetal. And if we have any $f:A\to Q$ where $Q$ is posetal, then we have an induced functor $\mathrm{ker}(A)\to \mathrm{ker}(f)$. But $Q$ is posetal, so $\mathrm{ker}(f)$ is a (1,2)-congruence, and thus $\mathrm{ker}(A)\to \mathrm{ker}(f)$ factors through a functor $A_1 \to \mathrm{ker}(f)$. This then equips $f$ with an action by the (1,2)-congruence $A_1 \rightrightarrows A$, so that it descends to a map $P\to Q$. It is easy to check that 2-cells also descend, so $P$ is a reflection of $A$ into $\mathrm{pos}(K)$.
This is actually a special case of the (eso+full,faithful) factorization system, since an object $A$ is posetal iff $A\to 1$ is faithful. The proof is also an evident specialization of that.
The discrete reflection, on the other hand, requires some additional structure.
In any 1-exact and countably-coherent 2-category $K$, the inclusion $\mathrm{disc}(K) \hookrightarrow K$ of the discrete objects has a left adjoint called the 0-truncation or discretization.
Given $A$, define $A_1$ to be the equivalence relation generated by the image of $A^2 \to A\times A$; this can be constructed with countable unions in the usual way. Then $A_1 \rightrightarrows A$ is a 1-congruence, and as in the posetal case we can show that its quotient is a discrete reflection of $A$.
There are other sufficient conditions on $K$ for the discretization to exist; see for instance classifying cosieve. We can also derive it if we have groupoid reflections, since the discretization is
the groupoid reflection of the posetal reflection.
The groupoid reflection is the hardest and also requires infinitary structure. Note that the 2-pretopos $\mathrm{FinCat}$ does not admit groupoid reflections (the groupoid reflection of the “walking parallel pair of arrows” is $B\mathbb{Z}$).
In any (2,1)-exact and countably-extensive 2-category $K$, the inclusion $\mathrm{gpd}\left(K\right)↪K$ of the groupoidal objects has a left adjoint called the (1,0)-truncation. | {"url":"http://ncatlab.org/michaelshulman/show/truncation+in+an+exact+2-category","timestamp":"2014-04-16T19:00:36Z","content_type":null,"content_length":"20152","record_id":"<urn:uuid:6b7fcbbe-d46e-49a3-98bd-c95af682d75d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
#3 Simplifying Radicals
September 14th 2007, 02:44 PM #1
#3 Simplifying Radicals
Hello again; these two are the last probs for some time. As usual, please check my work.
The directions are: Rationalize the denominator. $\frac{1\,-\,\sqrt{2}}{2\sqrt{3}\,-\,\sqrt{6}}$
So I multiply by the conjugate & simplify: $\frac{\sqrt{6}\,-\,2\sqrt{6}}{12\,-\,6}\;=\boxed{\;-\,\frac{\,\sqrt{6}}{6}}$
Now this beastly problem w/the same directions: $\sqrt[3]{\frac{16}{9}}$
I thought to write it like this: $\frac{\sqrt[3]{16}}{\sqrt[3]{9}}$
Then multiply the top and bottom by $\sqrt[3]{3}$: $\frac{\sqrt[3]{48}}{3}\,=\,\boxed{\frac{2\sqrt[3]{6}}{3}}$
Does this look good? If you have another way that might be better, do say. Thanks once again!
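A quick numeric sanity check in Python (just plugging in decimals) confirms both boxed answers equal the original expressions:

```python
from math import isclose, sqrt

# Problem 1: (1 - sqrt(2)) / (2*sqrt(3) - sqrt(6))  vs  -sqrt(6)/6
lhs1 = (1 - sqrt(2)) / (2 * sqrt(3) - sqrt(6))
print(isclose(lhs1, -sqrt(6) / 6))          # True

# Problem 2: cube root of 16/9  vs  2 * cbrt(6) / 3
lhs2 = (16 / 9) ** (1 / 3)
print(isclose(lhs2, 2 * 6 ** (1 / 3) / 3))  # True
```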
Hello, Jonboy!
Rationalize the denominator: . $\frac{1\,-\,\sqrt{2}}{2\sqrt{3}\,-\,\sqrt{6}}$
So I multiply by the conjugate & simplify: $\frac{\sqrt{6}\,-\,2\sqrt{6}}{12\,-\,6}\;=\boxed{\;-\,\frac{\,\sqrt{6}}{6}}$ . . . . Right!
Now this beastly problem w/the same directions: $\sqrt[3]{\frac{16}{9}}$
I thought to write it like this: $\frac{\sqrt[3]{16}}{\sqrt[3]{9}}$
Then multiply the top and bottom by $\sqrt[3]{3}$: $\frac{\sqrt[3]{48}}{3}\,=\,\boxed{\frac{2\sqrt[3]{6}}{3}}$ . . . . Great!
You did this one in the most efficient way possible.
Many (most?) would multiply top and bottom by $\sqrt[3]{81}$
. . I did ... many years ago.
Thanks for the confirmation Soroban! Also for posting that reply a pretty long time ago showing the different ways to rationalize the denominator, that's where I learned my shortcut.
September 14th 2007, 03:00 PM #2
Super Member
May 2006
Lexington, MA (USA)
September 14th 2007, 04:49 PM #3 | {"url":"http://mathhelpforum.com/algebra/18974-3-simplifying-radicals.html","timestamp":"2014-04-19T14:08:37Z","content_type":null,"content_length":"38885","record_id":"<urn:uuid:b50dbe9a-1c25-40ce-8dc7-46d39a91ebaa>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
CS W4241
CS W4241 - Numerical Algorithms and Complexity
Spring 2013
Monday, Wednesday, 4:10 - 5:25 pm
Professor: Joseph F. Traub
Office Hours: TBA
TA: TBA
● Midterm 30%
● Final 40%
● Homework 30%
● Extra credit homework 10%
● TOTAL 110%
The course consists of two parts, complexity and algorithms.
PART I - COMPLEXITY
Rather than a text I'll give you handouts.
The following is an indication of the topics I'll cover.
1. Overview
2. Integration Example
3. Breaking the Curse
4. Mathematical Finance
5. Model of Computation
6. Formal Models and Scientific Knowledge
7. Complexity of Linear Programming
8. Complexity of Verification
9. Clock Synchronization in Distributed Networks
10. Assigning Values to Mathematical Hypotheses
OPTIONAL MATERIAL
1. General Formulation of Information-Based Complexity
2. Integration Example Concluded
3. Value of Information in Computation
A number of additional complexity topics will be covered in the lectures. There will be handouts for this material also. Topics include
● Playing 20 questions against a liar
● Continuous binary search
● Fast matrix multiplication
● Fast Fourier transform (FFT)
● Polynomial Evaluation
● Effect of precomputation
PART II - ALGORITHMS
The material on algorithms will be covered by handouts.
1. Nonlinear Equations
i. Univariate
ii. Multivariate
2. Polynomial zeros
i. Bernoulli algorithm
ii. Jenkins - Traub algorithm
3. Randomization
i. High dimensional integration
4. Numerical solution of partial differential equations
i. Elliptic equations
ii. Parabolic equations
iii. Hyperbolic equations
5. Applications to science, engineering, finance. | {"url":"http://www.cs.columbia.edu/~traub/html/body_csw4241.html","timestamp":"2014-04-20T18:34:27Z","content_type":null,"content_length":"4229","record_id":"<urn:uuid:8f565e12-c0a8-47af-a358-9df3f77858f8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
1.5 tsp equals how many ml
You asked:
1.5 tsp equals how many ml
7.393382390625 millilitres
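The figure follows directly from the US customary definition of the teaspoon, as a quick check shows:

```python
# The US gallon is defined as exactly 3.785411784 litres, and one US
# teaspoon is 1/768 of a gallon, i.e. exactly 4.92892159375 ml.
ML_PER_US_TSP = 4.92892159375

def tsp_to_ml(tsp):
    """Convert US teaspoons to millilitres."""
    return tsp * ML_PER_US_TSP
```

With this, `tsp_to_ml(1.5)` reproduces the quoted 7.393382390625 ml.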
| {"url":"http://www.evi.com/q/1.5_tsp_equals_how_many_ml","timestamp":"2014-04-21T10:10:10Z","content_type":null,"content_length":"56684","record_id":"<urn:uuid:038844f6-e7b7-47b4-ae36-c62df0a0a688>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Demand and Supply of Currencies of Small Denominations: A Theoretical Framework
Bhattacharya, Kaushik (2009): Demand and Supply of Currencies of Small Denominations: A Theoretical Framework.
Download (285Kb) | Preview
The paper presents a theoretical framework for the demand and supply of currencies of small denominations. In our framework, both demand and supply equations emerge from an optimization problem. Demand
functions for small denominations are obtained from a linear expenditure system. Our main contention is that economic agents would like to hold a fixed number of small changes, independent of their
respective total cash holdings. However, in our model the fixed quantity is influenced by the probability that in a currency transaction, the counterparty would be able to provide the small change if
needed. The supply function is derived from an optimization problem where the central bank balances its operational cost with the probability that an individual would be able to carry out “small”
transactions independently, without the help of counterparty. In this demand-supply framework, the probability that a randomly chosen individual in an economy would hold certain currency combinations
is interpreted as “price”. We attempt to show that in a dynamic environment, such interaction could be understood by specifying a cob-web type model where expectations are formed based on previous
period’s experience. As an operational rule, it is proposed that the central bank should increase the supply of small denominations at a rate marginally above the growth rate of economically active
population and stop minting as soon as some of the small denominations start return in the currency chest. We also suggest how demand for “small change” could be estimated from the “lifetime” of the
“smallest” denomination.
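As a purely illustrative sketch (not taken from the paper, and with arbitrary parameter values), a cob-web style adjustment in which supply moves each period toward previously observed demand behaves like this:

```python
def cobweb(demand, supply0, alpha=0.5, periods=10):
    """Toy cob-web style adjustment: each period, supply moves a
    fraction `alpha` of the way toward last period's observed demand."""
    path = [float(supply0)]
    for _ in range(periods):
        path.append(path[-1] + alpha * (demand - path[-1]))
    return path
```

With a constant demand the path converges geometrically toward it, which is the expectation-formation dynamic the abstract alludes to.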
Item Type: MPRA Paper
Original Title: Demand and Supply of Currencies of Small Denominations: A Theoretical Framework
Language: English
Keywords: Small Change, Denomination, Currency Management, Poisson Distribution
Subjects: D - Microeconomics > D0 - General
E - Macroeconomics and Monetary Economics > E5 - Monetary Policy, Central Banking, and the Supply of Money and Credit
E - Macroeconomics and Monetary Economics > E4 - Money and Interest Rates
Item ID: 27334
Depositing User: Kaushik Bhattacharya
Date Deposited: 09. Dec 2010 13:41
Last Modified: 12. Feb 2013 13:47
References: Ball L and NG Mankiw, 1995: ‘Relative-Price Changes as Aggregate Supply Shocks’, Quarterly Journal of Economics, 110, 161–193.
Bhattacharya K and H Joshi, 2001: ‘Modeling Currency in Circulation in India’, Applied Economics Letters, 8, 585 – 592.
Bhattacharya K and H Joshi, 2002: ‘An Almon Approximation of the Day of the Month Effect in Currency in Circulation’, Indian Economic Review, 37, 163 –174.
Burdett K, A Trejos and R Wright, 2001: ‘Cigarette Money’, Journal of Economic Theory, 99, 117–142.
Cassino V, P Misich and J Barry, 1997: 'Forecasting the Demand of Currency', Reserve Bank Bulletin, Reserve Bank of New Zealand, 60, 27 – 33.
Cipolla CM, 1956: “Money, Prices, and Civilization in the Mediterranean World: Fifth to Seventeenth Century“, Gordian Press, New York.
Cramer JS, 1983: ‘Currency by Denomination’, Economics Letters, 12, 299 – 303.
Durand J, 1961: ‘L’Attraction des Nombres Ronds et ses Conséquences Économique’, Revue Française de Sociologie, 11, 131–151.
Ghosh JK, D Coondoo, N Sarkar and C Neogi, 1991: ‘A Stochastic Model for Forecasting Denominational Composition of Currency Requirements’, Mimeo, Indian Statistical Institute.
Jadhav N, 1994: Monetary Economics for India, Macmillan India, New Delhi.
Kohli U, 1988: ‘A Note on Banknote Characteristics and the Demand for Currency by Denomination’, Journal of Banking and Finance, 12, 389 -- 399.
Lee M, N Wallace and T Zhu, 2005: ‘Modeling Denomination Structures’, Econometrica, 73, 949–960.
Palanivel T and LR Klein, 1999: 'An Econometric Model for India with Emphasis on the Monetary Sector', The Developing Economies, 37, 275 – 336.
Redish A and WE Weber, 2007: ‘A Model of Small Change Shortages’, Mimeo, Federal Reserve Bank of Minneapolis.
Sargent TJ and FR Velde, 2002: “The Big Problem of Small Change”, Princeton Economic History of the Western World, Princeton University Press, Princeton, NJ.
Sarkar N, P Maiti and D Coondoo, 1993: ‘On Forecasting Denominational Requirements of Currency in India’, Journal of Quantitative Economics, 9, 301 – 313.
Sumner S, 1990: ‘Demand for Currency by Denomination’, Quarterly Review of Economics and Business, 30, 75 -- 89.
Sumner S, 1993: ‘Privatizing the Mint’, Journal of Money, Credit and Banking, 25, 13 – 29.
Telser LG, 1995: ‘Optimal Denominations of Coins and Currency’, Economics Letters, 49, 425 -- 427.
Tschoegl AE, 1997: ‘Optimal Denomination of Currency’, Journal of Money, Credit and Banking, 29, 546 – 554.
Wallace N, 2003: ‘Modeling Small Change: A Review Article’, Mimeo, Pennsylvania State University.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/27334 | {"url":"http://mpra.ub.uni-muenchen.de/27334/","timestamp":"2014-04-19T17:44:14Z","content_type":null,"content_length":"25864","record_id":"<urn:uuid:702b201c-831a-4d00-ae29-4d38b2f358db>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
An OpenBLAS-based Rblas for Windows 64
One of the more important pieces of software that powers R is its BLAS, which stands for Basic Linear Algebra Subprograms. This is the suite of programs which, as its name implies, performs basic
linear algebra routines such as vector copying, scaling and dot products; linear combinations; and matrix multiplication. It is the engine on which LINPACK, EISPACK, and the now industry-standard
LAPACK and its variants are built. As a lower-level software suite, the BLAS can be fine-tuned to different computer architectures, and a tuned BLAS can create a dramatic speedup in many fundamental
R routines. There are a number of projects developing tuned BLAS suites such as the open source ATLAS, GotoBLAS2, and OpenBLAS, and the closed source Intel MKL and AMD ACML libraries.
In ‘R’ itself, the BLAS is stored in the shared library file Rblas.dll, and a properly compiled tuned version can just be “dropped in” to the bin subdirectory and used immediately. For people working
in Linux, there is significant support for specially tuned BLAS (and LAPACK) files, which can be found in great detail in the R Installation manual—the support for Windows is somewhat less robust, to
be charitable. For many years, I was constrained to working in a 32-bit Windows environment, and took advantage of the 32-bit Windows compiled versions that existed in R’s repository of tuned BLAS
files. However, at that time, the most recent version was for a Pentium 4, so over the next few years, I struggled and compiled ATLAS-based Rblas’s for Windows 32-bit. I did this for the Core 2 Duo
(C2D) and the quad-core SandyBridge (Core i7 called C2i7 in the repository). The speedup was dramatic (see this blog, which uses the Core 2 BLAS I compiled, for an example) However, I found ATLAS to
be difficult to compile, subject to any number of issues, and not really being a programmer, I often ran into issues I could not solve. Moreover, I was never able to successfully compile a 64-bit
BLAS which passed the comprehensive make check-all test suite.
Recently, I made the complete switch to 64-bit Windows at both work and home, so finding a 64-bit Rblas became much more important. There are some pre-compiled 64-bit BLAS files for R, graciously
compiled by Dr. Ei-ji Nakama, which can be found here. Using these binaries, I found a dramatic increase in speed over the reference BLAS. However, the most recent processor-specific BLAS in that
repository is for the Nehalem architecture, and cannot take advantage of the SSE4 and AVX operations built into the new SandyBridge and IvyBridge cores. Much to my frustration, I had many failures
trying to compile a 64-bit ATLAS-based BLAS which, when compiled in R, would pass all the checks. With each try taking between six and a dozen hours, I eventually gave up trying to use ATLAS and
resigned myself to living with the GotoBLAS-based files—which honestly was not much of a resignation.
A bit more recently, came across the OpenBLAS project, which claims to have reached near-MKL speeds on the Sandy/IvyBridge architecture due to some elegant hand-coding and optimization, and I was
hoping to be able to take advantage of this in R. Unfortunately, there are no pre-compiled binaries to make use of, and so I had to attempt the compilation on my own. What made this a bit more
difficult is that officially, R for Windows does not support using Goto or ACML based BLAS routines, and even ATLAS has limited support (see this thread, specifically Dr. Ripley’s response). This
called for a lot of trial and error, originally resulting in dismal failure.
Serendipitously, around the time of the 3.0.1 release, there was an OpenBLAS update as well. Trying again, I was finally successful in compiling a single-threaded, OpenBLAS v2.8-based BLAS for the
SandyBridge architecture on Windows 64-bit that, when used in the R compilation, created an Rblas that passed make check-all! For those interested in compiling their own, once the OpenBLAS is
compiled, it can be treated as an ATLAS BLAS in R's Makefile.local, with the only additional change being pointing to the compiled .a file in \src\extra\blas\Makefile.win.
Once I was successful with the SandyBridge-specific file, I compiled an Rblas that was not Sandy-Bridge dependent, but could be used on any i386 machine. I plan on submitting both to Dr. Uwe Ligges
at R, and hope that, like the other BLAS’s I submitted, they will be posted eventually.
To demonstrate the increase in speed that a tuned BLAS can provide in R, I ran a few tests. First, I created two 1000×1000 matrices populated with random normal variables. Specifically:
A <- matrix(rnorm(1e6, 1000, 100), 1000, 1000)
B <- matrix(rnorm(1e6, 1000, 100), 1000, 1000)
write.csv(A, file="C:/R/A.csv", row.names=FALSE)
write.csv(B, file="C:/R/B.csv", row.names=FALSE)
I can provide the specific matrices for anyone interested. I then compiled a basically vanilla R 3.0.2 for 64 bit windows, using only -mtune=corei7-avx -O3 for optimizations, so the code should run
on any i386. I followed the compilation steps for a full installation, so it included base, bitmapdll, cairodevices, recommended, vignettes, manuals, and rinstaller for completeness. Using a Dell
M4700 (i7-3740 QM 2.7Ghz, Windows 7 Professional 64bit, 8GB RAM) I tested the following BLAS’s:
• Reference
• GotoBLAS Dynamic
• GotoBLAS Nehalem
• OpenBLAS Dynamic
• OpenBLAS SandyBridge
I updated all packages and installed the microbenchmark package (all from source). To test the effects of the different BLAS’s I renamed and copied the appropriate blas to Rblas.dll each time. The
test ran multiple copies of crossprod, solve, qr, svd, eigen, and lu (the last needs the Matrix package, but it is a recommended package). The actual test code is:
library(microbenchmark)
library(Matrix)
A <- as.matrix(read.csv(file="C:/R/BLAS/A.csv", colClasses='numeric'))
B <- as.matrix(read.csv(file="C:/R/BLAS/B.csv", colClasses='numeric'))
colnames(A) <- colnames(B) <- NULL
microbenchmark(crossprod(A,B), solve(A), qr(A, LAPACK=TRUE), svd(A), eigen(A),
               lu(A), times=100L, unit='ms')
The results are illuminating:
Reference BLAS:
Unit: milliseconds
expr min lq median uq max neval
crossprod(A, B) 1173.4761 1184.7500 1190.9430 1198.4409 1291.9620 100
solve(A) 1000.1138 1011.5769 1018.7613 1027.0789 1172.6551 100
qr(A, LAPACK = TRUE) 625.3756 633.2889 638.0123 644.8645 713.7831 100
svd(A) 3045.0855 3074.7472 3093.8418 3138.0180 3621.9590 100
eigen(A) 5491.4986 5563.1129 5586.9327 5632.5695 5836.6824 100
lu(A) 170.8486 172.5362 175.5632 178.7300 247.1116 100
GotoBLAS Dynamic:
Unit: milliseconds
expr min lq median uq max neval
crossprod(A, B) 63.68471 83.16222 92.33265 104.5118 184.4941 20
solve(A) 207.38801 229.81548 250.53656 276.6736 391.5336 20
qr(A, LAPACK = TRUE) 171.93558 175.22074 178.57133 186.3217 996.7584 20
svd(A) 892.90194 920.55781 969.79708 1069.5740 12185.9414 20
eigen(A) 14870.65481 14943.13804 15113.49240 15513.8285 29100.0788 20
lu(A) 70.65831 76.07561 83.52785 146.1552 152.5949 20
GotoBLAS Nehalem:
Unit: milliseconds
expr min lq median uq max neval
crossprod(A, B) 52.86457 64.42491 73.75266 77.89007 82.05677 20
solve(A) 203.96266 209.78135 220.36275 229.09164 298.06924 20
qr(A, LAPACK = TRUE) 171.22182 174.58935 175.90882 181.63144 247.92046 20
svd(A) 895.20834 904.17256 950.78584 970.47127 1057.90653 20
eigen(A) 15429.49102 15470.62708 15552.95074 15627.39700 15759.67103 20
lu(A) 67.13746 72.27017 74.26725 77.52482 85.24281 20
OpenBLAS Dynamic:
Unit: milliseconds
expr min lq median uq max neval
crossprod(A, B) 102.54157 102.99851 104.42070 105.95056 177.8572 100
solve(A) 180.64641 182.99257 186.41507 188.91836 261.6805 100
qr(A, LAPACK = TRUE) 188.08381 194.37999 197.10319 199.66507 281.2404 100
svd(A) 1027.00070 1040.84424 1055.35816 1108.67206 1193.3480 100
eigen(A) 2114.96652 2190.11265 2208.06810 2250.41861 2476.9335 100
lu(A) 57.47394 58.53336 59.80013 62.60323 139.1076 100
OpenBLAS SandyBridge:
Unit: milliseconds
expr min lq median uq max neval
crossprod(A, B) 102.41031 102.71278 104.14695 105.4141 179.8048 100
solve(A) 180.52238 181.97006 184.94760 187.6423 260.5379 100
qr(A, LAPACK = TRUE) 186.89180 191.01798 193.69381 199.2825 286.8123 100
svd(A) 1024.47896 1034.60627 1043.91214 1099.8361 1153.3881 100
eigen(A) 2108.22006 2172.05087 2192.65913 2216.7981 2375.9831 100
lu(A) 57.55384 59.10672 61.08402 62.9213 133.6456 100
All the tuned BLAS results are much better than the reference, with the exception of eigenvalue decomposition for the GotoBLAS-based Rblas. I do not know why that is the case, but the difference was
so severe that I had to run it only 20 times to have results in reasonable time. For the other routines, sometimes the OpenBLAS based version is quicker, other times not. I personally will use it
exclusively as the minor lag in comparison to some of the GotoBLAS timings is more than compensated for by the eigenvalue speedup, and overall, it is still anywhere between 3 and 10 times faster than
the reference BLAS. Regardless, it is clear that using a tuned BLAS can speed up calculations considerably.
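As a quick check on that claim, one can compute reference-to-OpenBLAS speedup ratios from the median timings transcribed from the tables above (taking the final table, which by the order of the list of tested builds should be the OpenBLAS SandyBridge run):

```python
# Median timings in milliseconds, transcribed from the benchmark
# tables above (reference BLAS vs. the last OpenBLAS table).
reference = {"crossprod": 1190.9430, "solve": 1018.7613, "qr": 638.0123,
             "svd": 3093.8418, "eigen": 5586.9327, "lu": 175.5632}
openblas = {"crossprod": 104.14695, "solve": 184.94760, "qr": 193.69381,
            "svd": 1043.91214, "eigen": 2192.65913, "lu": 61.08402}

# Ratio > 1 means the tuned BLAS is that many times faster.
speedup = {op: reference[op] / openblas[op] for op in reference}
for op, s in sorted(speedup.items(), key=lambda kv: -kv[1]):
    print(f"{op:9s} {s:4.1f}x")
```

The ratios range from roughly 2.5x (eigen) to over 11x (crossprod), broadly consistent with the 3-to-10-times figure in the text.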
Using a tuned BLAS is not the only way to increase R's speed. For anyone compiling R for themselves, throwing the proper flags for R in its compilation can squeeze a bit more speed out of it, and
activating byte-compilation can do a bit more. In my next post, I hope to show similar timing numbers, but this time, using an R compiled for my specific machine (Ivy Bridge) in concert with the
tuned BLAS.
5 Responses
1. I’m looking forward to something like this becoming generally available. I usually use R on a very similar system (R 3.0, Dell, Intel i7 CPU, 16GB RAM, Windows 64-bit), though I’m not sure how
much it would really help me because most of my wait time (which can be hours or days) is spent in the gbm, earth, and nnet packages, which I think do their calculations in their own C code.
1. I’m not that familiar with those packages, Andrew, but it stands to reason. The nnet source package has C source code which contains functions like “sigmoid” and “Build_Net” and the gbm has a
slew of C++ files, so it is likely that a faster BLAS will not help too much, although it cannot hurt to try. Have you considered porting any specific routines you have built, such as using
the Rcpp package?
2. What Blas should (or could) I use for:
Intel Xeon E5620 Westmere 2.4GHz (4-core)?
1. According to Wikipedia, the Westmere is the architecture between the Nehalem and the SandyBridge, and it has the AES instruction set but not the AVX instruction set, so I would suggest the
GotoBLAS compiled for Nehalem for now. If (hopefully when) the dynamic OpenBLAS I submitted to CRAN gets approved, that would be another option as well.
3. […] ← An OpenBLAS-based Rblas for Windows 64 […] | {"url":"http://www.avrahamadler.com/2013/10/22/an-openblas-based-rblas-for-windows-64/","timestamp":"2014-04-16T21:52:09Z","content_type":null,"content_length":"111537","record_id":"<urn:uuid:0ffbd853-e421-46b5-b1aa-645781d0fff8>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00587-ip-10-147-4-33.ec2.internal.warc.gz"} |
Definition of Ciphers
1. Verb. (third-person singular of cipher) ¹
¹ Source: wiktionary.com
Definition of Ciphers
1. cipher [v] - See also: cipher
Lexicographical Neighbors of Ciphers
cinques cipherer cipolin
cinsault cipherers cipolins
cion cipherhood cipolline onion
ciona intestinalis ciphering cipollini
cions cipherlike cipollinis
cioppino ciphers (current term) cipollino
cioppinos ciphertext cipollinos
cipher ciphertexts cippi
cipherable ciphonies cippus
ciphered ciphony ciprianiite
Literary usage of Ciphers
Below you will find example usage of this term as found in modern and/or classical literature:
1. An Elementary Treatise on Arithmetic by Silvestre François Lacroix (1825)
"As the ciphers placed at the end of these partial products, ... When the multiplicand is terminated by ciphers, they may at first be neglected, ..."
2. Principles of the Law of Nations: With Practical Notes and Supplementary by Archer Polson, Thomas Hartwell Horne (1848)
"A double key is given to each minister previously to his departure, viz., the cipher for writing ciphers, (chiffre chiffrant,) and the cipher for ..."
3. Guide to the Materials for American History in Roman and Other Italian Archives by Carl Russell Fish (1911)
"Register of ciphers to the nuncios extraordinary at Madrid and Paris. 1713-1721. ... 215 A. Letters and ciphers of the nuncios extraordinary. 1715-1719. ..."
4. Practical Arithmetic: Embracing the Science and Application of Numbers by Charles Davies (1876)
"Also, if a decimal point be placed on the right of an integral number, and ciphers be then annexed, the value will not bo changed: thus, ..."
5. Planned Invasion of Japan, 1945: The Siberian Weather Advantage by Hatten Schuyler Yoder (1997)
"Demand for ciphers. The demand of the Soviets to turn over all ciphers and codes irritated Admiral Ernest King, CNO, rather strongly/'5 He di- 35 Admiral ..."
6. An Introduction to the Elements of Algebra: Designed for the Use of Those by Leonhard Euler, John Farrar (1821)
"If the multiplier is terminated by ciphers, we may, according to the remark in article 31, neglect these also, provided we write an equal number «n the ..."
7. New University Arithmetic: Embracing the Science of Numbers, and Their by Charles Davies (1856)
"Annexing ciphers to a decimal fraction does not alter its value. ... If ciphers are prefixed to the numerator of a decimal fraction, the same number of ..."
Other Resources Relating to: Ciphers | {"url":"http://www.lexic.us/definition-of/ciphers","timestamp":"2014-04-20T15:53:12Z","content_type":null,"content_length":"33042","record_id":"<urn:uuid:7df946b0-cb1b-4ce1-bc90-422551ef630a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
TITLE: Tic-Tac-Math
SUBJECT: 6th or 7th grade general math.
TOPIC: The topic of the board is a game called 24. The students have to add, subtract, multiply, or divide four numbers to get the number twenty- four.
INTERACTIVITY: First, two students will pick a set of game pieces and get a worksheet out of the folder. Second, the student with the addition-sign pieces will pick a square and take one card
out of the pocket. Next, the student will do his/her work for solving the problem on the worksheet (in the square that represents the square he/she picked). If the student gets 24, then he/she will place
(Velcro) his/her piece on the square he/she picked. Then, the second student will choose a square and repeat the same process as player one. The game will continue until a player gets
three in a row or until there are no more squares left open.
DIRECTIONS FOR USE: The objective of the game is to be the first player to get three squares in a row. The three squares can be aligned across, up, down, or on a diagonal. The game is for two
players. Each player should pick a set of pieces. The player who chooses the addition game pieces will go first. Next, the player should pick a square and pick one of the cards in the pocket. Then,
the player's objective is to make the number 24 using the four numbers on the card. The player can add, subtract, multiply, and divide. The player has to use all four numbers, but use each number only once.
Example: If you have a card with the numbers 5, 6, 9, and 2, one solution is (6 ÷ 2) × 5 + 9 = 24.
If the player gets a correct solution, the student will place one of the game pieces on the square. If the player is incorrect, the player should place the card back in the square's pocket (the square
will be open for the next player to pick), and then it is the other player's turn. The game will continue until all the squares are used or when a player gets three in a row.
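For readers curious whether a given card is solvable at all, a small brute-force search over pairings, operators, and groupings settles it. This sketch is for the teacher's benefit and is not part of the bulletin board:

```python
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else None,  # skip division by zero
}

def solve_24(numbers, target=24, eps=1e-9):
    """Return an expression that makes `target` from the four numbers
    (each used exactly once), or None if no such expression exists."""
    def search(vals, exprs):
        if len(vals) == 1:
            return exprs[0] if abs(vals[0] - target) < eps else None
        # Pick any ordered pair of remaining values, combine, recurse.
        for i in range(len(vals)):
            for j in range(len(vals)):
                if i == j:
                    continue
                rest_v = [vals[k] for k in range(len(vals)) if k not in (i, j)]
                rest_e = [exprs[k] for k in range(len(vals)) if k not in (i, j)]
                for sym, fn in OPS.items():
                    out = fn(vals[i], vals[j])
                    if out is None:
                        continue
                    hit = search(rest_v + [out],
                                 rest_e + [f"({exprs[i]} {sym} {exprs[j]})"])
                    if hit:
                        return hit
        return None
    return search([float(n) for n in numbers], [str(n) for n in numbers])
```

For instance, `solve_24([5, 6, 9, 2])` finds a valid expression, while a card of four 1s returns None, so it should never be placed in a pocket.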
TIME: 5 minutes to 10 minutes
SPECIAL CONSTRUCTION: The only special construction technique used in the bulletin board is that there is Velcro on the pocket of each square, the back of each piece, and the background.
CREDIT: The bulletin board will be for extra credit. Any student who gets three in a row will get 3 extra credit points. In addition, every student who hands in the worksheet showing they played the game
will get 3 extra credit points.
Even Number in Lotto
An even number is an integer that is a multiple of two. Examples: 2, 30, 56. Even numbers are the opposite of odd numbers. A lotto number can be either even or odd. When you have a set of lotto numbers, it is usually the case that there is a mix of odd and even numbers.
How do I pick lotto numbers with the odd and even strategy in mind? To track the odd/even lotto number trend, use the Odd Even Bias Tracker Chart in Advantage Plus. | {"url":"https://www.smartluck.com/lotteryterms/even-number.htm","timestamp":"2014-04-18T03:02:18Z","content_type":null,"content_length":"4663","record_id":"<urn:uuid:17e24231-cd75-4259-ae99-abdf90442041>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Passing The Course
For the MSc, students must take 8 taught modules and submit a project. For the Postgraduate Diploma, students must take 8 taught modules only. The pass mark for taught modules and the project is
50%. The final weighted average is calculated as 2/3 times the mean of the 8 taught modules plus 1/3 times the project mark (all marks in %).
The normal requirements for a pass are 8 passes in the taught modules plus a pass in the project. However, 2 condoned passes (i.e. not less than 40%) of taught modules are permitted provided that the
final weighted average is not less than 50% and the project is passed.
There are four possible awards: Distinction, Merit, Pass, Fail.
An award of Distinction will be made if the average mark for the 8 taught modules is 70% or greater; and the mark for the dissertation is 70% or greater, and there are no marks below 50%, no condoned
marks, no resits, and all marks are based on first attempts.
An award of Merit will be made where the overall mark is 60% or greater, the mark for the dissertation is 65% or greater, there are no marks below 50%, no condoned marks, no resits, and all marks are
based on first attempts.
An award of Pass will be made where the overall mark is 50% or greater, the mark for the dissertation is 50% or greater, and 8 taught modules have been passed with no more than two condoned marks,
and with the maximum of one resit allowed per module.
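The award rules above can be expressed compactly in code. The sketch below is a simplification: it assumes all marks are first attempts and omits the resit conditions.

```python
def msc_result(module_marks, project_mark):
    """Classify an MSc outcome under the award rules above.
    Assumes all marks are first attempts with no resits."""
    assert len(module_marks) == 8, "the MSc requires 8 taught modules"
    taught_avg = sum(module_marks) / 8
    overall = (2 / 3) * taught_avg + (1 / 3) * project_mark
    condoned = [m for m in module_marks if 40 <= m < 50]  # condoned passes
    outright_fails = [m for m in module_marks if m < 40]
    # A pass needs the project passed, no module below 40%, at most two
    # condoned marks, and a final weighted average of at least 50%.
    if project_mark < 50 or outright_fails or len(condoned) > 2 or overall < 50:
        return "Fail"
    if taught_avg >= 70 and project_mark >= 70 and not condoned:
        return "Distinction"
    if overall >= 60 and project_mark >= 65 and not condoned:
        return "Merit"
    return "Pass"
```

For example, eight module marks of 75 with a project mark of 72 give a Distinction, while seven marks of 55 plus one condoned 45 and a project of 55 give a plain Pass.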
Page last modified on 27 jul 12 11:33 | {"url":"http://www.ucl.ac.uk/maths/prospective-students/msc-modelling/passing-the-course","timestamp":"2014-04-19T18:17:23Z","content_type":null,"content_length":"20718","record_id":"<urn:uuid:04a22485-c977-4483-a706-9e3c903979a3>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Castle Point, NJ Prealgebra Tutor
Find a Castle Point, NJ Prealgebra Tutor
...After post-docs at Massachusetts Institute of Technology and the Hebrew University of Jerusalem, I am in New York and ready to help you or your child learn math and science. I originally went
to school to get my Ph.D. to become a professor. Not sure I still want to be a professor, but I really miss teaching, which I did for years in graduate school.
10 Subjects: including prealgebra, physics, writing, algebra 2
...I hope that this approach will help your child gain mastery and confidence in math, and I look forward to enabling your child to reach his/her full math potential.I have taught Algebra 1/
Integrated Algebra to hundreds of students over the years, and have also prepared them to successfully take th...
8 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I have loved math since I was about nine so I am always trying to find new ways to solve problems. This helps a lot when I'm tutoring and the typical method doesn't work. Drawing pictures,
even though I am not a great artist, and using physical objects seems to be the method that works best with students who don't enjoy math.
9 Subjects: including prealgebra, algebra 1, algebra 2, GED
...I also have a range of coursework in psychology and anthropology. I have experience teaching art, math, literacy, and homework support. My teaching philosophy involves instilling a love of
learning by connecting the subjects to students' interests and making the subjects real for them.I have mo...
29 Subjects: including prealgebra, English, reading, writing
Hello, my name is Rebecca. I am 25 years old, and I am a graduate student at Columbia University. I have tutored students since college and previously volunteered for almost a year with Boston
Cares tutoring services assisting students preparing to take their Math portion of the GED exam.
8 Subjects: including prealgebra, reading, writing, elementary math
| {"url":"http://www.purplemath.com/castle_point_nj_prealgebra_tutors.php","timestamp":"2014-04-16T10:51:55Z","content_type":null,"content_length":"24506","record_id":"<urn:uuid:539b27ed-950e-4475-8488-84d84d0b7b69>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Method and system for approximating value functions for cooperative games - Patent # 7079985 - PatentGenius
Method and system for approximating value functions for cooperative games
(7 images)
Inventor: Feldman
Date Issued: July 18, 2006
Application: 11/262,285
Filed: October 28, 2005
Inventors: Feldman; Barry E. (Chicago, IL)
Primary Examiner: Raymond; Edward
Attorney Or
U.S. Class: 434/128; 702/185
Field Of Search: 702/185; 702/189; 702/181; 702/179; 702/187; 703/1; 703/2; 434/128; 434/247; 434/248; 434/249
International Class: G06F 19/00
U.S. Patent Documents: 5742738; 5826244; 5991741; 6009458; 6026383; 6047278; 6058385; 6078901; 6078906; 6236977; 6640204
Other References: Bring, J., "A geometric approach to compare variables in a regression model," The American Statistician, v. 50, n. 1, 1996, pp. 57-62. cited by examiner.
Brinson, G. P. and N. Fachler, "Measuring non-U.S. equity portfolio performance," Journal of Portfolio Management, Spring 1985, pp. 73-76. cited by examiner.
Carino, D. R., "Combining attribution effects over time," Journal of Portfolio Measurement, v. 3, n. 4, Summer 1999, pp. 5-14. cited by examiner.
Chevan, A. and M. Sutherland, "Hierarchical partitioning," The American Statistician, v. 45, n. 2, 1991, pp. 90-96. cited by examiner.
Fama, E. and K. French, "Common risk factors in the returns on stocks and bonds," Journal of Financial Economics, v. 33, n. 1., 1993, pp. 3-56. cit- ed by examiner.
Feldman, B., "The powerpoint," manuscript, 9th International Conference on Game Theory, 1998, Stony Brook, N.Y. cited by examiner.
Feldman, B., "The proportional value of a cooperative game," 1999, http://fmwww.bc.edu/RePEc/es2000/1140.pdf . cited by examiner.
Feldman, B., "A dual model of cooperative value," 2002, http://papers.ssrn.com/abstract=317284. cited by examiner.
Myerson, R. G., "Coalitions in cooperative games," Chapter 9 in Game Theory: Analysis of Conflict, Cambridge: Harvard University Press, 1992, pp. 417-482. cited by examiner.
Ortmann, K. M., "The proportional value of a positive cooperative game," Mathematical Methods of Operations Research, v. 51, 2000, pp. 235-248. cited by examiner.
Pesaran H. and Y. Shin, "Generalized impulse response analysis in linear multivariate models," Economics Letters, v. 58, 1998, pp. 17-29. cited by examiner.
Harville, D. A., "Decomposition of prediction error," Journal of the American Statistical Association, v. 80 n. 389, 1985, pp. 132-138. cited by examiner.
Kruskal, W., "Concepts of relative importance," The American Statistician, v. 41, n. 1, 1987, pp. 6-10. cited by examiner.
Lindeman, R. H., P. F. Merenda and R. Z. Gold, Introduction to Bivariate and Multivariate Analysis, Scott, Foresman, and Company, 1980, Glenview, Illinois, ISBN 0-673-15099-2, pp.
119-127. cited by examiner.
Ross, S. "The arbitrage theory of capital asset pricing," Journal of Economic Theory, v. 13, 1976, pp. 341-360. cited by examiner.
Ruiz, L. M., F. Valenciano and J.M. Zarzuelo, "The family of least square values for transferable utility games," Games and Economic Behavior, v. 24, 1998, pp. 109-130. cited by examiner.
Shapley, L. S., "Additive and Non-Additive Set Functions," Ph.D. Thesis, Princeton University, 1953. cited by examiner.
Sharpe, W. F., "Asset allocation: Management style and performance measurement," Journal of Portfolio Management, Winter 1992, pp. 7-19. cit- ed by examiner.
Sims, C., "Macroeconomics and reality," Econometrica v. 48, 1980, pp. 1-48. cited by examiner.
Vorob'ev, N. N. and A. N. Liapounov, "The proper Shapley value," in Game Theory and Applications IV, L. A. Petrosjan and V. V. Mazalov, eds., Comack, NY: Nova Science Publishers,
1998, pp. 155-159. cited by examiner.
Wilson, R. O. "Information, efficiency, and the core of an economy," Econometrica, v. 46, 1978, pp. 807-816. cited by examiner.
Young, H. P., ed., Cost Allocation: Methods, Principles, Applications, New York: North Holland, 1985. cited by examiner.
Abstract: A method and system for approximating value functions for cooperative games. The method and system include approximating value functions for large cooperative games. The method and system may be applicable to other types of value function problems such as those found in engineering, finance, and other disciplines.
Claim: I claim:
1. A method for approximating a value function for players in a cooperative game based on a large number of players representing an allocation problem, comprising: selecting a measure of precision; determining a desired precision for approximated player values; selecting a collection of orderings from a set of possible permutations of player orderings; computing at least one intermediate value function based on coalitional worths generated for each selected ordering; computing, periodically, a precision of approximations of values for players to determine if more player orderings should be generated to obtain a more precise estimate of values for players; computing a final value approximation for determining allocations to players when a desired degree of precision is reached or a selected computational limit is exceeded; and outputting said final value approximation to a display or a computer module for further processing.
2. The method of claim 1 further comprising a computer readable medium having stored therein instructions for causing a processor to execute the steps of the method.
3. The method of claim 1 wherein: a collection of player orderings is randomly generated with all players having equal probability of appearing at any position in an ordering; an intermediate value function generated based on each player ordering is the marginal contribution of each player; at least one player's average marginal contribution is determined; or allocations to players are based on average marginal contributions.
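The sampling scheme of claims 1 and 3 amounts to Monte Carlo estimation of the Shapley value. A minimal sketch follows; the function names and the three-player example game are illustrative, not taken from the patent:

```python
import random

def estimate_shapley(players, worth, num_orderings, seed=0):
    """Average marginal contributions over randomly sampled player
    orderings (claims 1 and 3): every player is equally likely to
    appear at any position in an ordering."""
    rng = random.Random(seed)
    totals = {p: 0.0 for p in players}
    for _ in range(num_orderings):
        order = list(players)
        rng.shuffle(order)
        prev, coalition = worth(frozenset()), set()
        for p in order:
            coalition.add(p)
            w = worth(frozenset(coalition))
            totals[p] += w - prev          # marginal contribution of p
            prev = w
    return {p: t / num_orderings for p, t in totals.items()}

# Illustrative three-player game with a joint effect between players 1 and 2.
worths = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1,
          frozenset({3}): 2, frozenset({1, 2}): 4, frozenset({1, 3}): 3,
          frozenset({2, 3}): 3, frozenset({1, 2, 3}): 6}
est = estimate_shapley([1, 2, 3], worths.__getitem__, 2000)
```

Because the marginal contributions within each ordering telescope to v(N) minus the worth of the empty coalition, the estimates always sum exactly to the worth of the grand coalition; only their division among players carries sampling error, which the precision check of claim 6 addresses.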
4. The method of claim 1 wherein: an approximation of a weighted value is computed; a collection of player orderings is randomly generated with a probability of a player appearing at a particular point in an ordering proportional to a ratio of its weight to a sum of weights of players not already ordered; an intermediate value function generated based on each player ordering is a marginal contribution of each player; at least one player's average marginal contribution is determined; or at least one player's value is based on its average marginal contribution.
5. The method of claim 1 wherein: an approximation of a powerpoint is computed.
6. The method of claim 1 wherein: a standard error is a selected measure of precision; and squared values of intermediate value functions used to compute estimated values are also computed and the estimated values squared or their sums are saved, or sums of squared intermediate value functions and sums of intermediate value functions are used to compute sample standard errors of estimated values for players.
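The running-sums bookkeeping described in claim 6 can be sketched as follows; this is a standard one-pass sample-standard-error computation, not a detail specific to the patent's implementation:

```python
import math

def running_standard_error(samples):
    """Mean and standard error of the mean from running sums of values
    and squared values, so individual samples need not be stored."""
    n, s, s2 = 0, 0.0, 0.0
    for x in samples:
        n += 1
        s += x          # sum of intermediate values
        s2 += x * x     # sum of squared intermediate values
    mean = s / n
    variance = (s2 - n * mean * mean) / (n - 1)   # sample variance
    return mean, math.sqrt(variance / n)

mean, se = running_standard_error([1.0, 3.0, 1.0, 3.0])
```

In the method of claim 1, sampling of orderings would continue until this standard error falls below the desired precision or a computational limit is reached.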
7. A method for approximating the proportional value for players in a cooperative game based on a large number of players comprising: selecting a collection of orderings from a set of
possible permutations of player orderings; computingweighted marginal contributions for at least one ordering of players and one player in that ordering; determining allocations to
players in the cooperative game; and outputting said allocations to a display or a computer module for further processing.
8. The method of claim 7 further comprising a computer readable medium having stored therein instructions for causing a processor to execute the steps of the method.
9. The method of claim 7 wherein the computing step includes computing weighted marginal contributions WM_i^r(v) for each player i and ordering r in the cooperative game v with: WM_i^r(v) = M_i^r(v) / OWP(r, v), wherein the weighted marginal contributions WM_i^r(v) are intermediate value functions used to approximate a proportional value, M_i^r(v) are marginal contributions for each ordering r, and OWP(r, v) is an Ordered Worth Product for an ordering r in the cooperative game v.
10. The method of claim 7 wherein the determining step includes: determining a summation of weighted marginal contributions SWM_i(v) for a player i in a cooperative game v with: SWM_i(v) = Σ_{r ∈ R*(N)} WM_i^r(v), wherein the summation is over all orderings r in a selected collection of orderings R*(N) and WM_i^r(v) are weighted marginal contributions for player i and ordering r.
11. The method of claim 7 wherein the determining step includes: determining an estimated proportional value EstPV_i of a player i in a cooperative game v including N players with: EstPV_i(v) = v(N) · SWM_i(v) / Σ_{j ∈ N} SWM_j(v), wherein EstPV_i is a proportional share of the worth of the grand coalition according to the weighted marginal contributions SWM_i(v) and the summation of weighted marginal contributions SWM_j(v) over players j.
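Claims 9 through 11 can be read together as the following sketch. The Ordered Worth Product is taken here to be the product of the worths of the successive coalitions formed as an ordering builds up; the patent's exact formula (EQU00036) is not reproduced in this text, so that reading is an assumption, as is the example game:

```python
from itertools import permutations

def weighted_marginal_sums(players, worth, orderings):
    """SWM_i(v): sum over the given orderings r of
    WM_i^r(v) = M_i^r(v) / OWP(r, v) (claims 9 and 10)."""
    swm = {p: 0.0 for p in players}
    for order in orderings:
        # Assumed Ordered Worth Product: product of the worths of the
        # successive coalitions formed by the ordering.
        owp, coalition = 1.0, set()
        for p in order:
            coalition.add(p)
            owp *= worth(frozenset(coalition))
        prev, coalition = worth(frozenset()), set()
        for p in order:
            coalition.add(p)
            w = worth(frozenset(coalition))
            swm[p] += (w - prev) / owp     # weighted marginal contribution
            prev = w
    return swm

def estimated_proportional_value(players, worth, orderings):
    """EstPV_i = v(N) * SWM_i(v) / sum_j SWM_j(v) (claim 11)."""
    swm = weighted_marginal_sums(players, worth, list(orderings))
    grand, total = worth(frozenset(players)), sum(swm.values())
    return {p: grand * swm[p] / total for p in players}

game = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({1, 2}): 4}
est = estimated_proportional_value([1, 2], game.__getitem__, permutations([1, 2]))
```

In this symmetric two-player example both players receive half of v(N); in general, players split the worth of the grand coalition in proportion to their summed weighted marginal contributions, so the estimates always sum to v(N).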
12. The method of claim 11 wherein the control game and allocation game have a same set of players and one collection of player orderings is used for both the control game and
allocation game.
Description: FIELD OF THE INVENTION
The present invention relates to the fields of cooperative game theory and statistical analysis. More specifically, it relates to a method and system for using cooperative game theory to resolve joint effects in statistical analysis and other cooperative allocation problems.
BACKGROUND OF THE INVENTION
Many statistical procedures estimate how an outcome is affected by factors that may influence it. For example, a multivariate statistical model may represent variations of a dependent variable as a function of a set of independent variables. A limitation of these procedures is that they may not be able to completely resolve joint effects among two or more independent variables.
A "joint effect" is an effect that is the joint result of two or more factors. "Statistical joint effects" are those joint effects remaining after the application of statistical methods. "Cooperative resolution" is the application of cooperative game theory to resolve statistical joint effects.
A "performance measure" is a statistic derived from a statistical model that describes some relevant aspect of that model such as its quality or the properties of one of its variables. A performance measure may be related to a general consideration such as assessing the accuracy of a statistical model's predictions. Cooperative resolution can completely attribute the statistical model's performance, as reflected in a performance measure, to an underlying source such as the statistical model's independent variables.
Most performance measures fall into one of two broad categories. The first category of performance measure gauges the overall "explanatory power" of a model. The explanatory power of a model is closely related to its accuracy. A typical measure of explanatory power is the percentage of variance of a dependent variable explained by a multivariate statistical model.
The second category of performance measure gauges a "total effect." Measures of total effect address the magnitude and direction of effects. An example of such a total effect measure is a predicted value of a dependent variable in a multivariate statistical model.
Some of the limits of the prior art with respect to the attribution of explanatory power and total effects may be illustrated with reference to a standard multivariate statistical model. A multivariate statistical model is commonly used to determine a mathematical relationship between its dependent and independent variables. One common measure of explanatory power is a model's "R^2" coefficient. This coefficient takes on values between zero percent and 100% in linear statistical models, a common statistical model. The R^2 of a model is the percentage of the variance of a dependent variable, i.e., a measure of its variation, explained by the model. The larger an R^2 value, the better the model describes a dependent variable.
The explanatory power of a multivariate statistical model is an example of a statistical joint effect. As is known in the art, in studies based on a single independent variable, it is common to report the percentage of variance explained by that variable. An example from the field of financial economics is E. Fama and K. French, "Common risk factors in the returns on stocks and bonds," Journal of Financial Economics, v. 33, n. 1, 1993, pp. 3-56. In multivariate statistical models, however, it may be difficult or impossible, relying only on the existing statistical arts, to isolate a total contribution of each independent variable.
The total effect of a multivariate statistical model in its estimation of a dependent variable is reflected in the estimated coefficients for its independent variables. If there are no interaction variables (independent variables that represent the joint variation of two or more other independent variables), then, under typical assumptions, it is possible to decompose this total effect into separate effects of the independent variables. However, in the presence of interaction variables there is no accepted method in the art for resolving the effects of the interaction variables to their component independent variables.
One principal accepted method to determine the explanatory power of independent variables in a multivariate statistical model is by assessment of their "statistical significance." An independent variable is statistically significant if a "significance test" determines that its true value is different than zero. As is known in the art, a significance test has a "confidence level." If a variable is statistically significant at the 95% confidence level, there is a 95% chance that its true value is not zero. An independent variable is not considered to have a "significant effect" on the dependent variable unless it is found to be statistically significant. Independent variables may be meaningfully ranked by their statistical significance. However, this ranking may provide limited insight into their relative contributions to explained variance.
Cooperative game theory can be used to resolve statistical joint effects problems. As is known in the art, "game theory" is a mathematical approach to the study of strategic interaction among people. Participants in these games are called "players." Cooperative game theory allows players to make contracts and has been used to solve problems of bargaining over the allocation of joint costs and benefits. A "coalition" is a group of players that have signed a binding cooperation agreement. A coalition may also comprise a single player.
A cooperative game is defined by assigning a "worth," i.e., a number, to each coalition in the game. The worth of a coalition describes how much it is capable of achieving if its players agree to act together. Joint effects in a cooperative game are reflected in the worths of coalitions in the game. In a cooperative game without joint effects, the worth of any coalition would be the sum of the worths of the individual players in the coalition.
There are many methods available to determine how the benefits of cooperation among all players should be distributed among the players. (Further information on cooperative game theory can be found in Chapter 9 of R. G. Myerson, Game Theory: Analysis of Conflict, Cambridge: Harvard University Press, 1992, pp. 417-482, which is incorporated by reference.)
Cooperative game theory has long been proposed as a method to allocate joint costs or benefits among a group of players. In most theoretical work the actual joint costs or benefits are of an abstract nature. The practical aspects of using cooperative game theory to allocate joint costs have received somewhat more attention. See, for example, H. P. Young, ed., Cost Allocation: Methods, Principles, Applications, New York: North Holland, 1985.
Techniques from the prior art typically cannot be used to satisfactorily resolve statistical joint effects in cooperative games. Thus, it is desirable to use cooperative game theory
to resolve statistical joint effects problems.
There have been attempts in the prior art to decompose joint explanatory power. For example, R. H. Lindeman, P. F. Merenda, and R. Z. Gold, in Introduction to Bivariate and Multivariate Analysis, 1980, Scott, Foresman, and Company, Glenview, Illinois, ISBN 0-673-15099-2, pp. 119-127, describe a method of variance decomposition based on averaging the marginal contribution of a variable to R^2 over all possible orderings of variables. The authors discuss a method that generates the Shapley value of a variable in a statistical cooperative game using R^2 as a measure of explanatory power. W. Kruskal, in "Concepts of relative importance," The American Statistician, 1987, v. 41, n. 1, pp. 6-10, and A. Chevan and M. Sutherland, in "Hierarchical partitioning," The American Statistician, 1991, v. 45, n. 2, pp. 90-96, describe related methods based on the marginal contributions over all possible orderings of variables.
Also, it is known in the art that the explained variance in a regression can be decomposed into linear components. The variance assigned to an independent variable i in this decomposition is the sum over all variables j of the expression β_i σ_ij β_j, where β_j is the regression coefficient associated with a variable j and σ_ij is the covariance between independent variables i and j. This decomposition corresponds to the Shapley value of a statistical cooperative game using explained variance as a performance measure and using the coefficients of the complete statistical model to determine the worths of all coalitions.
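That linear decomposition can be checked numerically. A sketch on synthetic data follows; the data-generating parameters are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two correlated regressors and a linear outcome with noise.
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
y = X @ np.array([2.0, 1.0]) + rng.normal(size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef = np.linalg.lstsq(A, y, rcond=None)[0]
b = coef[1:]                       # estimated slope coefficients

# Variance assigned to variable i: sum over j of b_i * sigma_ij * b_j.
sigma = np.cov(X, rowvar=False)    # covariance of the independent variables
shares = b * (sigma @ b)

# The shares add up to the variance of the fitted values (explained variance).
explained = np.var(A @ coef, ddof=1)
```

The identity holds because the variance of the fitted values is exactly the quadratic form b'Σb in the sample covariance of the regressors; each player's share in the corresponding statistical cooperative game is one row of that quadratic form.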
Statistical cooperative games based on total effects may have coalitions with negative worths. It may be desirable to use proportional allocation principles in resolving these joint effects; however, the proportional value cannot be applied to cooperative games with negative worths. It is desirable to demonstrate how proportional allocation effects determined in a first cooperative control game may be applied in a second cooperative allocation game that has negative coalitional worths through the use of an integrated proportional control value of a controlled allocation game.
Statistical cooperative games may have large numbers of players. The calculation of value functions for large games can use large quantities of computer time. M. Conklin and S. Lipovetsky, in "Modern marketing research combinatorial computations: Shapley value versus TURF tools," 1998 S-Plus User Conference, disclose a method for approximating the Shapley and weighted Shapley values. It is desirable to approximate the powerpoint, the proportional value, and integrated proportional control values. It is also desirable to show how the precision of value approximations may be ascertained.
SUMMARY OF THE INVENTION
In accordance with preferred embodiments of the present invention, some of the problems associated with resolving joint effects in statistical analysis are overcome. A method and system for approximating value functions for cooperative games are presented.
One aspect of the present invention includes a method for approximating value functions for large cooperative games.
The foregoing and other features and advantages of preferred embodiments of the present invention will be more readily apparent from the following detailed description. The detailed
description proceeds with references to accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the present inventions are described with reference to the following drawings, wherein:
FIG. 1 is a block diagram illustrating a cooperative game resolution computing system;

FIG. 2 is a flow diagram illustrating a method for constructing a statistical cooperative game;

FIG. 3 is a flow diagram illustrating construction of an access relationship between a statistical cooperative game and a multivariate statistical model;
FIG. 4 is a flow diagram illustrating determination of a worth of a coalition in a statistical cooperative game;
FIG. 5 is a flow diagram illustrating a method for allocating a worth of a coalition in a cooperative game on a multiplicative basis;
FIG. 6 is a flow diagram illustrating a method for constructing a controlled allocation game; and
FIG. 7 is a flow diagram illustrating a method for approximating value functions of large cooperative games.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Exemplary Cooperative Resolution Computing System
FIG. 1 illustrates a cooperative resolution computing system 10 for embodiments of the present invention. The cooperative game resolution system 10 includes a computer 12 with a computer display 14. In another embodiment of the present invention, the computer 12 may be replaced with a personal digital assistant ("PDA"), a laptop computer, a mobile computer, an Internet appliance or other similar mobile or hand-held electronic device. The computer 12 is associated with one or more databases 16 (one of which is illustrated) used to store data for the cooperative resolution system 10. The database 16 includes a memory system within the computer 12 or secondary storage associated with computer 12 such as a hard disk, floppy disk, optical disk, or other non-volatile mass storage devices. The computer 12 can also be in communications with a computer network 18 such as the Internet, an intranet, a Local Area Network ("LAN") or other computer network. Functionality of the cooperative game system 10 can also be distributed over plural computers 12 via the computer network 18.
An operating environment for the cooperative game system 10 includes a processing system with at least one high speed Central Processing Unit ("CPU") or other processor. In accordance with the practices of persons skilled in the art of computer programming, the present invention is described below with reference to acts and symbolic representations of operations that are performed by the processing system, unless indicated otherwise. Such acts and operations are referred to as being "computer-executed," "CPU executed," or "processor executed."
It will be appreciated that the acts and symbolically represented operations include the manipulation of electrical signals by the CPU. The electrical system represents data bits that cause a resulting transformation or reduction of the electrical signal representation, and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to the data bits.
The data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, computer memory (e.g., RAM or ROM) and any other volatile or non-volatile mass storage system readable by the computer. The data bits on a computer readable medium are computer readable data. The computer readable medium includes cooperating or interconnected computer readable media, which exist exclusively on the processing system or distributed among multiple interconnected processing systems that may be local or remote to the processing system.
Cooperative Games and the Representation of Statistical Joint Effects
FIG. 2 is a flow diagram illustrating a Method 20 for constructing a statistical cooperative game. At Step 22, a set of players for a statistical cooperative game is identified. At Step 24, an access relationship is identified between coalitions of the statistical cooperative game and elements of a multivariate statistical model. A selected subset of the set of the identified players is a coalition. At Step 26, a worth is determined for selected coalitions in the statistical cooperative game based on elements of the multivariate statistical model accessible by a coalition.
Method 20 is illustrated with exemplary embodiments of the present invention. However, the present invention is not limited to such embodiments and other embodiments can also be used
to practice the invention.
At Step 22, a set of players is identified for a statistical cooperative game. A "statistical cooperative game" defined on a set of "players" assigns a "worth" to subsets of the set of players. A selected subset of available players is a "coalition." A coalition is a single player or plural players that have made a binding cooperation agreement to act together. An empty set with no available players is also formally a coalition. At Step 24, an access relationship is identified between coalitions of the statistical cooperative game and elements of a multivariate statistical model. The "access relationship" comprises a set of rules determining, for coalitions in the identified set of coalitions, any elements that are accessible by the coalition and how accessible elements may be used by a coalition in the multivariate statistical model. At Step 26, a worth is determined for coalitions selected in the statistical cooperative game based on elements of the multivariate statistical model accessible by a coalition. A "worth" of a coalition is what these players can achieve through mutual cooperation. In the type of statistical cooperative game used for preferred embodiments of the present invention, the worth of a coalition is a value or a number. However, the present invention is not limited to such an embodiment and other types of values or worths can also be used. By convention, the worth of an empty set is defined to be zero.
In another embodiment of the present invention, the steps of Method 20 are applied in a recursive manner to allocate a value allocated to a player accessing a plurality of variables in a first statistical cooperative game on the basis of a second cooperative game embodying a second set of players.
A set of all available players, also known as a "grand coalition," is denoted by "N," and N={1, 2, . . . , n}, where the braces "{ }" identify enclosed elements as members of a set and "n" is a number of players in a game. Numbers are used to identify players only for convenience. A cooperative game is typically represented by a lower case letter, typically "v." A coalition is typically represented by "S," thus S ⊆ N; that is, S is a subset of N. A worth for a coalition S is identified as "v(S)," and "v(S)=5" states that the worth of coalition S in cooperative game v is 5. To simplify notation hereinafter, the coalition {1,2} may be written as "12," and, thus v({1,2})=v(12).
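In code, a game in coalitional form is naturally a map from coalitions to worths. A minimal sketch; the worths are illustrative, chosen so that v({1,2})=5 matches the example above:

```python
# A three-player game in coalitional form, keyed by frozenset.
v = {
    frozenset(): 0,              # the empty coalition has worth zero by convention
    frozenset({1}): 1,
    frozenset({2}): 2,
    frozenset({3}): 1,
    frozenset({1, 2}): 5,        # v(12) = 5: a joint effect, since 5 > 1 + 2
    frozenset({1, 3}): 2,
    frozenset({2, 3}): 3,
    frozenset({1, 2, 3}): 7,     # worth of the grand coalition N
}

def worth(S):
    return v[frozenset(S)]

def has_joint_effect(S, T):
    """True when two disjoint coalitions are worth more together than apart."""
    return worth(set(S) | set(T)) > worth(S) + worth(T)
```

Here players 1 and 2 exhibit a joint effect (5 > 1 + 2) while players 1 and 3 do not (2 = 1 + 1), which is exactly the kind of structure cooperative resolution is meant to apportion.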
Typically, as described above, the worth of a coalition is independent of the possible organization of other players in the game that are not members of the coalition. This is known in the art as a cooperative game in "coalitional form." There is also a cooperative game in "partition function form" in which the worth of a coalition depends on the "coalitional structure" formed by all players. This is a partition of the set of players that contains the coalition. In this case the worth of a coalition may be referred to as v(S,Q) where Q is a partition containing S.
The term "value" has distinct meanings in the different arts related to the present invention. In a general context, value has the common meaning of the benefit, importance, or worthiness of an object. In the statistical arts, a variable, or an observation of a variable, may have a value. This refers to a number assigned to the variable or observation. In cooperative game theory, value has two specialized meanings. First, it refers to a type of function that may be applied to a game, called a "value function." Second, a value function assigns a value to players in a game. This value may be understood as an expected payoff to a player as a consequence of participation in the game. However, the present invention is not limited to these meanings of value and other meanings of value can also be used.
Access Relationships
FIG. 3 is a flow diagram illustrating a Method 28 for constructing an access relationship between a statistical cooperative game and a multivariate statistical model. At Step 30, one or more elements of the multivariate statistical model are identified. At Step 32, a set of coalitions is identified in the statistical cooperative game. At Step 34, an access relationship is specified. The access relationship comprises a set of rules determining, for each coalition in the identified set of coalitions, any elements that are accessible by the coalition and how accessible elements may be used by the coalition.
Method 28 is illustrated with exemplary embodiments of the present invention. However, the present invention is not limited to such embodiments and other embodiments can also be used to practice the invention.
In one illustrative embodiment, at Step 30, one or more elements of the multivariate statistical model are identified. The multivariate statistical model may include, for example, but is not limited to, an ordinary least squares model, a VAR time series model, an analysis of categorical effects model, an analysis of changes in proportions model, a covariance matrix, a capital asset pricing model, an arbitrage pricing theory model, an options pricing model, a derivatives pricing model, a Sharpe style analysis model, a macroeconomic model, a price forecasting model, a sales forecasting model, or a basic or generalized Brinson and Fachler manager attribution model, or other models.
In preferred embodiments of this invention, the elements identified at Step 30 are "independent variables" of an analysis. Such independent variables include information whose statistical joint effects or explanatory power is to be allocated among the players of the cooperative game. However, in certain types of multivariate statistical models, other elements may be of interest. For example, in time series analyses involving vector autoregression (VAR), all variables may be endogenous to the model, and hence, not independent. Further, it may be desirable to identify different "lagged values" of a variable as different elements of the model. In regression with instrumental variables (IV), and when using the generalized method of moments (GMM), it may be desirable to include the instruments as elements of the model.
At Step 32, a set of coalitions in the statistical cooperative game is identified. The choice of coalitions to be identified is guided by a number of factors. One primary factor regards the number of players in the cooperative game. Cooperative resolution will resolve all joint effects between the selected players. Players may be identified with individual elements of the multivariate statistical model, they may have access to multiple elements, or more complex patterns may be desired. Once a set of players is determined, a set of allowable coalitions of players may be restricted. This may be desirable when the allocation procedure to be used does not require the worths of all coalitions in the cooperative game.

For example, application of the Nash Bargaining Solution requires only the worths of individual players and the grand coalition (see Equation 19), as known to those skilled in the art. Some solution concepts may only require coalitions up to a certain number of players. In one preferred embodiment of the present invention, the set of coalitions identified will be a set of all possible coalitions of players. In another preferred embodiment of the present invention, the set of coalitions will be a set of less than all possible coalitions of players. At least two players are identified in order for nontrivial cooperative resolution to take place. These players are abstract entities that may access variables in the multivariate statistical model. It is also possible that these players will additionally represent real entities.
At Step 34, an "access relationship" is specified. The access relationship comprises a set of rules determining, for coalitions in the identified set of coalitions, any elements that are accessible by the coalition and how accessible elements may be used by the coalition. The access relationship is determined between coalitions of the cooperative game and the elements of the multivariate statistical model. The precise meaning of an access relationship will depend on a desired application. In a preferred embodiment of the current invention, a coalition has access to a variable if the coalition can use the variable in a statistical procedure. An access relationship may specify restrictions on the use of a variable. For example, access to an independent variable may only allow it to be directly entered into a statistical procedure. A variable transformation or interaction term may then be considered to be an additional independent variable.
A coalition has "primary access" to a variable if no coalition not including, as a subset, the coalition with primary access can access the variable. A coalition may consist of a single player. It is possible that no coalition has primary access to a variable. However, at most one coalition can have primary access.
An access relationship may be explicitly defined, as, for example, when choices among alternatives are made through a graphical user interface (GUI); it may be determined by logic embedded in hardware or software implementing the access relationship; or it may be created implicitly or by default in the implementation of Method 28.
A one-to-one transferable access relationship between independent variables in the multivariate statistical model and players in the statistical cooperative game is the primary and default access relationship. In this case each player has primary access to an independent variable, there is no independent variable not assigned a player with primary access, and the independent variables accessible by any selected coalition are exactly those whose primary access players are members of the selected coalition. The one-to-one transferable relationship between players and independent variables allows statistical joint effects to be apportioned between all independent variables.
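The one-to-one transferable access relationship is easy to make concrete. A sketch, with illustrative variable names not drawn from the patent:

```python
# Each player has primary access to exactly one independent variable.
primary_access = {1: "exercise", 2: "diet", 3: "age"}

def accessible_variables(coalition):
    """Under the one-to-one transferable access relationship, a coalition
    can use exactly the variables of its member players."""
    return {primary_access[p] for p in coalition}

vars_12 = accessible_variables({1, 2})
```

A coalition's worth could then be defined as, for example, the R^2 of a regression run using only accessible_variables(coalition) as regressors, which is the construction Steps 24 and 26 of Method 20 describe.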
There are many alternative access relationships that might also be used. The choice of a proper form of the access relationship is based on the understanding of the structural or theoretical relationships between the independent variables and their function in determining a worth of a coalition.
A common variation on the one-to-one transferable access relationship arises from understanding of the role of an "intercept term" in a multivariate statistical model to be that of a normalizing factor. An intercept term is represented by constructing a constant independent variable, typically a vector of ones. The regression coefficient for this variable is the intercept term. If an intercept term represents no intrinsic information but is necessary to avoid biased estimates of the other coefficients, it is a normalizing factor. In such a situation, the constant vector should be accessible by every coalition in the game. The resulting interpretation is that any benefit from this variable is distributed among all players of the game (and the other independent variables).
In other situations, however, it might be considered that the value of an intercept term contributed information, and, thus that it should be treated like other independent variables.
Thus, in many statistical models, the null hypothesis is thatthe intercept term is zero. Deviation of the intercept term from zero is then indicative of the action of some factor such
as managerial-ability or a health effect.
Another frequent device used in statistical procedures is an "interaction variable" that reflects the joint presence of two or more independent variables. For example, an exercise/diet interaction variable could have the value "one" whenever the patient both engaged in vigorous exercise and ate a healthy diet, and the value "zero" otherwise. A single player could be assigned primary access to this interaction variable. However, it will often be advantageous to give primary access to an interaction variable to the minimal coalition of players with access to all component variables. By default, an access relationship does not allow a coalition to create an interaction variable based on a group of independent variables simply because it can access those variables. However, this ability could be specified in a particular access relationship.
In the example described above, all coalitions accessing both the exercise and diet variables could also access the interaction variable; but a coalition that could access only one of these variables, or neither, could not access the interaction variable. The cooperative resolution process will then divide the explanatory power of the interaction term between the interacting variables. Giving a single player primary access to the interaction term, on the other hand, would make it possible to estimate the importance of the interaction effect itself.
Another variation on a one-to-one correspondence between players and independent variables that will be considered here is the case of a number of binary variables accessible by a single player. This may be desirable when all the binary variables are related to a similar factor. For example, they might correspond to different age levels in a study population. The effect of grouping them together would be to determine the overall importance of age. If these binary variables are, instead, accessible by separate players, cooperative resolution would determine the importance of each age interval separately.
There are also lagged realizations of an independent variable. For example, consumption at time t, C_t, might be modeled as a function of variables including current and lagged income, I_t and I_{t-1}. The influences of the current and lagged values of I could be grouped together or analyzed separately. In the latter case, they would be accessible by separate players.
A general rule can be defined that an access relationship will ordinarily satisfy. If the coalition S is a subset of a coalition T, then all independent variables collectively accessible by S are accessible by T as well. If this requirement is not met, the resulting game may not have a logical interpretation. The notation A(S) refers to the elements collectively accessible by the coalition S. Equation 1 represents the general rule: if S ⊆ T, then A(S) ⊆ A(T). (1) Exceptions to this rule are within the scope of the present invention; however, it is contemplated that they will be rare.
In games in partition function form, it is possible that an access relationship depends on the complete coalitional structure present in the game. Thus, the independent variables accessible by a coalition typically may not be determined without reference to a complete coalitional structure. In this case the independent variables accessible by a coalition may be referenced as A(S,Q). A restatement of Equation 1 extending the general rule to the partition function game is: if Q={S, Q_1, . . . , Q_k} and Q*={T, Q_1*, . . . , Q_k*}, with S ⊆ T and Q_i* ⊆ Q_i for all i=1, . . . , k, then A(S,Q) ⊆ A(T,Q*).
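The access-relationship machinery above can be sketched in a few lines. This is an illustrative sketch only: the names (`players`, `primary_access`, `accessible`) and the exercise/diet interaction rule are assumptions modeled on the examples in the text, not an implementation from the patent.

```python
# Sketch of a one-to-one transferable access relationship plus one
# interaction variable accessible only to the minimal coalition holding
# both component variables (the exercise/diet example above).
from itertools import chain, combinations

players = ["exercise", "diet"]
primary_access = {"exercise": {"x_exercise"}, "diet": {"x_diet"}}
interaction = {frozenset(["exercise", "diet"]): {"x_exercise_x_diet"}}

def accessible(coalition):
    """A(S): independent variables collectively accessible by coalition S."""
    vars_ = set()
    for p in coalition:
        vars_ |= primary_access[p]
    for members, extra in interaction.items():
        if members <= frozenset(coalition):
            vars_ |= extra
    return vars_

def all_coalitions(ps):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(ps, r) for r in range(len(ps) + 1))]

# Check the general rule (Equation 1): S subset of T implies A(S) subset of A(T).
for S in all_coalitions(players):
    for T in all_coalitions(players):
        if S <= T:
            assert accessible(S) <= accessible(T)
```

Because the interaction variable is attached to the minimal coalition rather than a single player, its explanatory power will later be divided between the interacting variables by the cooperative resolution process.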
In another embodiment of the present invention, Method 28 can be used at Step 24 of Method 20. However, the present invention is not limited to such an embodiment and Method 28 is also used as a stand alone method, independently from Method 20, for determining an access relationship.
Determining the Worth of a Coalition in a Statistical Cooperative Game
FIG. 4 is a flow diagram illustrating a Method 36 for determining a worth for selected coalitions in a statistical cooperative game. At Step 38, a performance measure for a multivariate statistical model is selected. At Step 40, a performance measure is computed based on elements of a multivariate statistical model accessible by a coalition, for a set of selected coalitions. At Step 42, a worth of each coalition from the set of selected coalitions in the statistical cooperative game is determined based on the computed performance measure for that coalition.
Method 36 is illustrated with exemplary embodiments of the present invention. However, the present invention is not limited to such embodiments and other embodiments can also be used
to practice the invention.
The type of game constructed may be either in coalitional, partition function, or other form. In partition function games, the worth of a coalition may also be influenced by the independent variables accessible by other coalitions in the coalition structure.
This approach is very different from traditional methods of constructing cooperative games. Information that could be represented as independent variables might be used in the determination of the worth of a coalition in the prior art; however, the worth of a coalition would be determined by values of this variable that are particular to it. For example, in a cost allocation game used to allocate utility costs, information regarding electric usage might be an input to determining the worth of a coalition. However, the relevant information would be the electric usage of members of the coalition. In the present invention there need not be direct association between independent variables and coalitions except those determined by an access relationship.
It is, however, also possible that other factors besides an access relationship enter into the determination of the worth of a coalition.
At Step 38, a performance measure of a multivariate statistical model is selected. There are a great many possible performance measures that can be selected. One class of performance measure considers the overall explanatory power of the entire model. An example of this type of measure is an R² coefficient. As a result of this type of analysis it might be concluded that "independent variable A explains 25% of the variance of dependent variable B." Another class of performance measure is based on a dependent variable and will typically result in conclusions such as "variable A adds three years to the average patient's life expectancy." The resolution of statistical joint effects on a dependent variable may be studied on the level of the model itself or on the level of the individual observations that comprise the model. Other examples of performance measures include, but are not limited to, an unadjusted R² statistic, an R²* statistic (defined below), a predicted value of a dependent variable, a value of a log likelihood function, a variance of a forecast observation, or an out-of-sample mean square error.
At Step 40, a performance measure is computed for selected coalitions based on the elements of the multivariate statistical model accessible by a coalition. Exemplary methods for computing several performance measures are described assuming that ordinary least squares (OLS) is the selected multivariate statistical modeling procedure and that the independent variables of a model are the elements on which an access relationship is based. However, other assumptions can also be used.
For example, at Step 40, let y=(y(1), y(2), . . . , y(t)) be a vector that represents a sequence of t observations of a dependent variable. Similarly, let X be a (t×m) matrix comprising a set of m vectors of t observations each, x_i=(x_i(1), x_i(2), . . . , x_i(t)), that represent sequences of t observations of independent variables, X=(x_1, x_2, . . . , x_m) with X_ij=x_j(i). The linear regression of y onto X yields an m-vector of coefficients β=(β(1), β(2), . . . , β(m)). This regression may be computed through application of the formula illustrated in Equation 2: β=(X'X)⁻¹X'y, (2) where X' is the transpose of X, the matrix inverse of a square matrix X is written X⁻¹, and multiplication is by matrix multiplication rules.
The use of R² as a performance measure for the study of explanatory power proceeds as follows. An R² statistic is calculated. An error vector is illustrated in Equation 3: ε=y−Xβ, (3) where ε is the difference between the estimated and true values of the dependent variable. A sum of squared errors (SSE) of the regression can then be written as SSE=ε'ε. The total sum of squares of the regression (SST) can be written SST=y'y−tȳ², where ȳ is the average value of y. The R² statistic of the regression may then be calculated as is illustrated in Equation 4: R²=1−SSE/SST. (4) When the performance contribution of an intercept term is to be studied, it may be desired to use a revised definition of R², an R²* statistic calculated by the formula in Equation 5: R²*=1−SSE/SST*, (5) where SST*=y'y.
A performance measure for a coalition S may also be determined as follows. For any coalition S, let X_S represent the matrix composed of the vectors x_i for all independent variables i contained in the set A(S). Also, let β_S be the vector of coefficients associated with the variables in A(S). Compute β_S=(X_S'X_S)⁻¹X_S'y and ε_S=y−X_S β_S, where ε_S is the error vector associated with the regression based on the variables in S. Define SSE_S=ε_S'ε_S and, thus, R_S²=1−SSE_S/SST, where SST is defined above. Then set v(S)=R_S². Here "v" is a cooperative game based directly on the performance measure.
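The construction of v(S)=R²_S can be sketched as follows. This is a minimal illustration, not the patent's implementation: the data are synthetic, and a one-to-one access relationship is assumed so that each player simply owns one column of X.

```python
# Worth v(S) = R^2_S from an OLS submodel on the variables accessible by S.
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(0)
t, m = 200, 3
X = rng.normal(size=(t, m))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=t)

SST = y @ y - t * y.mean() ** 2          # total sum of squares

def worth(S):
    """v(S) = 1 - SSE_S / SST for the submodel on the columns in S."""
    if not S:
        return 0.0
    Xs = X[:, sorted(S)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    eps = y - Xs @ beta
    return 1.0 - (eps @ eps) / SST

coalitions = [frozenset(c) for c in chain.from_iterable(
    combinations(range(m), r) for r in range(m + 1))]
v = {S: worth(S) for S in coalitions}

# Adding regressors can only reduce SSE, so no submodel outperforms the
# grand coalition's model.
for S in coalitions:
    assert v[S] <= v[frozenset(range(m))] + 1e-9
```

Note the monotonicity checked at the end: because least squares over a larger column span can only lower SSE, the game v is monotone for nested coalitions, which is what makes an allocation of v(N) across players meaningful.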
Performance measures based on total effects may be based either on submodels of the complete multivariate statistical model or on the full multivariate statistical model. The estimated value of the dependent variable, the vector ŷ, is the vector Xβ. An estimated value of a single observation k with characteristics x_k would then be x_k'β. The vector x_k may represent an actual observation in the data, i.e., x_k may be a row vector of the matrix X, or an out-of-sample observation or a hypothetical case to be forecast.
In order to construct a total effects performance measure for OLS models based on submodels, using estimated values of an observation of the dependent variable as a performance measure of total effects, set v(S) as illustrated in Equation 6: v(S)=x_k^S'β_S, (6) where x_k^S is a vector of the values of the independent variables accessible by S of the k-th observation of data, or a combination of values of independent variables corresponding to a value of a dependent variable to be forecast, and β_S is the vector of corresponding coefficients. This approach to total effects provides a new way to understand the interaction of independent variables.
Another approach to computing a total performance measure for OLS models based on submodels would be to set v(S)=x^S'β_S, where x^S is a vector of average values of the independent variables accessible by S over all observations of the dataset, or over some subset of observations.
Alternatively, a total effect performance measure for a coalition may be based on the complete multivariate statistical model. The worth of a coalition S may be determined in ways completely analogous to those just described. Define β_S to be the vector resulting from the restriction of β, as estimated by Equation 2, to the coefficients of independent variables accessible by S. Then, as illustrated in Equation 7, set v(S)=x_S'β_S. (7) Note that this performance measure has little utility unless interaction variables are included in the multivariate statistical model and a nontrivial access relationship is employed. In particular, when a one-to-one transferable access relationship is used, there will be no statistical joint effects to resolve.
A performance measure of explanatory power based only on the complete multivariate statistical model may also be constructed, as is illustrated in Equation 8. Let ε_S=y−X_S β_S and set v(S)=1−ε_S'ε_S/SST. (8)
Explanatory power may also be measured with respect to a forecast value of a dependent variable. Let x* be a vector of independent variable values used to forecast y*=x*'β. Also let x_S* be the restriction of x* to the variables accessible by the coalition S. Then the variance of the expected value of y*, conditional on the coalition forming the expectation, is illustrated in Equations 9 and 10: Var_S(E_S(y*))=σ_S²(1+x_S*'(X_S'X_S)⁻¹x_S*), (9) where σ_S²=SSE_S/(n−s) (10) is the variance of the regression estimated when the submodel is restricted to the independent variables accessible by S, and s is the number of independent variables accessible by S. For S=N, this is the forecast variance for the complete multivariate statistical model.
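The forecast-variance measure of Equations 9 and 10 can be sketched directly. The data, the forecast point `x_star`, and the comparison between a one-variable submodel and the full model are all illustrative assumptions, not taken from the patent.

```python
# Forecast variance per coalition: Var_S = sigma_S^2 (1 + x_S*'(X_S'X_S)^-1 x_S*).
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 3
X = rng.normal(size=(n, m))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)
x_star = np.array([0.3, -0.2, 0.1])      # hypothetical forecast point

def forecast_variance(S):
    cols = sorted(S)
    Xs, xs = X[:, cols], x_star[cols]
    beta = np.linalg.solve(Xs.T @ Xs, Xs.T @ y)
    resid = y - Xs @ beta
    sigma2 = float(resid @ resid) / (n - len(cols))   # Equation 10
    return sigma2 * (1.0 + xs @ np.linalg.solve(Xs.T @ Xs, xs))  # Equation 9

full = forecast_variance({0, 1, 2})      # S = N: full-model forecast variance
sub = forecast_variance({0})             # submodel on a single variable
assert full > 0 and sub > 0
# Omitting informative variables inflates sigma_S^2, so with this synthetic
# data the submodel forecast variance exceeds the full-model one.
assert sub > full
```

A dual game built on this measure (Equation 11) would then credit each player with the forecast-variance reduction lost when its variable is removed.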
The choice among alternative performance measures is made according to the purpose of the cooperative resolution process and the understanding of an individual skilled in the statistical arts. For most purposes, it is contemplated that the preferred embodiments of performance measures of explanatory power will be based on the construction of submodels, while total effects measures will tend to be based only on the complete model. Note that, formally, it is the access relationship that determines whether a submodel is computed from the variables a coalition has access to, or whether the coalition instead has access to the corresponding coefficients of the complete model.
Again referring to FIG. 4, at Step 42, a worth of coalitions from the selected set of coalitions is computed based on the computed performance measure for the coalition. In one embodiment of the present invention, the computation of the performance measure is itself represented as a construction of a cooperative game. However, the present invention is not limited to such an embodiment. The worth of a coalition may be set equal to the performance measure for the coalition or it may be a function of the performance measure.
An example of worth as a function of a performance measure is a "dual" game. Let the worth of a coalition in the game "v" be the computed performance measure of Step 36. Let "w" be the dual game as is illustrated in Equation 11. Then, in a coalitional form game, and for any coalition S, w(S)=v(N)−v(N\S), (11) where S is any coalition of the players in N and "\" is the set subtraction operator (i.e., the set N\S includes the players in N that are not in S). A dual game is constructed in the preferred embodiments of the present invention when using explanatory power performance measures. In one embodiment of the present invention, Method 36 can be used at Step 26 of Method 20. However, the present invention is not limited to such an embodiment and Method 36 is also used as a stand alone method, independent from Method 20, to determine a worth of a coalition.
Allocation Procedures
A cooperative allocation procedure may be applied to the statistical cooperative game constructed with Method 20 and/or Method 28 in order to determine allocations to players of the game. Preferred embodiments of the present invention use "point" allocation procedures for this purpose. A point solution procedure determines a unique solution. A value function of a cooperative game is a type of point allocation procedure. A value function determines a unique allocation of the entire worth of the grand coalition, or possibly a subcoalition, to the members of that coalition.
Virtually any value function may be used in this attribution process; however, four such functions are described here. These are the Shapley and weighted Shapley values (L. S. Shapley, "Additive and Non-Additive Set Functions," Ph.D. Thesis, Princeton University, 1953), the proportional value (B. Feldman, "The proportional value of a cooperative game," 1999, and K. M. Ortmann, "The proportional value of a positive cooperative game," Mathematical Methods of Operations Research, v. 51, 2000, pp. 235-248), and the powerpoint ("The Powerpoint," B. Feldman, 1998, and N. N. Vorob'ev and A. N. Liapounov, "The Proper Shapley Value," in Game Theory and Applications IV, L. A. Petrosjan and V. V. Mazalov, eds., Commack, N.Y.: Nova Science Publishers, 1999).
A unified description of this allocation process is presented based on a method of potential functions. These potential functions may be calculated recursively. First, the potential "P" for the game v, used to calculate the Shapley value, is assigned. For example, assign P({ }, v)=zero and apply the formula illustrated in Equation 12 recursively to all coalitions S ⊆ N:
P(S,v)=[v(S)+Σ_{i∈S} P(S\i,v)]/|S|. (12)
The Shapley value for a player i in the game v is then illustrated by Equation 13: Sh_i(v)=P(N,v)−P(N\i,v). (13) Similarly, a proportional, or ratio, potential function may be constructed as follows. Set R({ }, v)=one and determine R(S, v) recursively using Equation 14:
R(S,v)=v(S)/[Σ_{i∈S} 1/R(S\i,v)]. (14)
Then the proportional value of player i in the game v is determined by Equation 15:
Pv_i(v)=R(N,v)/R(N\i,v). (15)
A similar method may be used for the calculation of weighted Shapley values. The weighted Shapley value is a value based on an exogenously specified vector of weights ω=(ω_1, ω_2, . . . , ω_n) with ω_i>0 for all i. Again, set P_ω({ }, v)=zero. Equation 16 illustrates the computation of potentials for weighted Shapley values:
P_ω(S,v)=[v(S)+Σ_{i∈S} ω_i P_ω(S\i,v)]/Σ_{i∈S} ω_i. (16)
The weighted Shapley value for player i in game v using weights ω is illustrated by Equation 17: wSh_i(v,ω)=ω_i[P_ω(N,v)−P_ω(N\i,v)]. (17)
A "powerpoint" of a game may be found by identifying an allocation such that using this allocation as the weights ω in the computation of the weighted Shapley value leads to the value assigned to players being precisely their weight. That is, the values allocated by the powerpoint satisfy Equation 18: wSh_i(v,ω)=ω_i, (18) for every player i.
It can be seen that these value functions are based on the worths of all coalitions in the game. However, other solutions require use of less information. For example, the Nash bargaining solution requires only v(N) and the individual worths v(i) for all players i. The Nash bargaining solution is illustrated in Equation 19:
NBS_i(v)=v(i)+[v(N)−Σ_{j∈N} v(j)]/n. (19)
The allocation functions described satisfy an additive efficiency restriction: the sum of all allocations to individual players must equal the worth of the grand coalition. It may sometimes be desirable to use an allocation function to distribute the worth of a subcoalition. The allocation procedures described here may be used for this purpose by substituting this coalition S for the grand coalition N as appropriate in Equations 13, 15, 17, 18, or 19.
For the purposes of illustrating the construction of dual games and the determination of the value of a game, consider the following exemplary three-player game v illustrated in Table 1.
TABLE-US-00001 TABLE 1 v({ }) = 0, v(1) = .324, v(2) = .501, v(3) = .286, v(12) = .623, v(13) = .371, v(23) = .790, v(123) = .823
The Shapley value of this game can be computed and found to be Sh(v)=[0.154, 0.452, 0.218] for players 1, 2, and 3, respectively. Similarly, the proportional value is Pv(v)=[0.174, 0.445, 0.204] and the powerpoint is Ppt(v)=[0.183, 0.441, 0.199].
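The Table 1 Shapley values can be reproduced with the potential recursion of Equations 12 and 13. This is an illustrative sketch (Python and the helper names are mine); the worths are copied from Table 1, and the dual game of Equation 11 is checked as well.

```python
# Shapley value via the potential recursion P(S) = (v(S) + sum_i P(S\i))/|S|.
from itertools import chain, combinations

v = {frozenset(): 0.0, frozenset({1}): .324, frozenset({2}): .501,
     frozenset({3}): .286, frozenset({1, 2}): .623, frozenset({1, 3}): .371,
     frozenset({2, 3}): .790, frozenset({1, 2, 3}): .823}
N = frozenset({1, 2, 3})

def potential(game):
    P = {frozenset(): 0.0}
    subsets = sorted((frozenset(c) for c in chain.from_iterable(
        combinations(N, r) for r in range(1, len(N) + 1))), key=len)
    for S in subsets:                       # smaller coalitions first
        P[S] = (game[S] + sum(P[S - {i}] for i in S)) / len(S)
    return P

P = potential(v)
shapley = {i: P[N] - P[N - {i}] for i in N}            # Equation 13
expected = {1: 0.1535, 2: 0.4515, 3: 0.218}            # rounds to Table values
assert all(abs(shapley[i] - expected[i]) < 1e-9 for i in N)

# Dual game w(S) = v(N) - v(N\S) (Equation 11): same Shapley value as v.
w = {S: v[N] - v[N - S] for S in v}
Pw = potential(w)
assert all(abs((Pw[N] - Pw[N - {i}]) - shapley[i]) < 1e-9 for i in N)
```

The final assertion confirms the self-duality of the Shapley value stated below for Table 2: Sh(w)=Sh(v), while the allocations sum to v(N)=0.823 by additive efficiency.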
The dual game w defined by w(S)=v(N)−v(N\S) for all S can be computed as illustrated in Table 2.
TABLE-US-00002 TABLE 2 w({ }) = 0, w(1) = .033, w(2) = .452, w(3) = .200, w(12) = .537, w(13) = .322, w(23) = .499, w(123) = .823
The proportional value of w is Pv(w)=[0.064, 0.489, 0.270]. The Shapley value of a dual game is the same as the Shapley value of the original game: Sh(w)=Sh(v). The powerpoint of w is Ppt(w)=[0.072, 0.487, 0.264].
Simplified Calculation of Some Values in Total Effects Games with Interactions
If total effects are to be estimated for a multivariate statistical model with interaction variables and based on the complete statistical model, the Shapley and weighted Shapley values may be computed according to a more efficient method based on the potential representation of these values described above. Let x_S and β_S be vectors of values and corresponding coefficients of variables in a total effects model that: (1) S can access; and (2) no subcoalition of S can access. The vector x_S may represent average values of the independent variables, values of a particular sample observation, a forecast value, or some other function of these variables. Let d(S)=x_S'β_S. Then, for any S, the sum of d(T) over all subsets of S yields the worth of S, as illustrated in Equation 20:
v(S)=Σ_{T⊆S} d(T). (20)
Let |T| be the number of players in the coalition T. The Shapley value of v for a player i may be calculated as illustrated in Equation 21:
Sh_i(v)=Σ_{T: i∈T} d(T)/|T|, (21)
where the sum is over all coalitions T that contain player i. Similarly, the weighted Shapley value with weight vector ω can be calculated as illustrated in Equation 22:
wSh_i(v,ω)=Σ_{T: i∈T} [ω_i/Σ_{j∈T} ω_j] d(T). (22)
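Equations 20 and 21 can be checked numerically. In this sketch the d(T) terms are recovered from the Table 1 game by inverting Equation 20 (a Möbius inversion), rather than computed from x_S'β_S as in a real total effects model; the Python helpers are illustrative names.

```python
# Shapley value from the d(T) terms: Sh_i = sum over T containing i of d(T)/|T|.
from itertools import chain, combinations

v = {frozenset(): 0.0, frozenset({1}): .324, frozenset({2}): .501,
     frozenset({3}): .286, frozenset({1, 2}): .623, frozenset({1, 3}): .371,
     frozenset({2, 3}): .790, frozenset({1, 2, 3}): .823}
N = frozenset({1, 2, 3})
coalitions = [frozenset(c) for c in chain.from_iterable(
    combinations(N, r) for r in range(len(N) + 1))]

def subsets(S):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(S, r) for r in range(len(S) + 1))]

# Invert Equation 20: d(S) = v(S) minus the d's of all proper subsets.
d = {}
for S in sorted(coalitions, key=len):
    d[S] = v[S] - sum(d[T] for T in subsets(S) if T != S)

# Equation 20 holds by construction.
assert all(abs(v[S] - sum(d[T] for T in subsets(S))) < 1e-12 for S in coalitions)

# Equation 21 reproduces the Shapley value reported for Table 1.
shapley = {i: sum(d[T] / len(T) for T in coalitions if i in T) for i in N}
assert all(abs(shapley[i] - e) < 1e-9
           for i, e in {1: 0.1535, 2: 0.4515, 3: 0.218}.items())
```

In a total effects model with interactions, d(T) is available directly from the regression output, so this route avoids building the full 2^n-coalition game.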
These derivations are related to conceptualizing the regression as a "scalable game" and calculating the Aumann-Shapley or weighted Aumann-Shapley prices of the game. They have the advantage of being calculable directly from the results of the multivariate statistical model without the explicit construction of a cooperative game. Total effects attributions based on the complete multivariate statistical model may be calculated in this manner. However, the present invention is not limited to such calculations and other calculations can also be used.
Multiplicative Value Allocation
The present invention discloses methods for allocating the worth of a coalition in a cooperative game on a multiplicative interaction basis. That is, for any such allocation, the product of values allocated to individual players in the coalition is equal to the worth of the coalition, when that product is computed in the appropriate way. This stands in contrast to additive value allocation procedures. Cooperative game theory has been concerned with the division of costs or benefits in a manner similar to the division of a sum of money. The logic of multiplicative allocation can be illustrated in the context of performance attribution. Assume a management team produces a 20% growth in sales over a single year. Considering the outcome in percentage rather than absolute dollar terms makes sense because it places the outcome in relative terms. Allocating that performance among the members of the team could be done on an additive or multiplicative basis. However, assume such performance attributions are done for several years. The allocation is on a multiplicative basis if the combination of each manager's cumulative performance equals the cumulative performance of the firm. The only way these attributions can be done consistently over time is on a multiplicative basis. (See, for example, David R. Carino, "Combining attribution effects over time," Journal of Portfolio Measurement, Summer 1999, v. 3, n. 4.)
The precise definition of a multiplicative product depends on the quantities being multiplied. Generally, quantities to be allocated and allocations will be percentage changes. In this case, one is added to all percentages to be multiplied. Then the resulting terms are multiplied. Finally, one is subtracted again. Thus, the product of two percentages p_1 and p_2 is (1+p_1)(1+p_2)−1. Sometimes the quantities to be allocated will be ratios. In this case the multiplicative product is the product of the ratios.
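The add-one/multiply/subtract-one rule for percentage changes is small enough to state as a helper (the function name is illustrative):

```python
# Multiplicative product of percentage changes: add one, multiply, subtract one.
def pct_product(*pcts):
    out = 1.0
    for p in pcts:
        out *= 1.0 + p
    return out - 1.0

# Two contributions of 10% and 8% combine to 18.8%, not 18%.
assert abs(pct_product(0.10, 0.08) - 0.188) < 1e-9
```

The cross term (here 0.8%) is exactly the compounding effect that an additive allocation of yearly percentages fails to account for.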
FIG. 5 is a flow diagram illustrating a Method 44 for allocating a worth of a coalition in a cooperative game on a multiplicative basis. At Step 46, a second cooperative game is generated from a first cooperative game by setting a worth of plural coalitions in the second game to a logarithm of a worth of the same coalition plus a constant. At Step 48, a cooperative allocation procedure is applied to the second game. At Step 50, an allocation for a player in the first game is created from an allocation in the second game by applying an antilog to a value allocated to the player in the second game and subtracting a constant.
Method 44 is illustrated with an exemplary embodiment of the present invention. However, the present invention is not limited to such an embodiment and other embodiments can also be used to practice the invention. Method 44 is introduced in the context of cooperative resolution applications, but may have other applications in diverse areas of game theory, economics, finance, and engineering.
At Step 46, a second cooperative game is generated from a first cooperative game by setting a worth of plural coalitions in the second game to the logarithm of the worth of the same coalition plus a constant. If v is the first game and w the second game, then w(S)=log(c+v(S)), where c is a constant. In the most preferred embodiments the logarithm function used is the natural logarithm, although other logarithms may be used. In preferred embodiments the constant c will be set to one; this embodiment will be preferred when worths in a game are stated in terms of percentage changes. In other preferred embodiments c is set to zero; this embodiment will be preferred when worths in a game are stated in terms of ratios.
At Step 48, a cooperative allocation procedure is applied to the second game. Any allocation procedure may be used. In particular, either point or set allocation functions may be used. In one preferred embodiment of the present invention, the Shapley value is used. However, other allocation procedures may also be used.
At Step 50, an allocation for a player in the first game is created from an allocation in the second game by applying an antilog to a value allocated to the player in the second game and subtracting a constant. For example, let the allocation to player i in the second game be φ²_i(w). Then the allocation to player i in the first game is φ¹_i(v)=antilog(φ²_i(w))−d. In the preferred embodiments of the present invention the exponential function is used as the antilog and the constant d is equal to the constant c. However, other or equivalent antilogs and constants can also be used.
The steps of Method 44, using the Shapley value for games with worths stated in percentage changes, result in a formula for the value of a player i in a game v as is illustrated in Equation 23:
φ_i(v)=exp( Σ_{S: i∈S} [(s−1)!(n−s)!/n!]·[ln(1+v(S))−ln(1+v(S\i))] )−1, (23)
where "exp" represents the exponential function, the summation is over all coalitions that contain player i, s is the number of players in the set S, n is the number of players in the game, and "ln" is the natural logarithm function. This will be referred to as the "log-linear value."
The log-linear value applied to the game of Table 1 yields the multiplicative value allocation of [0.131, 0.377, 0.171], in contrast to the Shapley value of the game, Sh(v)=[0.154, 0.452, 0.218].
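Method 44 applied to the Table 1 game can be sketched end to end: take logs of one plus each worth (Step 46, c=1), apply the Shapley value (Step 48), then exponentiate and subtract one (Step 50). The Python helpers are illustrative; the worths are copied from Table 1.

```python
# Log-linear value: Shapley value of w(S) = ln(1 + v(S)), mapped back by exp - 1.
import math
from itertools import chain, combinations

v = {frozenset(): 0.0, frozenset({1}): .324, frozenset({2}): .501,
     frozenset({3}): .286, frozenset({1, 2}): .623, frozenset({1, 3}): .371,
     frozenset({2, 3}): .790, frozenset({1, 2, 3}): .823}
N = frozenset({1, 2, 3})
n = len(N)

w = {S: math.log(1.0 + worth) for S, worth in v.items()}   # Step 46, c = 1

def shapley(game, i):
    """Sh_i via the (s-1)!(n-s)!/n! marginal-contribution formula."""
    total = 0.0
    for S in (frozenset(c) for c in chain.from_iterable(
            combinations(N, r) for r in range(1, n + 1))):
        if i in S:
            weight = (math.factorial(len(S) - 1)
                      * math.factorial(n - len(S)) / math.factorial(n))
            total += weight * (game[S] - game[S - {i}])
    return total

phi = {i: math.exp(shapley(w, i)) - 1.0 for i in N}        # Steps 48-50

# Matches the allocation reported in the text, [0.131, 0.377, 0.171].
assert all(abs(phi[i] - e) < 2e-3 for i, e in
           {1: 0.131, 2: 0.377, 3: 0.171}.items())

# Multiplicative efficiency: the product of (1 + phi_i) recovers 1 + v(N).
prod = 1.0
for i in N:
    prod *= 1.0 + phi[i]
assert abs(prod - (1.0 + v[N])) < 1e-9
```

The final check is the multiplicative counterpart of additive efficiency: because the Shapley allocations of w sum to w(N)=ln(1+v(N)), exponentiating makes the (1+φ_i) factors multiply out to 1+v(N) exactly.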
Analysis of Effects in Categorical Models
Methods 20 and 36 may also be applied when a multivariate statistical model including categorical independent variables is used in the process of determining the worth of a coalition. The nature of interaction between categorical independent variables allows for additional types of analysis beyond those of models with purely continuous independent variables. Methods described in this section include techniques used in the field of analysis of variance. The principal difference is that classical analysis of variance seeks to determine which effects and interactions are statistically significant, whereas the present invention seeks to decompose the variance.
The following notational framework will facilitate the exemplary description of methods to represent interactions among categorical independent variables. Modeling categorical effects as contemplated by the present invention is not limited to these methods, and others may be used. In practice, categorical effects may be computed more efficiently using standard techniques known to those familiar with the statistical arts. Let P, Q, and R represent categorical independent variables, which will also be referred to as dimensions. For the purposes of description, each dimension is assumed to be composed of a set of mutually exclusive and collectively exhaustive categories. This means that, for every observation of data and every dimension, there is a single category that the data observation is assigned to. It is said to take on the value of that category. There may be categories such as "other," "none of the above," or "missing data." Thus, in practice, nonassignment to a category of a dimension may be turned into a default assignment.
The number of categories in dimension P is n_P. Let C(P) be all the categories associated with any dimension P and let β ∈ C(P) be a specific category of P. The notation P_β refers to the set of all observations of data where the categorical independent variable P takes on value β.
Let S be an ordered set of dimensions, for example S=(P, Q). Note that, here, S is a set of independent variables and not a coalition of players in a game. For the present, a one-to-one transferable access relationship is assumed, such that any set of independent variables corresponds to a coalition with players that each have primary access to one of the independent variables.
Let C(S) be the set of all combinations of categories of the individual dimensions. A β=(β_1, β_2) ∈ C(S) is an s-tuple of categories, one corresponding to each dimension in S. Then S_β refers to the set of all observations of data where categorical independent variable P takes on value β_1 and variable Q takes value β_2.
Let Ω represent the set of all dimensions. Then C(Ω) represents the "finest grain" of categorization, and an α ∈ C(Ω) represents a complete selection of categories, one from every dimension. Let n_Ω represent the number of such possible combinations. Let Ω_α be a set containing all observations of data whose category assignments correspond to α. For any S ⊆ Ω and every α ∈ C(Ω) such that Ω_α is nonempty, there is exactly one β ∈ C(S) such that all data observations in Ω_α are also in S_β.
The preceding categorical framework is next applied to computing the effects associated with different dimensions. The methods described here are used to construct a design matrix X. Let D(S) be a function that, for any dimensional set S, returns a matrix of t rows and c columns, where t is the number of data observations and c is the number of category combinations in C(S). Each row r.sub.i is associated with a category .alpha.(r.sub.i).di-elect cons.C(.OMEGA.) and each column corresponds to a category .beta..di-elect cons.C(S). Let M=D(S) and let M(i,j) be the value of the i.sup.th row of column j. Then M(r.sub.i,.beta.)=one if and only if .OMEGA..sub..alpha.(ri).OR right.S.sub..beta. and M(r.sub.i,.beta.)=zero otherwise. Also, let D.sup.-.beta.(S) define a matrix of t rows and c-1 columns, identical to D(S) except that the column corresponding to category .beta. is removed.
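As an illustration, the construction of the indicator matrix D(S) described above can be sketched in Python. The function and variable names here are hypothetical, since the patent specifies no implementation; each row gets a single one in the column of its category combination:

```python
from itertools import product

def build_indicator_matrix(observations, dims):
    """Sketch of D(S): one row per observation, one column per combination
    of categories of the dimensions in dims (i.e., per element of C(S)).
    `observations` is a list of dicts mapping dimension name -> category."""
    # Enumerate C(S): all combinations of observed categories, one per dimension
    cats = {d: sorted({obs[d] for obs in observations}) for d in dims}
    combos = list(product(*(cats[d] for d in dims)))
    matrix = []
    for obs in observations:
        key = tuple(obs[d] for d in dims)
        # Exactly one column matches this observation's category combination
        matrix.append([1 if key == c else 0 for c in combos])
    return combos, matrix

obs = [{"P": "a", "Q": "x"}, {"P": "a", "Q": "y"}, {"P": "b", "Q": "x"}]
combos, M = build_indicator_matrix(obs, ["P", "Q"])
# Each row of M sums to one: every observation lands in exactly one cell of C(S)
```

D.sup.-.beta.(S) then corresponds to dropping one column of this matrix.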
There are several ways to represent the categorical effects associated with a dimensional set S. In a preferred embodiment, an ordered collection consisting of S and the remaining individual dimensions is constructed. This approach will be referred to as a model of "Type I" categorical effects. Let this collection be W={S, P, Q, R}, where it is understood that: (1) every dimension must either be included in S or appear as a singleton; and (2) no dimension can both be included in S and appear as a singleton or appear more than once as a singleton. Apply the function D to S, and apply D.sup.-.beta..sub.P to the remaining dimensions, where, for each dimension P, .beta..sub.P is a category. The design matrix X results from the horizontal concatenation of the resulting matrices. Thus if W=(S, P, Q, R), then X may be constructed as illustrated in Equation 24. X=[D(S), D.sup.-.beta..sub.P(P), D.sup.-.beta..sub.Q(Q), D.sup.-.beta..sub.R(R)]. (24)
For convenience, the matrix of the categories of the dimensional set under study will always be complete and the matrices associated with other dimensions or dimensional sets will be minus a category. The categories are left out so that the design matrix is not singular and effects may be determined by Equation 25 illustrated below. The deleted categories become default categories along the associated dimensions.
In another preferred embodiment of the present invention, no interactions are taken account of in the design matrix. This will be referred to as a model of "Type II" categorical effects. Here, W={P, Q, R, . . . } contains all the dimensions as individual dimensional sets. The design matrix is then X=[D(P), D.sup.-.beta..sub.Q(Q), D.sup.-.beta..sub.R(R), . . . ].
In another preferred embodiment of the present invention, the design matrix is based only on S. This will be referred to as a model of "Type III" categorical effects. Here, W={S}, where S may represent a single dimension or multiple dimensions. The design matrix is then X=D(S).
In another preferred embodiment of the present invention, the design matrix is based on a number of individual dimensions of S. This will be referred to as a model of "Type IV" categorical effects. Here, W={P, Q, . . . }. The design matrix is then X=[D(P), D.sup.-.beta..sub.Q(Q), . . . ].
In another preferred embodiment of the present invention, the design matrix is based on two dimensional sets S and T that have no dimensions in common and together comprise all dimensions. This will be referred to as a model of "Type V" categorical effects. Here, W={S, T} and the design matrix is X=[D(S), D.sup.-.beta..sub.T(T)].
In another preferred embodiment of the present invention, the design matrix is based on a partition of .OMEGA. that includes S. This will be referred to as a model of "Type VI" categorical effects. Here, W={S, T, U, . . . } and the design matrix is X=[D(S), D.sup.-.beta..sub.T(T), D.sup.-.beta..sub.U(U), . . . ].
The choice of type of effects depends on the understanding of the subject under study. Type I, Type III, Type V, and Type VI effects include interaction between the categorical dimensions that comprise S. Type II and Type IV models do not measure such interactions. Type III and Type IV methods do not include dimensions whose effects are not being measured in the design matrix. Thus all variations in a dependent variable are attributed to the dimensions of S. This will be appropriate under certain conditions. Type V effects models are similar to Type I models except that interaction is allowed among all the dimensions not included in S as well. In general, this will not be appropriate when studying explanatory power, but may be appropriate in studies of total effects. In Type VI models, an arbitrary pattern of interaction among the dimensions of .OMEGA. not included in S is allowed.
Once a design matrix is constructed, based on any type of categorical effects, dimensional effects may be computed as follows. Let Y be a vector of observations of a dependent variable to be analyzed, where Y has an observation for every .alpha..di-elect cons.C(.OMEGA.). Then dimensional effects may be computed by the standard least squares regression formula as illustrated in Equation 25, b=(X'X).sup.-1X'Y, (25) where b is a vector with coefficients for the estimated effects for all of the included categories of the dimensions of W. Identify an element of b with its category by using the category as a subscript.
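Equation 25 can be computed directly from the normal equations. A minimal pure-Python sketch follows (function names are hypothetical; a production implementation would use a numerical library rather than hand-rolled elimination):

```python
def solve(A, y):
    """Solve the small square linear system A x = y by Gauss-Jordan
    elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_effects(X, Y):
    """b = (X'X)^(-1) X'Y, the least squares estimate of Equation 25."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    XtY = [sum(X[r][i] * Y[r] for r in range(n)) for i in range(k)]
    return solve(XtX, XtY)

# Two-category design: observations 0 and 1 fall in the first category,
# observation 2 in the second; b recovers the per-category means.
X = [[1, 0], [1, 0], [0, 1]]
Y = [2.0, 4.0, 5.0]
b = ols_effects(X, Y)  # approximately [3.0, 5.0]
```

With a purely categorical design matrix like this one, the coefficients in b are simply the cell means, which matches their interpretation as category effects.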
The effect of a dimensional set S on any observation of a dependent variable Y.sub.i is the predicted value of Y.sub.i taking into account effects associated with the dimensions of S. This will be denoted E.sub.S(Y.sub.i) and can be computed as follows. For Type I, Type III, Type V, and Type VI models, the effect is the coefficient of b corresponding to the set S.sub..beta.. Then Equation 26 illustrates the determination of E.sub.S(Y.sub.i): E.sub.S(Y.sub.i)=b.sub..beta., where Y.sub.i.di-elect cons.S.sub..beta.. (26) For Type II and Type IV models, the effect is the sum of all coefficients corresponding to categories P.sub..beta. such that Y.sub.i.di-elect cons.P.sub..beta. and P.di-elect cons.S. Then the determination of E.sub.S(Y.sub.i) is illustrated in Equation 27:
E.sub.S(Y.sub.i)=.SIGMA..sub.P.di-elect cons.S b.sub..beta., where Y.sub.i.di-elect cons.P.sub..beta. for each P.di-elect cons.S. (27)
Before considering a determination of a worth for a coalition based on either measure of explanatory power, the possibility of specifying an access relationship more general than the one-to-one transferable relationship at Step 24 of Method 20 or the steps of Method 28 should be considered. Two restrictions on an access relationship typically are taken into account. In the treatment of models of categorical independent variables it is evident that the existence of interaction effects is a function of the type of interaction model chosen. In consequence, the independent variables subject to the access relationship of Step 24 should not normally include interaction variables based on categorical independent variables. Further, Type I, Type V, and Type VI interaction models involve a partition of the independent categorical variables. In consequence, the access relationship should be such that the determination of the worth of any coalition of players does not result in the creation of a partition of the set of players such that the independent categorical variables or interaction variables accessible by any two coalitions overlap.
The determination of the worth of a coalition of players using total effects as a performance measure at Step 26 or Step 42 in a categorical effects model for a single observation k may then be made by selecting a type of interaction effect model and then setting v(S) as illustrated in Equation 28, v(S)=E.sub.S(Y.sub.k), (28) where k either represents an actual observation or an observation to be forecast. Other methods of determining a worth by combining predicted values for sets of observations may also be used, including those described in the OLS examples illustrating Method 40.
The determination of a worth of a coalition of players using R.sup.2 as a performance measure at Step 26 or Step 42 in a categorical effects model may be made by selecting a type of interaction effect model and calculating v(S) as is illustrated by Equation 29. v(S)=.SIGMA..sub.i(E.sub.S(Y.sub.i)-{overscore (Y)}).sup.2/.SIGMA..sub.i(Y.sub.i-{overscore (Y)}).sup.2, (29) where {overscore (Y)} is the average value of Y.sub.i.
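Equation 29 is an R.sup.2-style ratio of explained variation to total variation. A small sketch (names hypothetical), where `effects[i]` plays the role of E.sub.S(Y.sub.i):

```python
def r_squared_worth(effects, Y):
    """v(S) per Equation 29: variation explained by the dimensional
    effects divided by the total variation of the dependent variable."""
    ybar = sum(Y) / len(Y)
    explained = sum((e - ybar) ** 2 for e in effects)
    total = sum((y - ybar) ** 2 for y in Y)
    return explained / total

# A model that reproduces Y exactly has worth 1; a constant model has worth 0.
Y = [1.0, 2.0, 3.0]
perfect = r_squared_worth(Y, Y)                 # 1.0
constant = r_squared_worth([2.0, 2.0, 2.0], Y)  # 0.0
```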
Equations 28 and 29 are exemplary methods for pure models of analysis of effects in categorical models. These models have many applications. One exemplary application is the analysis of survey data. For example, a poll may be conducted to see whether voters favor a referendum. Demographic information is also collected. Then .OMEGA. is the set of demographic dimensions, C(.OMEGA.) is the set of all n.sub..OMEGA. possible combinations of demographic attributes, and Y.sub..alpha. for an .alpha..di-elect cons.C(.OMEGA.) is the proportion of voters with characteristics .alpha. that favor the referendum. In this example, the Type III interaction model would generally be preferred. The preferred performance measure will generally be a measure of explanatory power rather than total effects.
Analysis of Changes in Proportions in Categorical Models
Methods 20 and 36 may also be applied when a multivariate statistical procedure using frequency data to compute marginal frequencies is used in the process of determining the worth of a coalition. This type of model is considered an analysis of changes in proportions model. This model is exemplary. Changes in proportions as contemplated under the present invention are not limited to this model and other models may be used. An analysis of changes in proportions also utilizes the categorical interaction framework described in the section "Analysis of Effects in Categorical Models," above. As in that section, assume, initially, the default one-to-one access relationship between independent variables and players in a game.
Let Y.sup.1 and Y.sup.2 be two dependent variables representing measures of the same quantity at two different time periods or under two different conditions. For example, these could be measures of sales or holdings of securities at two points in time. The observations of both Y.sup.1 and Y.sup.2 are associated with categorical independent variables that categorize relevant dimensions associated with the dependent variables. The analysis of changes in proportions reveals which dimensions are most important to understanding changes in the dependent variable and how much of that change is contributed by each dimension.
For any dimensional set S and category .beta..di-elect cons.C(S), let w.sup.1 be a set of weights such that w.sup.1(S.sub..beta.) represents the percentage of the dependent variable Y.sup.1 associated with observations O.sub.i such that O.sub.i.di-elect cons.S.sub..beta.. This relationship is illustrated in Equation 30: w.sup.1(S.sub..beta.)=.SIGMA..sub.O.sub.i.di-elect cons.S.sub..beta.Y.sup.1.sub.i/.SIGMA..sub.i=1.sup.tY.sup.1.sub.i, (30) where there are t observations of Y.sup.1 and Y.sup.2. Define w.sup.2(S.sub..beta.) analogously in terms of Y.sup.2.
The pure effects of changes from Y.sup.1 to Y.sup.2 along a number of dimensions S will be denoted by w.sup.S and may be determined by computing marginal weights with respect to the dimensions under study and then reweighting all fine-grain cell weights w.sup.1(.OMEGA..sub..alpha.) for all .OMEGA..sub..alpha..OR right.S.sub..beta. by the ratio of the relevant Y.sup.1 to Y.sup.2 marginals. The weight associated with .OMEGA..sub..alpha. when taking into account changes along the dimensions of S is illustrated by Equation 31. w.sup.S(.OMEGA..sub..alpha.)=w.sup.1(.OMEGA..sub..alpha.)w.sup.2(S.sub..beta.)/w.sup.1(S.sub..beta.), (31) where w.sup.S is a function representing the weights resulting from inclusion of changes along the dimensions in S, .OMEGA..sub..alpha..OR right.S.sub..beta., and w.sup.1(S.sub..beta.)>0. The value of w.sup.S for any category S.sub..beta. is then the sum of w.sup.S(.OMEGA..sub..alpha.) over all .OMEGA..sub..alpha..OR right.S.sub..beta..
The case where w.sup.1(S.sub..beta.)=0 for some category .beta..di-elect cons.C(S) requires special treatment. One effective approach is to use the proportions found in the complementary dimensional set. Let T=.OMEGA.\S and let .gamma..di-elect cons.C(T). For every .alpha..di-elect cons.C(.OMEGA.) there is one .gamma..di-elect cons.C(T) such that .OMEGA..sub..alpha..OR right.T.sub..gamma.. An appropriate weight for .OMEGA..sub..alpha. taking into account changes along the dimensions of S when w.sup.1(S.sub..beta.)=0 and .OMEGA..sub..alpha..OR right.S.sub..beta. is illustrated by Equation 32. w.sup.S(.OMEGA..sub..alpha.)=w.sup.2(S.sub..beta.)w.sup.1(T.sub..gamma.). (32) Thus, the weight w.sup.2(S.sub..beta.) is distributed in proportion to Y.sup.1 weighting in the complementary dimensions. Because S.orgate.T=.OMEGA., S.andgate.T is empty, .OMEGA..sub..alpha..OR right.S.sub..beta., and .OMEGA..sub..alpha..OR right.T.sub..gamma., it follows that .OMEGA..sub..alpha.=S.sub..beta..andgate.T.sub..gamma.. Therefore, the sum of w.sup.S(.OMEGA..sub..alpha.) over all .OMEGA..sub..alpha..OR right.S.sub..beta. must equal w.sup.2(S.sub..beta.).
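The marginal weights of Equation 30 and the reweighting of Equation 31 can be sketched as follows (names hypothetical; fine-grain cells .OMEGA..sub..alpha. are dict keys, and `project` maps a cell to its category .beta. along the dimensions of S):

```python
def reweight(w1, w2, project):
    """Equation 31 sketch: w^S(alpha) = w1(alpha) * w2(S_beta) / w1(S_beta),
    where beta = project(alpha) is alpha's category along the dimensions of S."""
    def marginal(w):
        # Sum fine-grain weights within each category of S (Equation 30 marginals)
        m = {}
        for alpha, weight in w.items():
            m[project(alpha)] = m.get(project(alpha), 0.0) + weight
        return m
    m1, m2 = marginal(w1), marginal(w2)
    return {alpha: weight * m2[project(alpha)] / m1[project(alpha)]
            for alpha, weight in w1.items() if m1[project(alpha)] > 0}

# Fine-grain cells keyed by (city, style); S is the single dimension "city".
w1 = {("c1", "s1"): 0.2, ("c1", "s2"): 0.2, ("c2", "s1"): 0.6}
w2 = {("c1", "s1"): 0.3, ("c1", "s2"): 0.3, ("c2", "s1"): 0.4}
wS = reweight(w1, w2, lambda alpha: alpha[0])
# Within each city the reweighted cells sum to that city's w2 marginal.
```

This reproduces the property stated above: summing w.sup.S over the cells of any category of S recovers the period-two marginal for that category.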
The nature of an analysis of changes in proportions model is such that the categorical interaction models described in the section labeled "Analysis of Effects in Categorical Models" are not relevant. Interaction is always assumed among the dimensions of the set of dimensions whose effect is to be evaluated. Also, only the dimensions to be evaluated enter into the calculation of effects (except when the initial weight on some category of S is zero, when a complementary set of dimensions may be used, as described above).
Analysis of total effects in a pure changes in proportions model may be done as follows. Select a subset of fine grain categories G.OR right.C(.OMEGA.). Let S*=A(S) be the dimensions accessible by any coalition S. Then a worth v(S) for any coalition S may be calculated as is illustrated in Equation 33: v(S)=.SIGMA..sub..alpha..di-elect cons.G[w.sup.S*(.OMEGA..sub..alpha.)-w.sup.1(.OMEGA..sub..alpha.)]. (33) Note G must be a proper subset of C(.OMEGA.) because if G=C(.OMEGA.), v(S)=w.sup.2(C(.OMEGA.))-w.sup.1(C(.OMEGA.)) for any coalition of players S. Often, G might be expected to be a single element of C(.OMEGA.). The game v represents the various contributions to w.sup.2(G) of the separate dimensions as modulated by the access relationship. The value of a player in this game will represent the contribution of the dimensions the player controls. The proportional value will not ordinarily be used for attribution in this type of game because it will be common to find that v(S)<0 for some coalitions S and the proportional value is not defined on such games. The Shapley value or log-linear values are the preferred values to be used in this case.
Consider an example of the application of Equation 33. Let Y.sup.1 and Y.sup.2 represent total new home sales in dollars in two successive years for a state or region. These data are categorized along the dimensions of city, price range, and style of home. Observations of Y.sup.1 and Y.sup.2 are available for every fine-grain combination of categories. Possible choices for G include a specific city, a price range, a style of home, a price range within a specific city or combination of cities, or a price range and home style within a single city. Assume a one-to-one transferable access relationship. The worth associated with any single dimension reflects the change in new home sales implied by average changes along that dimension, and similarly for any pair of dimensions. The worth associated with all three dimensions taken together is the actual change in new home sales for the homes included in G. A value of the game v then attributes total changes among geographic factors, demographic factors, and style preferences for the homes in the set identified by G.
Using pure analysis of changes in proportions in categorical models and explanatory power as a performance measure, Equation 34 illustrates a definition for the worth of a coalition S similar to the R.sup.2 statistic, where, again, S*=A(S): v(S)=.SIGMA..sub..alpha.(w.sup.S*(.OMEGA..sub..alpha.)-{overscore (w)}.sup.2).sup.2/.SIGMA..sub..alpha.(w.sup.2(.OMEGA..sub..alpha.)-{overscore (w)}.sup.2).sup.2, (34) where {overscore (w)}.sup.2 is the average value of w.sup.2(.OMEGA..sub..alpha.). In this case, the game v defined by Equation 34 will provide a representation of the joint contributions of the various dimensions to the total observed variance. In preferred embodiments, the proportional value of the dual of this game will be used to resolve these joint contributions. With reference to the preceding example, Equation 34 is based on the assumption that G=C(.OMEGA.). A value of a game v based on Equation 34 estimates the relative explanatory power of each dimension over all of the data. Should it be desired, Equation 34 could be altered to consider explanatory power over a subset of the data G by altering the sums to be for .alpha..di-elect cons.G.OR right.C(.OMEGA.).
Variance Decomposition of a Variance-Covariance Matrix
Cooperative resolution methods may also be applied directly to a variance-covariance matrix. The matrix may itself be considered a statistical model showing how the variance of a composite entity is related to the variances and covariances of its components. Variance decomposition in this situation is a kind of risk attribution. Let X be a (t.times.n) matrix of n variables N={1, 2, . . . , n} with associated (n.times.n) covariance matrix .SIGMA., where .SIGMA..sub.ij=.SIGMA..sub.ji is the covariance between variables i and j. These variables may represent diverse situations from the returns of individual assets in a portfolio to the failure probabilities of components in a mechanical system under different conditions. Let v be a game of n players where the worth of any coalition S associated with variables S* is their collective variance 1.sub.S'.SIGMA.1.sub.S, where 1.sub.S is a (n.times.1) vector with i.sup.th value equal to one if i.di-elect cons.S* and zero otherwise: v(S)=1.sub.S'.SIGMA.1.sub.S. The dual game w may again be defined as w(S)=v(N)-v(N\S). The variance attributable to any variable may then be determined by applying a value to one of these cooperative games.
Variance decomposition by use of the Shapley value has several desirable properties. The Shapley value of any variable i (in either game v or w) is the sum of all variances and covariances associated with i. Shapley value decompositions are "aggregation invariant": if two variables are combined, the value assigned to the new combined variable will be the sum of the values of the original variables. Use of the Shapley value for variance attribution, however, also has the undesirable property that a variable can be assigned a negative share of the variance. This can happen when at least one of a variable's covariances with other variables is negative.
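These properties can be checked numerically. The sketch below (names hypothetical) computes the exact Shapley value of the game v(S)=1.sub.S'.SIGMA.1.sub.S by enumerating coalitions, and illustrates both that each variable's share equals the sum of its row of the covariance matrix and that a share can be negative when a covariance is strongly negative:

```python
from itertools import combinations
from math import factorial

def shapley_variance_shares(sigma):
    """Exact Shapley value, by coalition enumeration, of the game
    v(S) = 1_S' Sigma 1_S (the collective variance of the variables in S)."""
    n = len(sigma)

    def v(S):
        return sum(sigma[i][j] for i in S for j in S)

    shares = []
    for i in range(n):
        others = [p for p in range(n) if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight |S|!(n-|S|-1)!/n! on i's marginal contribution
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(set(S) | {i}) - v(set(S)))
        shares.append(total)
    return shares

# A valid covariance matrix with one strongly negative covariance.
sigma = [[1.0, -1.2, 0.0],
         [-1.2, 2.0, 0.0],
         [0.0, 0.0, 0.5]]
shares = shapley_variance_shares(sigma)
# shares[i] equals the sum of row i of sigma: [-0.2, 0.8, 0.5].
# Variable 0 receives a negative share, and the shares sum to v(N) = 1.1.
```

Because the Shapley value collapses to row sums for this quadratic game, production code need not enumerate coalitions at all; the enumeration is shown only to make the game-theoretic definition concrete.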
The preferred type of statistical cooperative game and value function depends greatly on the situation being analyzed. Preferred embodiments of the present invention may employ the Shapley value in situations where covariances are predominantly positive and aggregation invariance is considered an important property. Conversely, the proportional value may be preferred when there are significant negative covariances.
This type of variance decomposition may be applied in many circumstances. These include portfolio analysis, where the variables represent individual investments or classes of investments. Another application concerns decomposition of error variances in normal regressions or vector autoregressions (VARs) when the more general approach based on the method of the section "Determining the Worth of a Coalition in a Statistical Cooperative Game" is not desired. In both of the latter cases, as is known in the art, there are standard methods for constructing a variance-covariance matrix associated with a predicted value.
Exemplary Applications
Preferred embodiments of the present invention are further illustrated with a number of specific examples. However, the present invention is not limited to these specific examples. The present invention can be used in a number of other situations in a number of other disciplines not related to these specific examples.
(a) Arbitrage Pricing Theory and Other Explicit Factor Models
The Arbitrage Pricing Theory (APT) of S. Ross ("The arbitrage theory of capital asset pricing," Journal of Economic Theory, v. 13, 1976, pp. 341-360) assumes that the returns of a financial security may be explained by a k-factor linear model. APT models are routinely used in the analysis and forecasting of economic and financial data. The k factors may be identified by a factor analysis method or they may be explicitly identified by an investigator. In the latter case, the APT model is typically estimated with a regression procedure. One application of the present invention concerns the estimates of the percentages of variance accounted for by explicitly determined factors. As is known in the art, such variances are typically reported when a factor analytic method is used to identify factors, but are not currently reported when the factors are explicitly specified.
The present invention may be used to determine the percentages of variance explained by explicitly selected factors in a conventional APT model. In preferred embodiments used for this purpose the factors are the elements of the multivariate statistical model governed by an access relationship. In explicit models constructed with "mimicking portfolios," an intercept term and a one-to-one transferable access relationship is used in the preferred embodiments. Access is understood to allow use of the factors as independent variables in the construction of a submodel as described in the paragraph following the paragraph containing Equation 5. The R.sup.2 of the resulting models is determined, for each S, v(S)=R.sup.2.sub.S, and a dual game is constructed. The proportional value of the dual game provides the estimate of the percentage of explanatory power contributed by an explicit factor. The intercept term may then be interpreted as a measure of "abnormal" performance analogous to "Jensen's alpha." The use of cooperative resolution thus enables an analyst to better compare explicit and derived factor APT models.
A further application to APT models involves the analysis of interaction terms. The k factors of an APT model are linearly independent, but they may still include interaction terms derived from a subset of "primitive" factors. In an APT model with interactions, it may be desirable to attribute the total effects of all interaction factors to the primitive factors. This may be done by specifying a total effects access relationship where the basic independent variables correspond to the primitive factors; the players of the cooperative game each have primary access to a primitive factor; a coalition has access to an interaction factor if and only if all players with primary access to a component of the interaction term are members of the coalition; and access allows use of the corresponding estimated coefficients from the full model. The worth of a coalition is then determined by Equation 7. The Shapley value of the resulting game will then provide a complete attribution of all factor effects to the primitive factors. This procedure computes the Aumann-Shapley prices of the primitive factors. The value of the game may be computed as described by Equations 12 and 13 or Equations 20 and 21.
The explained variance of a k-factor model with interaction factors may also be attributed to its primitive factors. In the preferred embodiments of the present invention the dual of this game is computed according to Equation 11 and the proportional value of the dual game is used to determine the explained variance of the primitive factors.
(b) Style Analysis
The returns-based style analysis method described by W. Sharpe in "Asset allocation: Management style and performance measurement," Journal of Portfolio Management, Winter 1992, pp. 7-19, is an example of a related model. The methods described above may also be applied to style analysis models. Style analysis may be used to estimate the composition of a mutual fund. Sharpe's method of performing style analysis is to regress returns of a mutual fund on a set of benchmarks representing different asset classes. In this regression the coefficients are constrained to be non-negative and to add up to one. As is known in the art, this type of regression may be estimated using quadratic programming techniques.
The interpretation of the regression coefficients in a Sharpe style analysis is that they represent the weights on passive index-type funds associated with the different equity
classes that best approximate the returns process of the mutual fund. The present invention may be used to determine the percentage of returns variability associated with the
different asset classes.
A statistical cooperative game may be constructed from the R.sup.2 coefficients of the Sharpe style model maintaining the constraints that regression coefficients must be non-negative and sum to one; or one or both of these constraints may be removed. In one preferred embodiment of this invention both the nonnegativity and the summation constraint are removed and variance decomposition is presented as a way of interpreting the resulting coefficients. It is also possible to remove only the nonnegativity constraint and set the worth of coalitions with negative R.sup.2 (due to the summation constraint) equal to zero. The proportional value of the dual game is the preferred allocation procedure for variance decomposition of style analysis models.
A style or factor model may be used to construct a passive or idealized model of a financial security as a mixture of benchmarks or mimicking portfolios, as is known to those familiar with the art. Variance decomposition may also be performed on this passive model and the results compared with the variance decomposition of the security itself. This type of comparison can be helpful in understanding the volatility of the financial instrument relative to its benchmarks. Let b be a vector representing the results of a variance decomposition of the passive model of the financial instrument. Let f be a vector representing the results of a variance decomposition on a financial instrument using a set of n benchmarks, and let f* be the normalization of the decomposition such that all components sum to 100%. This normalization may be used so that the variance decomposition of the financial instrument will be properly comparable to the passive model's decomposition. Other approaches are possible. With this approach, when the same benchmarks used to build the passive model are used in the decomposition, the explained variance will be 100%. Then, for each benchmark i, the ratio of the variance share of the financial security compared to the passive benchmark may be constructed. This variance ratio is illustrated in Equation 35.
VR.sub.i=f*.sub.i/b.sub.i. (35)
Variance ratios greater than one indicate that the financial instrument variance associated with a particular benchmark is greater than the variance associated with that benchmark in the passive model. This condition is analogous to a regression coefficient or ".beta." greater than one in a factor model. For some purposes it may be desirable to subtract one from this ratio to obtain an excess variance ratio. In preferred embodiments of the present invention, the proportional value of a statistical cooperative game will be used to effect the variance decomposition. However, the use of any variance decomposition method is claimed as within the scope of the present invention.
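A minimal sketch of the variance ratio of Equation 35 (names hypothetical; f and b are the raw variance shares of the instrument and of its passive model, respectively):

```python
def variance_ratios(f, b):
    """Equation 35: VR_i = f*_i / b_i, where f* rescales the instrument's
    variance shares f so that they sum to one (i.e., to 100%)."""
    total = sum(f)
    f_star = [share / total for share in f]
    return [fs / bi for fs, bi in zip(f_star, b)]

# The instrument overweights the first benchmark's variance relative to
# its passive model, so the first ratio exceeds one.
ratios = variance_ratios([0.6, 0.2], [0.5, 0.5])  # [1.5, 0.5]
```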
The APT, style analysis, and variance ratio procedures described here may be easily utilized in a "rolling window" framework where results are estimated for a number of periods of time based on temporal subsets of the data. Such techniques are well known to those familiar with this art. Additionally, techniques such as (exponentially) weighted or flexible least squares may be used to focus the estimation procedure on a particular point in time.
(c) Manager Performance Attribution
One object of the present invention is to improve the methods by which the performance of managers is analyzed. This is an extension of methods commonly used to analyze the performance of money managers, individuals responsible for investing money; however, the methods may be applied to many other management contexts. These methods are an extension to the accounting approach to performance attribution first developed by G. P. Brinson and N. Fachler in "Measuring non-U.S. equity portfolio performance," Journal of Portfolio Management, Spring 1985, pp. 73-76, incorporated herein by reference, and subsequently developed by many others. These procedures, in general, produce interaction terms which complicate results and may make them more difficult to interpret.
In Brinson and Fachler (1985) the performance of a portfolio or fund manager over a period of time is compared to a benchmark. Performance is broken down into "timing" and "selection" effects across at least one dimension, and, in some cases, two dimensions of interest. Timing refers to the ability to shift investment to "categories" of the economy that will perform better, as reflected in the performance of the associated benchmark, in the subsequent period. Selection refers to the ability to identify a weighting of securities within a category that will do better than the benchmark weighting of securities in that same category in the subsequent period. Typical dimensions in these procedures are choice of industrial sector or country, although other dimensions are possible. These techniques are typically applied to one, or, at most, two dimensions of interest. It is straightforward to adapt techniques already described in this application in order to resolve these statistical joint effects. It is, however, possible to combine the methods of analysis of effects in categorical models and analysis of proportions in categorical models, described above, to enable manager performance attribution across an arbitrary number of dimensions.
Assume that every security in a manager's portfolio is classified along all the dimensions of a dimensional set .OMEGA.. Let w.sup.B(S.sub..beta.) be the benchmark weight of all securities in any S.sub..beta..OR right.C(S) with S.OR right..OMEGA.. Define w.sup.M(S.sub..beta.) to be the manager's weight on securities in S.sub..beta.. Weights are based on market capitalization. Similarly, define r.sup.B(S.sub..beta.) and r.sup.M(S.sub..beta.) to be the benchmark and manager returns associated with these securities. The return on a security or set of securities is the percentage change in their value over the period in question. A benchmark is a standard of comparison. Common benchmarks include indices such as the Standard and Poor's 500 and the Russell 2000. Other benchmarks may be chosen. In particular, a benchmark may be the manager's holdings in the previous time period.
In order to construct a cooperative game to represent contributions of timing and selection among the various dimensions, it is possible to determine a return due to a combination of selection and timing dimensions. Timing skill relates to changes in proportions and may be analyzed by the methods for analyzing changes in proportions, described above. Selection skill is better analyzed by the methods of analysis of categorical interaction, previously described here. Let S be the set of dimensions associated with selection skill and T be the set of dimensions associated with timing skill. An incremental return due to selection in the dimensions of S and timing in the dimensions of T can then be calculated as is illustrated in Equation 36:
Δ^{S,T} = Σ_{Ω_α ∈ C(Ω)} [w^T(Ω_α) r^S(Ω_α) − w^B(Ω_α) r^B(Ω_α)] (36)

where w^B and r^B are the benchmark weights and returns, respectively, w^T is the manager's weight when timing is limited to the dimensions of T, and r^S is the manager's return when skills are limited to the dimensions of S. Equations 31 and 32 may be used to determine w^T(Ω_α), with w^B = w^1 and w^T = w^2. In the preferred embodiment of this method, return r^S(Ω_α) is estimated using a Type I interaction model and is then found as the element of b from Equation 25 corresponding to Ω_α, as defined in Equation 26.
In order to use this model in Method 28, the relation between selection and timing dimensions and the players of the game must be specified. The manager performance attribution model is a fusion of two separate models, one analyzing selection and the other timing. Thus, the same independent categorical variable may appear in two different contexts. The access relationship is understood to cover the categorical independent variables of both models. Let SA(S) be the selection independent variables accessible by a coalition S and let TA(S) be the timing independent variables accessible by S.
When total effect is the performance measure, the preferred embodiment of the present invention defines the worth of a coalition S to be as illustrated in Equation 37:

v(S) = Δ^{SA(S),TA(S)} (37)

When v is defined by Equation 37, the Shapley or log-linear values may be used to allocate the worth of v to individual players in the preferred embodiments of this invention. The proportional value and the powerpoint are not appropriate because it should be expected that v(S) < 0 for many coalitions. Controlled allocation games, described below, provide an alternative approach for determining total effects.
A preferred method of defining a measure of explanatory power for manager performance is to calculate an R² type of measure in the following way. First calculate the total sum of squares for the variations in manager performance as illustrated in Equation 38:
SST = Σ_{Ω_α ∈ C(Ω)} [w^M(Ω_α) r^M(Ω_α) − w̄^M r̄^M]² (38)

where w̄^M and r̄^M are average manager weights and returns. Then, for a coalition S, calculate the sum of squared error resulting from the selection and timing dimensions accessible by S as illustrated in Equation 39:
SSE(S) = Σ_{Ω_α ∈ C(Ω)} [w^M(Ω_α) r^M(Ω_α) − w^{TA(S)}(Ω_α) r^{SA(S)}(Ω_α)]² (39)

Finally, set the worth of S as illustrated in Equation 40:

v(S) = 1 − SSE(S)/SST (40)

In preferred embodiments, the proportional value of the dual of the game defined by Equation 40 will be used to resolve joint effects in the attribution of explanatory power. It is possible that v(S) < 0 for some S. These occurrences should be infrequent and inconsequential. The proportional value may still be used by setting v(S) = ε > 0 for these coalitions.
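As a concrete sketch of Equation 40 with the ε-floor just described (function and argument names are hypothetical; SSE(S) and SST are assumed to be computed elsewhere):

```python
def coalition_worth(sse_of_s, sst, epsilon=1e-6):
    """Worth of a coalition S per Equation 40: v(S) = 1 - SSE(S)/SST,
    floored at a small positive epsilon so that a proportional value
    can still be computed when v(S) would otherwise be non-positive."""
    v = 1.0 - sse_of_s / sst
    return v if v > 0 else epsilon
```

A coalition explaining three quarters of the variation gets worth 0.75; a coalition whose accessible dimensions fit worse than the mean gets the small positive floor instead of a negative worth.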
Equation 37 can be used to define an allocation game in a controlled allocation game, see below, and Equation 40 can be used to define the control game. The integrated proportional control value can then be used to determine manager performance attributions.
Controlled Allocation Games
Some applications of the present invention involve allocations in games where the worth of a coalition may be zero or negative. A proportional value typically cannot be computed under these circumstances. If the zero and negative worths are small in number and small in magnitude in comparison to positive values, it may be reasonable to set zero and negative worths to a small positive number and then use a proportional value. This step will be reasonable when these exceptional worths do not contain important information relevant to the allocation process, but result, instead, from incidental computational or statistical effects. In other circumstances, zero and negative coalitional worths may convey essential information. For example, in the manager performance attribution model described above, negative worths are associated with poor managerial performance along a particular set of dimensions.
It may at times be desirable to incorporate proportional effects into allocations in games with consequential non-positive coalitional worths. This is an example of a broader class of situations that will be called "controlled allocation games." A controlled allocation game is an arrangement, based on two cooperative games, where coalitional worths of the first game influence value allocation in the second game. The first game is called the "control game" and the second game is the "allocation game." Controlled allocation games allow the bargaining power of coalitions in one cooperative game to influence allocations in a second cooperative game.
Statistical cooperative games fit well into the controlled allocation game framework because separate games based on explanatory power and total effects can be associated with the same statistical model. In particular, the present invention illustrates how to introduce proportional bargaining power effects generated from a positive control game based on explanatory power into an allocation game based on total effects. Controlled allocation games may find other applications besides those associated with statistical cooperative games and the present invention is not limited to those described.
FIG. 6 is a flow diagram that illustrates a Method 52 for allocating value among players in a cooperative allocation game in order to resolve joint effects in an allocation problem. At Step 54, a control game and its players are identified. At Step 56, an allocation game and its players are identified. At Step 58, a control relationship between players or coalitions in the control game and players or coalitions in the allocation game is established. At Step 60, a set of coalitions in the control game is selected. At Step 62, a set of worths of the selected set of coalitions in the control game is determined. At Step 64, one or more control functions using the determined set of worths of coalitions in the control game are evaluated to determine a set of values for the control functions. At Step 66, a set of coalitions in the allocation game is selected. At Step 68, a set of worths for the selected set of coalitions in the allocation game is determined. At Step 70, the set of values for the one or more control functions is combined with the determined set of worths for the selected set of coalitions in the allocation game to determine allocations to players in the allocation game.
Method 52 may be applied to virtually any cooperative game. In preferred embodiments of the present invention, Method 52 is a computer-based method and is embodied in a computer program. An allocation problem in the form of such a cooperative game, including the set of players N, is assumed to already be identified to the program. This identification may be a direct result of instructions in the program or may result from the choice of a user of the program. Allocation games utilizing this method may involve resolution of joint effects of a statistical nature, and also those involving risk, cost, or benefit allocation.
In such an embodiment, at Step 54, a control game and its players are identified. At Step 56, an allocation game and its players are identified. These identifications may be a direct result of instructions in the computer program or may result from the choice of a program user from a number of options. Let w represent the control game. Typically, the set of players will be the same in both the control game and the allocation game v. This, however, need not be the case. The control game may have the same or different players as the allocation game. There may be greater or fewer players in the control game than in the allocation game. In one embodiment of the present invention, the allocation game is a statistical cooperative game based on total effects and the control game is based on explanatory power; both utilize the same set of players, multivariate statistical model, and access relationships. However, the present invention is not limited to this embodiment and other types of allocation and control games can also be used to practice the invention.
Typically the control game will be different from the allocation game. It is, however, possible that an allocation game might serve as its own control game. These cooperative games may be stored in the database 16 in memory or in files that may be accessed by the processing system 10. These files may be text files, or files in a format for a particular database or spreadsheet program. These cooperative games may be accessible through a network or internet connection 18.
These cooperative games may exist as a list that enumerates the worths of various coalitions. One technique for constructing such a list when a worth is provided for all coalitions is to let the position in the list correspond to the binary representation of the coalition. For example, position 13 would then correspond to coalition {4, 3, 1} because 13 has the binary representation "1101." A cooperative game may also be stored as a list of pairs, where the first element is a binary representation of the coalition and the second element is the worth of the coalition. A cooperative game may be represented in other ways. For example, the worth of a coalition may be the solution of a mathematical problem.
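A minimal sketch of this indexing scheme (function names hypothetical): each 1-indexed player i contributes bit 2^(i−1), so a coalition's list position doubles as its binary representation:

```python
def coalition_to_index(coalition):
    """Map a coalition (set of 1-indexed players) to its list position
    using the binary-representation scheme described above."""
    return sum(1 << (i - 1) for i in coalition)

def index_to_coalition(index):
    """Recover the coalition from a list position by reading its bits."""
    return {i + 1 for i in range(index.bit_length()) if index >> i & 1}
```

Position 13 (binary "1101") maps to coalition {4, 3, 1}, matching the example in the text.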
At Step 58, a control relationship between players or coalitions in the control game w and the allocation game v is established. A "control relationship" is a mapping from players or coalitions in the control game to players or coalitions in the allocation game. The control relationship may result directly from instructions in the computer program or may result from user choice from a number of options. Typically, in games with the same set of players, this control relationship will be an identity relationship from each player in the control game to the same player in the allocation game. This also implies that any coalition S in the control game corresponds to the same coalition S in the allocation game. In this case it could be said that the power of a player or coalition in the allocation game is based on its power in the control game, where power is used as a general term for the effect of this relationship.
A control relationship may also be a mapping from coalitions in the control game to coalitions in the allocation game. The control game may have additional players not present in the allocation game, and then it will be common for the immediately previously described relationships to hold for all players in the control game that are also in the allocation game. In control games with fewer players than the allocation game, a player in the control game may correspond to a class of players in the allocation game. Many other types of control relationships are possible and the present invention is not limited to those described.
At Step 60, a set of coalitions in the control game is selected. This set may comprise all possible coalitions or only a subset of them. This selection may be determined directly by instructions in the program or may result from user choice from a number of options. Coalitions may be selected by size or generated by a subset of players. Coalitions may be randomly selected. Coalitions may be generated from randomly selected permutation orderings of players. Selected coalitions are mapped by the established control relationship to coalitions in the allocation game.
At Step 62, a set of worths of selected coalitions in the control game is determined. Worths in the control game may be determined by the program by reference to memory locations or files. Alternatively, worths in the control game may be computed based on an externally or internally supplied formula.
At Step 64, one or more control functions using the determined set of worths are evaluated. The one or more control functions may be a value function or other function generating an allocation of the control game. Examples of such functions include the Shapley and proportional values. The choice of the one or more control functions may be determined directly by instructions in the program or may result from user choice from a number of options. The computer program evaluates the control functions. Values for all players in the control game need not be computed under some circumstances.
Alternatively, a control function may determine other properties of the control game that are inputs to determining value allocation in the allocation game. An example of such a control function is an ordered worth product (see Equation 43, below) for a set of coalitions generated from an ordering of players.
At Step 66, a set of coalitions in the allocation game is selected. If the control game w and allocation game v have the same set of players, the same coalitions may be selected. This selection may be determined directly by instructions in the program or may result from user choice from a number of options. Alternately, a different set of coalitions may be selected. Coalitions in the allocation game may alternatively be selected in the same manner as for the control game, as described in Step 60. Coalitions may also be selected by other means.
At Step 68, a set of worths for the selected set of coalitions in the allocation game is determined. At Step 70, the determined set of values for the one or more control functions in the control game is combined with the determined set of worths for the selected set of coalitions in the allocation game to determine allocations to players in the allocation game. The way this combination is effected may be determined directly by instructions in the program or may result from user choice from a number of options.
One example of combining values of control functions in the control game with worths of selected coalitions in the allocation game is when the control function is a value function such as the Shapley or proportional value and a player's value in the control game w is used as the weight of that player in the allocation game v. This weight is then used by a weighted value such as the weighted Shapley value (illustrated in Equation 17) as a value function to determine allocations to players in the allocation game. This embodiment is illustrated in Equations 41 and 42, using a proportional value to determine values in the control game w:

ω = Pr(w) (41)

x = wSh(v, ω) (42)

In Equations 41 and 42, ω is a vector of weights for each player, v is the allocation game, and x is the resulting vector of allocations to players. In this example, the players in the control and allocation games are the same, all coalitions are selected (as the worths of all coalitions are necessary to calculate the proportional value), and the control function is the proportional value.
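A brute-force sketch of the combination step in Equation 42 (names hypothetical; exact enumeration, practical only for small games): the weighted Shapley value is computed as the expected marginal contribution when orderings are drawn front-to-back with probability proportional to weight, the ordering process described later in the text. The weights stand in for a control-game allocation such as Pr(w):

```python
from itertools import permutations

def weighted_shapley(players, v, weights):
    """Weighted Shapley value by exact enumeration: average each player's
    marginal contribution over all orderings, weighting each ordering by
    the probability of drawing it player-by-player in proportion to the
    supplied weights."""
    values = {i: 0.0 for i in players}
    for order in permutations(players):
        # probability of this ordering under weight-proportional draws
        prob, remaining = 1.0, sum(weights[i] for i in players)
        for i in order:
            prob *= weights[i] / remaining
            remaining -= weights[i]
        # accumulate probability-weighted marginal contributions in v
        coalition, prev = frozenset(), v(frozenset())
        for i in order:
            coalition = coalition | {i}
            worth = v(coalition)
            values[i] += prob * (worth - prev)
            prev = worth
    return values
```

With equal weights every ordering is equally likely and the result reduces to the ordinary Shapley value; the allocations always sum to v(N) because each ordering's marginal contributions telescope to v(N).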
Many variations on this example are possible and the invention is not limited to this embodiment. Many different allocation functions may be substituted for the proportional value in Equation 41. Transformations of the allocations ω may be used as weights in Equation 42. Other weighted value functions known to those familiar with the art may be used in place of the weighted Shapley value, for example the weighted proportional value, or NTU versions of these values.
One preferred embodiment of the present invention, called the "integrated proportional control" game, is based on the representation of the proportional value as a weighted sum of marginal contributions. B. Feldman, "A dual model of cooperative value," 2002, Lemma 2.9 shows that the proportional value has a representation as a weighted sum of marginal contributions over all possible player orderings. The weight for each ordering in this sum is based on the ordered worth product for that ordering. Let r be an ordering of players and let S_m^r be the coalition formed by the first m players in the ordering r. Then a formula for calculating an Ordered Worth Product ("OWP") for ordering r in an allocation game v is illustrated in Equation 43:
OWP^r(v) = Π_{m=1}^{n} v(S_m^r) (43)

where the product operator Π indicates the product of the coalitional worths v(S_m^r) as m increments from one to n.
A proportional value of a player i according to this weighted marginal contribution representation may then be illustrated by Equation 44:

Pr_i(v) = R(N, v) Σ_{r ∈ R^Ω(N)} [OWP^r(v)]^{−1} (v(S_{r(i)}^r) − v(S_{r(i)}^r \ i)) (44)

where R(N, v) is the proportional or ratio potential of the grand coalition in game v, R^Ω(N) is the set of all orderings of the players in N, and S_{r(i)}^r is the coalition formed by player i and all players before player i in the ordering r. R(N, v) may be calculated according to Equation 14; however, in practice, this calculation is not necessary as this quantity can be inferred if values for all players are to be calculated. In Equation 44, the sum operator Σ indicates a sum over all orderings of the players of N. Finally, the difference (v(S_{r(i)}^r) − v(S_{r(i)}^r \ i)) is the marginal contribution of player i in the ordering r. The marginal contribution of player i is the worth of the coalition that contains player i and all players before it, minus the worth of the coalition that contains only the players before player i. The inverse (i.e., the −1 power) of the ordered worth product for the ordering r is the relative weight applied to a player's marginal contribution in that ordering. The sum over all orderings is a player's relative proportional value. A player's actual proportional value is this relative value times the ratio potential of the grand coalition.
This representation of the proportional value is easily adapted to the controlled allocation game framework described by Method 52. Assume for simplicity that the control and allocation games have the same set of players and the default identity control relationship is used. Select all coalitions in both the control and allocation games in Steps 60 and 66. Use ordered worth products of player orderings in the control game as the control functions in Step 64, and determine their values. The ratio potential of the grand coalition may also be considered a control function and may be calculated. Combining the control functions with the worths of selected coalitions in Step 70 to compute the integrated proportional control value for a player may be done by computing the sum over all orderings of the product of the ratio potential of the grand coalition, the inverse of the ordered worth product for ordering r in the control game w, and the marginal contribution of the player in the ordering r in the allocation game v. The computation of the integrated proportional control value for a player i is illustrated by Equation 45:
PC_i(v, w) = R(N, w) Σ_{r ∈ R^Ω(N)} [OWP^r(w)]^{−1} (v(S_{r(i)}^r) − v(S_{r(i)}^r \ i)) (45)

where R(N, w) is the ratio potential of the grand coalition for control game w, R^Ω(N) is the set of all orderings of the players in N, S_{r(i)}^r is the coalition formed by player i and all players before player i in the ordering r, and v(S_{r(i)}^r) is the worth of that coalition in the allocation game. Note that R(N, w) is effectively a normalizing factor. R(N, w) can be solved for by noting that the sum of all player values must equal v(N). The difference between Equations 44 and 45 is that the ratio potential of the grand coalition and all ordered worth products are based on the control game w.
There are other representations of the proportional value as a weighted sum of marginal contributions and the present invention is not limited to those described. See, for example, B. Feldman, "A dual model of cooperative value," 2002, Corollary 2.1. Such representations may similarly be adapted to represent the integrated proportional control value as a sum involving weights determined in a control game and marginal contributions determined in an allocation game.
In one preferred embodiment of the present invention, the control game is a statistical cooperative game using explanatory power as a performance measure; the allocation game is a statistical cooperative game using total effects as a performance measure; the control function is the proportional value; and the combination of the control function and worths of the allocation game is effected by use of the weighted Shapley value or another weighted value. The weight assigned to a player and used in the calculation of the weighted value in the allocation game is that player's proportional value in the control game.
Many variations on Method 52 are possible. One variation is described in the section "Approximation Games," below.
Approximation Games
The number of computational steps needed to compute cooperative game value functions such as the Shapley and proportional values increases quickly with the number of players in a game. A game with n players has 2^n − 1 coalitions. Computing values by means of potential functions such as exemplified in Equations 12 and 14 requires at least 2^n − 1 evaluations of these potential functions. Computing an exact value for a game with 40 players may then involve determining more than a trillion coalitional worths and executing more than a trillion functional evaluations. Approximation methods can greatly reduce the computational resources required to estimate a value function when exact results are not necessary. The following methods allow computation of approximate values. These methods are useful not only for large statistical cooperative games, but also for large (but finite) cooperative games generally.
FIG. 7 is a flow diagram illustrating a Method 72 for approximating a value function for players in a cooperative game v based on a large number of players n and representing an allocation problem. At Step 74, a measure of precision is selected. At Step 76, a desired precision for estimated player values is determined. At Step 78, a collection of orderings from a set of possible permutations of player orderings is selected. At Step 80, at least one intermediate value function of coalitional worths generated for each selected ordering is computed. At Step 82, a precision of approximations of values for players is periodically computed to determine if more player orderings should be generated to obtain a more precise estimate of values for players. At Step 84, a value approximation for determining allocations to players is computed when a desired degree of precision is reached or a selected computational limit is exceeded.
Method 72 is an illustrative embodiment. However, the present invention is not limited to such an embodiment and other embodiments can also be used to practice the invention.
In such an embodiment, at Step 74, a measure of precision is selected. A standard error of the value approximation is a typical measure of precision. In some cases other or additional measures may be selected. Mean absolute deviation is an example of an alternative measure of precision. Mean absolute deviation is less sensitive to the effect of realizations that are far from the mean. Kurtosis is an example of an additional measure of precision that may be useful in assessing the quality of an approximation. The measure of precision may be determined by the procedure embodying this method or be selected by the user of the procedure.
At Step 76, a desired precision for approximated player values is determined. This precision may be determined by the procedure embodying this method or may be selected by the user of the procedure. The desired level of precision may be for a particular player, the minimum over all players whose value is to be approximated, or for some other criterion such as an average standard error of all values to be approximated.
At Step 78, a collection of orderings from a set of possible permutations of player orderings is selected. The first time Step 78 is executed, t_0 orderings are selected. The initial number of orderings t_0 may be a function of the measure of precision and the desired level of precision. It may also be a function of other factors such as the number of players in the game.
An "ordering" of the players is a list of the players giving each player a unique position in the ordering. Two orderings are the same when every player has the same position in each ordering. There are n! possible orderings of the players. The set of all orderings of a coalition N is represented as R^Ω(N). A game of 15 players generates more than a trillion unique orderings. In large games, t_0 will be much smaller than n!.
In one of the preferred embodiments of this invention, orderings are generated with the use of a random number generator. Methods for random number generation are well known to those familiar with the art. A player i may have an equal likelihood of appearing in any position in such an ordering, or some positions may be more likely than others. In particular, in calculating the weighted Shapley value, the probability of a player i appearing at any point in an ordering may be calculated as the ratio of its weight to the sum of the weights of all unordered players. As described below, stratified sampling of orderings may sometimes be desirable, with some subsets of the set of all orderings R^Ω(N) more likely than others.
Alternatively, a list of orderings may be predetermined or may be described in mathematical form. Orderings may be selected from this list, either randomly or by a deterministic rule.
Let R*(N) be the collection of orderings used in the approximation process and assume the sampling process is not stratified. Thus R*(N) ⊆ R^Ω(N). If Step 78 is executed only once then R*(N) will contain t_0 orderings. Every time Step 78 is executed in the approximation process, more orderings are added to R*(N).
At Step 80, at least one intermediate value function of coalitional worths generated by each selected ordering in R*(N) is computed. The set of coalitions generated by an ordering r of the n players is the coalitions S_i^r, where i varies from 1 to n, composed of the first i players of the ordering r. Intermediate value functions are used in the approximation process. For example, in the case of computing an approximation of the Shapley value, the marginal contribution of at least one player with respect to a selected ordering is calculated. The computed values of the intermediate value functions may be stored in memory or calculations based on these values may be stored. Storing the actual computed values may use considerable memory in games with many players. It may thus be preferable to instead save in memory only the sum or other aggregate functions of these intermediate value functions.
Additionally, any functions required for the computation of precision statistics may also be computed at Step 80. For example, in computing a standard error for a value approximation, a squared value of an intermediate value function may be calculated.
At Step 82, a precision of approximations of values for players is periodically computed to determine if more player orderings should be generated to obtain a more precise estimate of values for players. If the precision is equal to or greater than the desired precision, Step 84 is executed immediately.
If the precision is less than the desired precision, other considerations may still lead to passing from Step 82 to Step 84. For example, there may be a limit such that, if the desired precision is not reached after a certain number of orderings have been evaluated, after using a certain amount of computer time, or after some other measure of cost is exceeded, execution passes to Step 84 although the desired precision has not been achieved.
If the precision is less than the desired precision and no iteration limit has been exceeded, Steps 78 and 80 may be executed again in a loop and the precision again determined at Step 82. This loop may be repeated until the desired precision is reached or an iteration limit is exceeded.
The number of additional orderings generated each time Step 80 is executed may vary in the process. The number of additional orderings may be conditioned on factors such as the difference between the estimated and desired precision. For example, if increasing precision corresponds to a lower value of the precision statistic and precision is approximately inversely proportional to the square root of the total number of orderings evaluated, then an estimate of the required number of additional orderings t_A to be evaluated is illustrated by Equation 46:

t_A = t [(p_M/p_D)² − 1] (46)

where t is the number of orderings already evaluated, p_M is the measured precision, and p_D is the desired precision. If t_A exceeds an iteration limit the number of additional orderings selected may be reduced or execution can pass to Step 84.
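Equation 46 reduces to a one-line helper (a sketch; symbol names follow the text):

```python
def additional_orderings(t, p_measured, p_desired):
    """Estimate the extra orderings t_A needed to reach a desired
    precision (Equation 46), assuming the precision statistic shrinks
    as 1/sqrt(number of orderings): t_A = t * ((p_M / p_D)**2 - 1)."""
    return max(0, round(t * ((p_measured / p_desired) ** 2 - 1)))
```

Halving a standard error from 0.02 to 0.01 after 100 orderings calls for roughly 300 more, since quadrupling the sample halves the standard error.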
At Step 84, allocations to players in the cooperative game are determined based on the intermediate value functions generated for each ordering. Final precision statistics may also be calculated. In this step the computed values of the intermediate value functions, or aggregate functions based on the intermediate value functions, are used to approximate the value function of the game. When approximating the Shapley and weighted Shapley values, an average of the intermediate value functions may be calculated. The Shapley value for a player may be estimated as the average marginal contribution over all selected player orderings.
An approximation of the Shapley value for a large game may be computed using Method 72 as follows. Select standard error as the precision statistic and select a desired precision at Steps 74 and 76. At Step 78, determine the initial number of random orderings to be generated and generate the random orderings. Random orderings may be generated using a permutation algorithm. One such algorithm is to generate a uniformly distributed random number for each player and order (i.e., sort) the players according to these random values. Methods of sorting are well known to those familiar with the art. At Step 80, for each such ordering r, calculate a marginal contribution M_i^r(v) of each player i in game v whose value is to be estimated. The calculation of the marginal contribution of a player i according to a player ordering is illustrated in Equation 47:

M_i^r(v) = v(S_{r(i)}) − v(S_{r(i)−1}) (47)

where v refers to the specific cooperative game, S_{r(i)} is the coalition containing the player i and all the players before i in the ordering r, and S_{r(i)−1} is the coalition of players coming before i in the ordering r. These intermediate value functions, the marginal contributions M_i^r(v), may be stored separately in memory or may be summed for each player, so that only the sum need be stored in memory. Also at Step 80, calculate the squared value of calculated marginal contributions M_i^r(v) and store these values or their sum.
At Step 82, calculate the standard error of the approximation. For the Shapley value of a player i, this is the standard error of the mean, which may be calculated according to the formula illustrated by Equation 48:

s.e._i = σ̂_i/√t, where σ̂_i² = (1/t) Σ_{r ∈ R*(N)} (M_i^r(v) − M̄_i(v))² (48)

and t is the number of orderings in R*(N). If the precision is less than the desired precision, i.e., if the standard error is greater than the desired standard error, and the iteration limit has not been reached, then Steps 78 and 80 are executed again.
At Step 84, an average marginal contribution for each selected player i, M̄_i(v), is computed. This is an unbiased estimate of the Shapley value.
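The Steps 78-84 loop just described can be sketched in Python. This is a hypothetical illustration, not the patented implementation; the function name, batch size, and stopping threshold are made up:

```python
import random
import statistics

def shapley_monte_carlo(players, v, target_se=0.05, batch=500, max_rounds=50):
    # Step 78: generate random orderings; Step 80: record marginal
    # contributions; Step 82: check the standard error of each mean;
    # Step 84: return the average marginal contribution per player.
    samples = {p: [] for p in players}
    for _ in range(max_rounds):
        for _ in range(batch):
            order = random.sample(players, len(players))  # uniform random ordering
            coalition, worth = [], v(frozenset())
            for p in order:
                coalition.append(p)
                new_worth = v(frozenset(coalition))
                samples[p].append(new_worth - worth)      # M_i^r(v), Eq. 47
                worth = new_worth
        se = max(statistics.stdev(s) / len(s) ** 0.5 for s in samples.values())
        if se < target_se:                                # Step 82 precision check
            break
    return {p: statistics.fmean(s) for p, s in samples.items()}
```

Because the marginal contributions along any single ordering always sum to v(N) - v(∅), the estimates automatically satisfy efficiency, whatever the sample size.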
To compute an approximation of the weighted Shapley value using Method 72, the procedure for calculating the Shapley value may be modified by using a weighted random ordering procedure at Step 78, such that each ordering r is consistent with weights w. An example of an iterative algorithm to generate such a random ordering is to divide the unit interval into contiguous segments that are assigned to each unordered player. Initially all players are unordered. The length of the interval assigned to each unordered player is proportional to its relative weight. Then a random number uniformly distributed between 0 and 1 is generated. The player associated with the interval that contains the random number is selected as next in the ordering. The algorithm is repeated until only a single player is left; this player is last in the ordering. The expected values of the average marginal contributions of a player i resulting from this modification of Step 78 are equal to i's weighted Shapley value with weights w.
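The interval-segmentation procedure for drawing one weighted random ordering can be sketched as follows (illustrative only; the function and variable names are my own):

```python
import random

def weighted_random_ordering(weights):
    # weights: dict mapping player -> positive weight w_i.
    # Repeatedly pick the next player among the unordered players with
    # probability proportional to its weight (the unit-interval segments
    # described in the text), until one player remains; it goes last.
    remaining = dict(weights)
    ordering = []
    while len(remaining) > 1:
        u = random.uniform(0, sum(remaining.values()))
        acc = 0.0
        for player, w in remaining.items():
            acc += w
            if u <= acc:
                ordering.append(player)
                del remaining[player]
                break
    ordering.extend(remaining)   # the single player left is last
    return ordering
```

With equal weights this reduces to a uniform random permutation, so the same sampler covers both the Shapley and weighted Shapley cases.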
A powerpoint of a game v (illustrated in Equation 18) may be approximated by the following procedure. First approximate the Shapley value of v. Then approximate the weighted Shapley value using, as weights, players' Shapley values. Next use players' weighted Shapley values as weights in another approximation of the weighted Shapley value. Players' values are then updated and used as weights in succeeding approximations until the difference between each player's weight and its value is sufficiently small. No problems with the convergence of this iterative approximation process have been observed in positive weakly monotonic games. A game v is positive if v(S) > 0 for all coalitions S. And v is weakly monotonic if S ⊆ T implies v(S) ≤ v(T).
The approximation process may be speeded up by raising the precision of estimates of the successive approximations of the weighted Shapley value toward a desired final precision
rather than making all approximations at this level of precision. The final approximation precision for the weighted Shapley values should ordinarily be greater than the precision
desired for the approximation of the powerpoint.
Approximation of the proportional value is facilitated by the random order representation illustrated by Equation 44. This relationship may be interpreted as showing that the proportional value is a type of expected value, in the statistical sense of this term. An approximation of the proportional value may be computed using Method 72 as follows. Assume again at Step 74 that standard error is the precision statistic. Equally weighted orderings are used in Step 78, as with the Shapley value. At Step 80, for each ordering r, marginal contributions M_i^r(v) are calculated.
Using this embodiment, a value is calculated for all players. Additionally, the ordered worth product for ordering r is calculated. The ordered worth product OWP(r, v) is the product of the worths in game v of all coalitions formed as players are sequentially added to a coalition according to their position in the ordering r. A formula for the ordered worth product is illustrated in Equation 43 in the section above on controlled allocation games. Then create the weighted marginal contributions WM_i^r(v) for each player i and ordering r, as illustrated in Equation 49.
WM_i^r(v) = M_i^r(v) / OWP(r, v)   (49)

These weighted marginal contributions WM_i^r(v) are the intermediate value functions used to approximate the proportional value.
The proportional value may be approximated from these intermediate value functions as follows. Sum the weighted marginal contributions for a player i as illustrated in Equation 50:

SWM_i(v) = Σ_{r ∈ R*(N)} WM_i^r(v)   (50)

where the summation is over all t orderings in the selected collection of orderings R*(N). Only the accumulating weighted sums need be stored in memory.
An estimated proportional value of a player i, EstPV_i, is its proportional share of the worth of the grand coalition according to weighted marginal contributions, as illustrated in Equation 51:

EstPV_i(v) = v(N) · SWM_i(v) / Σ_{j ∈ N} SWM_j(v)   (51)

An approximation of the standard error of the approximation may be computed at Step 82, as illustrated in Equation 52, where Std is understood to represent the standard error function:
Std(EstPV_i(v)) ≈ v(N) · Std(SWM_i(v)) / Σ_{j ∈ N} SWM_j(v)   (52)

In order to compute the standard deviation of WM_i(v), the squared values of the WM_i^r(v) terms for all players are computed at Step 80 and their sum stored. Note that this is an approximation of the standard error. The exact standard error may be computed by determining the variance of the denominator of Equation 52 and then utilizing the approach for computing the variance of a ratio of random variables illustrated in Equation 54. In order to compute this sample variance, the sample covariances cov(WM_i, WM_j) are computed for all pairs i and j. In order to do this, the products WM_i·WM_j are computed for all pairs i and j at Step 80 and their sums are stored. At Step 84, Equation 51 provides the approximation of the proportional value.
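Putting this procedure together, here is a sampling sketch (my own illustration, not the patented code). It assumes the per-ordering weight is the inverse ordered worth product, which is what the surrounding text describes, and it requires v(S) > 0 for all nonempty coalitions S so the products are well defined:

```python
import random
from math import prod

def proportional_value_mc(players, v, n_orderings=2000):
    wm = {p: 0.0 for p in players}           # accumulating sums SWM_i(v)
    for _ in range(n_orderings):
        order = random.sample(players, len(players))
        worths, coalition = [], []
        for p in order:
            coalition.append(p)
            worths.append(v(frozenset(coalition)))
        owp = prod(worths)                    # ordered worth product for this ordering
        before = v(frozenset())
        for p, after in zip(order, worths):
            wm[p] += (after - before) / owp   # marginal contribution / OWP
            before = after
    grand = v(frozenset(players))
    total = sum(wm.values())
    # Each player's proportional share of v(N) by weighted marginal contributions.
    return {p: grand * wm[p] / total for p in players}
```

By construction the returned shares sum to v(N) exactly; only the accuracy of the individual shares depends on the number of sampled orderings.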
An alternative embodiment for approximating the proportional value that does not require calculation of values for all players involves estimating the ratio potentials necessary to calculate the proportional value according to Equation 15. To estimate the proportional value of a player i, both R(N, v) and R(N\i, v) are estimated.
In order to estimate R(N\i, v) for a player i, ordered worth products are also computed at Step 80 without the inclusion of player i. Given an ordering r of the n players, let r_{-i} be the ordering of n-1 players formed by removing player i from r. Then the calculation of the ordered worth product with player i removed from the ordering is illustrated by Equation 53:

OWP(v, r_{-i}) = ∏_{j ∈ N\i} v(S_{r_{-i}}(j))   (53)

where S_{r_{-i}}(j) is the coalition containing j and all the players before j in the ordering r_{-i}. Ordered worth products OWP(v, r_{-j}) are computed at Step 80 for all players j whose proportional values are to be estimated.
Let Θ(S) be the collection of inverses of the ordered worth products for coalition S generated by the t orderings in R*(N) at any point in the approximation process for a game with n players. If S = N then this is the collection of all inverses of ordered worth products OWP(v, r)^{-1}. If S = N\i then this is the collection of all inverses of ordered worth products OWP(v, r_{-i})^{-1}. Let Var be the variance function and Avg be the average function. Then an estimate of the standard error of the approximation that may be computed at Step 82 is illustrated in Equation 54.

[Equation 54, a variance-of-a-ratio formula in terms of Var(Θ(S)) and Avg(Θ(S)), is not recoverable from this copy.]
At Step 84, the expected value of the potential R(N, v) is computed as the harmonic mean of the ordered worth products OWP(r, v), as illustrated in Equation 55:

R(N, v) = t / Σ_{r ∈ R*(N)} OWP(v, r)^{-1}   (55)
The expected value of the potential R(N\i, v) is then the harmonic mean of the ordered worth products OWP(v, r_{-i}) multiplied by the number of orderings t. This correction is necessary because, in expectation, for any ordering r and any player i, the ordering r_{-i} will be t times as likely to occur as the ordering r. Thus the calculation of the potential R(N\i, v) at Step 84 is illustrated by Equation 56.

[Equation 56 is not recoverable from this copy.]
The estimated proportional value for player i in the game v may then be computed as illustrated in Equation 57:

EstPV_i = R(N, v) / R(N\i, v)   (57)
If estimated values are computed for all players, these values may be normalized by dividing by the sum of the estimated proportional values and multiplying by the worth of the grand coalition. Calculation of the estimated ratio potentials at Step 84 may be based on intermediate value functions stored in memory. Calculation may also be done incrementally by computing the inverses of the relevant ordered worth products and accumulating their sums. Also at Step 84, squared values of ordered worth products for the computation of variances and cross products for the computation of covariances are computed and summed.
Approximations for Integrated Proportional Control Games
Two games may have a control game relationship as described in the section "Controlled Allocation Games," above, and one game or both games may be too large to compute values for its players exactly. An approximation of values for an integrated proportional control game may be obtained using Method 52, illustrated in FIG. 6, and Method 72 in the following fashion. Let w be the control game and let v be the allocation game identified at Steps 54, 56 of Method 52. Assume both games have the same set of players and that the control relationship of Step 58 is the default identity relationship. The coalitions identified at Steps 60 and 66 are determined by the orderings selected at Step 78 of Method 72. The worths determined at Steps 62 and 68 of Method 54 are also determined by the selected orderings. The control functions evaluated at Step 64 of Method 54 and the intermediate value functions evaluated at Step 80 of Method 72 are the weighted marginal contributions in cooperative game v with respect to orderings r and ordered worth products in w with respect to orderings r, as illustrated in Equation 58:
WM_i^r(v, w) = M_i^r(v) / OWP(w, r)   (58)
Then at Step 70 of Method 54 and Steps 82 and 84 of Method 72, the sum of weighted marginal contributions SWM_i(v, w) is calculated as illustrated by Equation 59:

SWM_i(v, w) = Σ_{r ∈ R*(N)} WM_i^r(v, w)   (59)

where, again, the summation is over all t orderings in the collection of orderings R*(N).
The computation of precision statistics is very similar to the computation of precision statistics for the approximation of the proportional value. For example, to estimate a standard error in Step 82, substitute WM_i(v, w) for WM_i(v) and SWM_i(v, w) for SWM_i(v) in Equation 52.
The estimated integrated proportional control value of a player i, EstIPC_i, determined in Step 84 is its proportional share of the worth of the grand coalition according to weighted marginal contributions, as illustrated in Equation 60:

EstIPC_i = v(N) · SWM_i(v, w) / Σ_{j ∈ N} SWM_j(v, w)   (60)

where the sum is over all players j in the game.

Reliability of Accuracy Statistics
The reliability of standard error statistics as a measure of the accuracy of the approximation of the proportional and integrated proportional control values depends on the distribution of weighted marginal contributions. This distribution is greatly influenced by the distribution of ordered worth products. In particular, as the ratio of the mean ordered worth product to the minimum ordered worth product gets large, the very small ordered worth products have an increasingly disproportionate effect on the approximation, as is made evident by Equation 49. This is because the inverse of the ordered worth product is introduced into this sum through the relationship illustrated in Equations 50 and 51. Accurate approximation of the proportional value in these circumstances depends on a balanced representation of these orderings in the overall sample of permutations. In games with sufficiently many players and sensitivity to orderings with very small ordered worth products, it may be desirable to sample separately from this population. In this case, weighted marginal contributions WM_i^r(v) must additionally be weighted by a sample selection weight sw_r. These weights are used to compute weighted means and weighted standard errors; the formulas for such are well known to those familiar with the art. In a stratified random ordering procedure, sample selection weights may be set so that the probability of selecting any ordering times its sample selection weight is a constant.
If the sample standard error is greater than the selected desired level of precision for the value of any player, Steps 78 to 82 may be repeated by generating another collection of orderings, calculating intermediate value functions based on these orderings, and computing new estimates for the sample standard error of players' values based on the cumulative number of orderings evaluated. This iterative process may continue until the desired level of precision is reached, at which point players' estimated values may be computed.
Estimates of precision statistics such as the standard error are often stable relative to sums or averages, as is known to those familiar with the statistical arts. It is thus possible that the calculation of some or all of the precision statistics for intermediate value functions may be discontinued before the approximation process is completed. With reference to Equation 58, the standard error of the approximation of the proportional value for a player i may be based on updated numbers of orderings used and updated sums of weighted marginal contributions, but without updating the standard error of the sums of the marginal contributions. The sum in the denominator of Equation 58 increases approximately linearly with the number of orderings t. Thus, the true standard deviation of the approximation of the proportional value must decline approximately with the square root of the number of orderings used. The use of such a procedure, however, will not save a dramatic amount of computer time. Further, in situations where the distribution of ordered worth products is sufficiently skewed toward zero, discontinuing the computation of precision statistics could lead to considerable overestimation of the precision of the approximation.
The methods and system described herein help solve some of the problems associated with resolving joint effects in statistical analysis. The present invention can be used to construct statistical cooperative games and use cooperative game theory to resolve statistical joint effects in a variety of situations. The methods may be applicable to other types of joint effects problems, such as those found in engineering, finance, and other disciplines.
A number of examples, some including multiple equations, were used to illustrate aspects of the present invention. However, the present invention is not limited to these examples or equations, and other examples or equations can also be used with the present invention.
It should be understood that the programs, processes, methods and system described herein are not related or limited to any particular type of computer or network system (hardware or software), unless indicated otherwise. Various types of general purpose or specialized computer systems may be used with or perform operations in accordance with the teachings described herein.
In view of the wide variety of embodiments to which the principles of the present invention can be applied, it should be understood that the illustrated embodiments are exemplary only, and should not be taken as limiting the scope of the present invention. For example, the steps of the flow diagrams may be taken in sequences other than those described, and more or fewer elements may be used in the block diagrams.
The claims should not be read as limited to the described order or elements unless stated to that effect. In addition, use of the term "means" in any claim is intended to invoke 35 U.S.C. § 112, paragraph 6, and any claim without the word "means" is not so intended. Therefore, all embodiments that come within the scope and spirit of the following claims and equivalents thereto are claimed as the invention.
* * * * *
Sage 2.10.4.rc0: dsage/dist_functions/dist_factor.py timeout issue with -long
Reported by: mabshoff Owned by: yi
Priority: blocker Milestone: sage-3.0.1
Component: doctest coverage Keywords:
Cc: Merged in:
Authors: Reviewers:
Report Upstream: Work issues:
Branch: Commit:
Dependencies: Stopgaps:
sage -t -long devel/sage/sage/dsage/dist_functions/dist_factor.py
File "dist_factor.py", line 29:
sage: f.wait(timeout=60) # long time
Exception raised:
Traceback (most recent call last):
File "/scratch/mabshoff/release-cycle/sage-2.10.4.rc0/local/lib/python2.5/doctest.py", line 1212, in __run
compileflags, 1) in test.globs
File "<doctest __main__.example_0[5]>", line 1, in <module>
f.wait(timeout=Integer(60)) # long time###line 29:
sage: f.wait(timeout=60) # long time
File "/scratch/mabshoff/release-cycle/sage-2.10.4.rc0/local/lib/python2.5/site-packages/sage/dsage/dist_functions/dist_function.py", line 183, in wait
File "/scratch/mabshoff/release-cycle/sage-2.10.4.rc0/local/lib/python2.5/site-packages/sage/dsage/dist_functions/dist_function.py", line 179, in handler
raise RuntimeError('Maximum wait time exceeded.')
RuntimeError: Maximum wait time exceeded.
File "dist_factor.py", line 30:
sage: f.done # long time
File "dist_factor.py", line 32:
sage: print f # long time
Factoring "42535295865117307932921825928971026431"
Prime factors found so far: [31, 601, 1801, 269089806001, 4710883168879506001]
Factoring "42535295865117307932921825928971026431"
Prime factors found so far: [31, 601, 1801]
1 items had failures:
3 of 8 in __main__.example_0
***Test Failed*** 3 failures.
For whitespace errors, see the file .doctest_dist_factor.py
While the above doctest usually takes only about 25 seconds of wall time, when I do parallel testing it times out every couple of doctest runs. Raising the limit for this long doctest to something larger might be a solution.
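For reference, the timeout in the traceback comes from a handler that raises once the wait exceeds its limit. A rough, hypothetical sketch of that mechanism (not the actual dist_function.py code) using a Unix alarm signal:

```python
import signal

def wait_with_timeout(poll, timeout):
    # Install a SIGALRM handler that aborts the wait, arm a timer, then
    # block until either poll() reports completion or the alarm fires.
    def handler(signum, frame):
        raise RuntimeError('Maximum wait time exceeded.')
    old = signal.signal(signal.SIGALRM, handler)
    signal.setitimer(signal.ITIMER_REAL, timeout)
    try:
        while not poll():
            signal.pause()          # sleep until any signal arrives
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)   # cancel the timer
        signal.signal(signal.SIGALRM, old)        # restore old handler
```

(SIGALRM and setitimer are Unix-only; on other platforms a polling loop against a deadline would be needed instead.) Raising the doctest limit then just means arming the timer with a larger value.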
Change History (4)
Interesting. How do I turn parallel testing on to try and reproduce locally? It would be better to see why it's taking more than 60 seconds than to simply raise the timeout. doctests that take 60
seconds (even for long time) are probably pretty bad.
Pinging Michael...
Is this still an issue? I still don't understand what you mean by "parallel testing" or how to go about reproducing this.
Replying to yi:
Pinging Michael...
Is this still an issue? I still don't understand what you mean by "parallel testing" or how to go about reproducing this.
Yes, it still regularly happens. Run "sage -tp 10 devel/sage/sage" on sage.math to trigger this. I am seeing it regularly with 3.0.alpha[0-3].
• Resolution set to fixed
• Status changed from new to closed
I have not seen this for several dozen "-tp 8 -long" on sage.math. Since I was the one who was able to trigger this reliably I am considering this fixed. | {"url":"http://trac.sagemath.org/ticket/2539","timestamp":"2014-04-16T07:13:16Z","content_type":null,"content_length":"23200","record_id":"<urn:uuid:1b404156-1e6e-45f4-8faa-ccb86cf2e4a6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Evaluate an Improper Integral that Is Vertically Infinite
Improper integrals are useful for solving a variety of problems. A vertically infinite improper integral contains at least one vertical asymptote. Vertically infinite improper integrals are harder to recognize than those that are horizontally infinite. An integral of this type contains at least one vertical asymptote in the area that you're measuring. (A vertical asymptote is a value of x where f(x) equals either ∞ or –∞.) The asymptote may be a limit of integration or it may fall someplace between the two limits of integration.
Don’t try to slide by and evaluate improper integrals as proper integrals. In most cases, you’ll get the wrong answer!
There are two cases where you’ll need to handle vertically infinite improper integrals.
Handling asymptotic limits of integration
Suppose that you want to evaluate the following integral:

∫₀¹ x^(–1/2) dx

At first glance, you may be tempted to evaluate this as a proper integral. But this function has an asymptote at x = 0. The presence of an asymptote at one of the limits of integration forces you to evaluate this one as an improper integral.
1. Express the integral as the limit of a proper integral:

lim(c→0⁺) ∫_c^1 x^(–1/2) dx

Notice that in this limit, c approaches 0 from the right — that is, from the positive side — because this is the direction of approach from inside the limits of integration. (That's what the little plus sign in the limit means.)
2. Evaluate the integral:

This integral is easily evaluated as

∫_c^1 x^(–1/2) dx = 2x^(1/2) |_c^1 = 2 – 2√c

using the Power Rule.
3. Evaluate the limit:

At this point, direct substitution provides you with your final answer:

lim(c→0⁺) (2 – 2√c) = 2 – 0 = 2
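The three-step limit process can be checked numerically. Assuming the worked example is ∫₀¹ x^(–1/2) dx (the formula images are missing from this copy, but the final answer of 2 and the Power Rule step point to it), the proper integral from c to 1 is 2 – 2√c, and letting c shrink toward 0 shows the value approaching 2:

```python
import math

def proper_integral(c):
    # Step 2: by the Power Rule, the antiderivative of x**(-1/2) is
    # 2*x**(1/2), so the proper integral from c to 1 is 2 - 2*sqrt(c).
    return 2 * math.sqrt(1) - 2 * math.sqrt(c)

# Step 3: approach the limit c -> 0 from the right.
values = [proper_integral(c) for c in (0.1, 0.01, 1e-6, 1e-12)]
```

The list of values climbs monotonically toward 2, which is exactly the limit computed in Step 3.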
Piecing together discontinuous integrands
If a function is continuous on an interval, it’s also integrable on that interval. Some integrals that are vertically infinite have asymptotes not at the edges but someplace in the middle. The result
is a discontinuous integrand — that is, a function with a discontinuity on the interval that you’re trying to integrate.
Discontinuous integrands are the trickiest improper integrals to spot — you really need to know how the graph of the function that you’re integrating behaves.
To evaluate an improper integral of this type, separate it at each asymptote into two or more integrals. Then evaluate each of the resulting integrals as an improper integral.
For example, suppose that you want to evaluate the following integral:

∫₀^π sec² x dx

Because the graph of sec x contains an asymptote at x = π/2, the graph of sec² x has an asymptote in the same place. For example, a graph of the improper integral is shown in this figure.
To evaluate this integral, break it into two integrals at the value of x where the asymptote is located:

∫₀^π sec² x dx = ∫₀^(π/2) sec² x dx + ∫_(π/2)^π sec² x dx

Now evaluate the sum of the two resulting improper integrals.

You can save yourself a lot of work by noticing when two regions are symmetrical. In this case, the asymptote at x = π/2 splits the shaded area into two symmetrical regions. So you can find one integral and then double it to get your answer:

∫₀^π sec² x dx = 2 ∫₀^(π/2) sec² x dx
Now evaluate this integral:

1. Express the integral as the limit of a proper integral:

2 lim(c→π/2⁻) ∫₀^c sec² x dx

In this case, the vertical asymptote is at the upper limit of integration, so c approaches π/2 from the left — that is, from inside the interval where you're measuring the area.

2. Evaluate the integral:

2 lim(c→π/2⁻) tan x |₀^c = 2 lim(c→π/2⁻) tan c

3. Evaluate the limit:

Note that lim(c→π/2⁻) tan c is undefined, because the function tan x has an asymptote at x = π/2, so the limit does not exist (DNE). Therefore, the integral that you're trying to evaluate also does not exist, because the area that it represents is infinite.
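Numerically, the divergence shows up as the proper integral growing without bound: the integral of sec² x from 0 to c equals tan c, and tan c blows up as c approaches π/2 from the left:

```python
import math

def proper_sec_squared_integral(c):
    # The antiderivative of sec^2 x is tan x, so the integral from 0 to c
    # (for 0 <= c < pi/2) is tan(c) - tan(0) = tan(c).
    return math.tan(c)
```

Evaluating at c values ever closer to π/2 produces arbitrarily large results, so there is no finite limit and the improper integral diverges.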
Linear Regression and Correlation: The Regression Equation
If you know a person's pinky (smallest) finger length, do you think you could predict that person's height? Collect data from your class (pinky finger length, in inches). The independent variable, x, is pinky finger length and the dependent variable, y, is height.
For each set of data, plot the points on graph paper. Make your graph big enough and use a ruler. Then "by eye" draw a line that appears to "fit" the data. For your line, pick two convenient points
and use them to find the slope of the line. Find the y-intercept of the line by extending your lines so they cross the y-axis. Using the slopes and the y-intercepts, write your equation of "best
fit". Do you think everyone will have the same equation? Why or why not?
Using your equation, what is the predicted height for a pinky length of 2.5 inches? | {"url":"http://cnx.org/content/m17090/latest/?collection=col10732/latest","timestamp":"2014-04-20T13:39:08Z","content_type":null,"content_length":"148942","record_id":"<urn:uuid:b111317c-7538-4080-a54a-8c76e6b4791f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
Second order dissipative fluid dynamics from kinetic theory (2011)
Barbara Betz Gabriel Denicol Tomoi Koide Etele Molnár Harri Niemi Dirk-Hermann Rischke
We derive the equations of second order dissipative fluid dynamics from the relativistic Boltzmann equation following the method of W. Israel and J. M. Stewart [1]. We present a frame independent
calculation of all first- and second-order terms and their coefficients using a linearised collision integral. Therefore, we restore all terms that were previously neglected in the original
papers of W. Israel and J. M. Stewart.
Relativistic shock waves and Mach cones in viscous gluon matter (2010)
Ioannis Bouras Etele Molnár Harri Niemi Zhe Xu Andrej El Oliver Fochler Francesco Lauciello Felix Reining Christian Wesp Carsten Greiner Dirk-Hermann Rischke
To investigate the formation and the propagation of relativistic shock waves in viscous gluon matter, we solve the relativistic Riemann problem using a microscopic parton cascade. We demonstrate the transition from ideal to viscous shock waves by varying the shear viscosity to entropy density ratio η/s. Furthermore, we compare our results with those obtained by solving the relativistic causal dissipative fluid equations of Israel and Stewart (IS), in order to show the validity of the IS hydrodynamics. Employing the parton cascade, we also investigate the formation of Mach shocks induced by a high-energy gluon traversing viscous gluon matter. For η/s = 0.08 a Mach cone structure is observed, whereas the signal smears out for η/s ≥ 0.32.
Decay widths of resonances and pion scattering lengths in a globally invariant sigma model with vector and axial-vector mesons (2008)
Denis Parganlija Francesco Giacosa Dirk-Hermann Rischke
We calculate low-energy meson decay processes and pion-pion scattering lengths in a two-flavour linear sigma model with global chiral symmetry, exploring the scenario in which the scalar mesons f0(600) and a0(980) are assumed to be q̄q states.
Phase diagram of neutral quark matter at moderate densities (2006)
Stefan Bernhard Rüster Verena Werth Michael Buballa Igor A. Shovkovy Dirk-Hermann Rischke
We discuss the phase diagram of moderately dense, locally neutral three-flavor quark matter using the framework of an effective model of quantum chromodynamics with a local interaction. The phase
diagrams in the plane of temperature and quark chemical potential as well as in the plane of temperature and lepton-number chemical potential are discussed.
The phase diagram of neutral quark matter : the effect of neutrino trapping (2006)
Stefan Bernhard Rüster Verena Werth Michael Buballa Igor A. Shovkovy Dirk-Hermann Rischke
We study the effect of neutrino trapping on the phase diagram of dense, locally neutral three-flavor quark matter within the framework of a Nambu--Jona-Lasinio model. In the analysis, dynamically
generated quark masses are taken into account self-consistently. The phase diagrams in the plane of temperature and quark chemical potential, as well as in the plane of temperature and
lepton-number chemical potential are presented. We show that neutrino trapping favors two-flavor color superconductivity and disfavors the color-flavor-locked phase at intermediate densities of
matter. At the same time, the location of the critical line separating the two-flavor color-superconducting phase and the normal phase of quark matter is little affected by the presence of
neutrinos. The implications of these results for the evolution of protoneutron stars are briefly discussed. PACS numbers: 12.39.-x 12.38.Aw 26.60.+c
The phase diagram of neutral quark matter : self-consistent treatment of quark masses (2005)
Stefan Bernhard Rüster Verena Werth Michael Buballa Igor A. Shovkovy Dirk-Hermann Rischke
We study the phase diagram of dense, locally neutral three-flavor quark matter within the framework of the Nambu--Jona-Lasinio model. In the analysis, dynamically generated quark masses are taken
into account self-consistently. The phase diagram in the plane of temperature and quark chemical potential is presented. The results for two qualitatively different regimes, intermediate and
strong diquark coupling strength, are presented. It is shown that the role of gapless phases diminishes with increasing diquark coupling strength.
Pion and thermal photon spectra as a possible signal for a phase transition (2005)
Adrian Dumitru Ulrich Katscher Joachim A. Maruhn Horst Stöcker Walter Greiner Dirk-Hermann Rischke
We calculate thermal photon and neutral pion spectra in ultrarelativistic heavy-ion collisions in the framework of three-fluid hydrodynamics. Both spectra are quite sensitive to the equation of
state used. In particular, within our model, recent data for S + Au at 200 AGeV can only be understood if a scenario with a phase transition (possibly to a quark-gluon plasma) is assumed. Results
for Au+Au at 11 AGeV and Pb + Pb at 160 AGeV are also presented.
Phase diagram of dense neutral three-flavor quark matter (2004)
Stefan Bernhard Rüster Igor A. Shovkovy Dirk-Hermann Rischke
We study the phase diagram of dense, locally neutral three-flavor quark matter as a function of the strange quark mass, the quark chemical potential, and the temperature, employing a general
nine-parameter ansatz for the gap matrix. At zero temperature and small values of the strange quark mass, the ground state of matter corresponds to the color-flavor-locked (CFL) phase. At some
critical value of the strange quark mass, this is replaced by the recently proposed gapless CFL (gCFL) phase. We also find several other phases, for instance, a metallic CFL (mCFL) phase, a
so-called uSC phase where all colors of up quarks are paired, as well as the standard two-flavor color-superconducting (2SC) phase and the gapless 2SC (g2SC) phase.
Gapless phases of colour-superconducting matter (2004)
Igor A. Shovkovy Stefan Bernhard Rüster Dirk-Hermann Rischke
We discuss gapless colour superconductivity for neutral quark matter in β equilibrium at zero as well as at nonzero temperature. Basic properties of gapless superconductors are reviewed. The
current progress and the remaining problems in the understanding of the phase diagram of strange quark matter are discussed.
Effect of color superconductivity on the mass and radius of a quark star (2003)
Stefan Bernhard Rüster Dirk-Hermann Rischke
We compare quark stars made of color-superconducting quark matter to normal-conducting quark stars. We focus on the most simple color-superconducting system, a two-flavor color superconductor,
and employ the Nambu-Jona-Lasinio (NJL) model to compute the gap parameter and the equation of state. By varying the strength of the four-fermion coupling of the NJL model, we study the mass and
the radius of the quark star as a function of the value of the gap parameter. If the coupling constant exceeds a critical value, the gap parameter does not vanish even at zero density. For
coupling constants below this critical value, mass and radius of a color-superconducting quark star change at most by ca. 20% compared to a star consisting of normal-conducting quark matter. For
coupling constants above the critical value, mass and radius may change by factors of two or more.
mathematical and computing sciences question #636
Peter Schmuecking, a 41 year old male from Roetgen nr Aachen/Germany asks on February 3, 2002,
My son, 13 y, has found a really good idea to prove the four colour map problem without computers, but he needs support to answer a question for a topology math problem. My son says, when on any map there is a region that needs five colours then there is a way to draw a graph with 5 vertices and edges from any to any vertex without cutting each other. This is the point to prove: that this isn't possible.
the answer
Aaron Abrams
answered on February 4, 2002,
Your son's idea of transforming a map into a graph is a good one. And he is correct that it is impossible to draw a complete graph on 5 vertices in the plane without any edges crossing.
Unfortunately, his other statement is not at all obvious, namely that a map requiring 5 colors would imply that you could draw the complete graph on 5 vertices without edges crossing. Indeed, no one
has ever been able to prove this directly. What it does show is that you cannot have a map in which there are 5 countries with every pair sharing a border.
To perhaps clarify this further, he could try as an exercise to find a map which requires 4 colors (meaning you can't color it with 3), but in which there are no 4 countries which all share borders.
In graph language, this means find a graph in the plane which requires 4 colors but which contains no complete graph on 4 vertices.
Or an easier one: find a graph that requires 3 colors but contains no complete graph on 3 vertices (i.e. triangles). (The easiest answer to this one is a pentagon, as your son will surely discover.)
So you see, there may be a map which requires 5 colors, even though it contains no complete graph on 5 vertices. This is definitely one of the subtleties of the four-color problem.
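The pentagon exercise can also be checked by brute force. The sketch below is our own illustration (not part of the original answer): it computes the chromatic number of the 5-cycle and confirms that it needs 3 colors even though it contains no triangle.

```python
from itertools import product, combinations

# The 5-cycle (pentagon): vertices 0..4, each joined to its two neighbours.
pentagon = [(i, (i + 1) % 5) for i in range(5)]

def chromatic_number(n_vertices, edges):
    # Brute force: try every assignment of k colors for k = 1, 2, ...
    for k in range(1, n_vertices + 1):
        for coloring in product(range(k), repeat=n_vertices):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k
    return n_vertices

def has_triangle(edges):
    # True if some three vertices are pairwise adjacent (a complete graph on 3).
    edge_set = {frozenset(e) for e in edges}
    verts = {v for e in edges for v in e}
    return any(frozenset((a, b)) in edge_set and
               frozenset((b, c)) in edge_set and
               frozenset((a, c)) in edge_set
               for a, b, c in combinations(sorted(verts), 3))
```

Running `chromatic_number(5, pentagon)` gives 3 while `has_triangle(pentagon)` is False, which is exactly the point of the exercise.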
nature of cost curves
February 21st 2013, 05:12 AM
Q1. Given the following total cost functions, investigate the nature of average and marginal cost curves.
i) C=0.0003Q^2+6.75Q-10485
ii) C=0.8Q+3000
In case of (i), AC = C/Q = 0.0003Q + 6.75 - 10485/Q. Here d(AC)/dQ = 0.0003 + 10485/Q^2 > 0 and d^2(AC)/dQ^2 = -2(10485)/Q^3 < 0. So this AC curve is rising upwards and is convex upwards.
Again MC = dC/dQ = 0.0006Q + 6.75. Here d(MC)/dQ = 0.0006 > 0 and d^2(MC)/dQ^2 = 0. So this MC curve is rising upwards and is a straight line.
Is it correct to write the answer like this? What am I supposed to write in case of (ii)?
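The claimed signs of the derivatives can be sanity-checked numerically. The finite-difference helpers below are our own sketch, not part of the original post:

```python
def average_cost(q):
    # AC = C/Q for C = 0.0003*Q**2 + 6.75*Q - 10485 (case (i) in the question)
    return 0.0003 * q + 6.75 - 10485.0 / q

def marginal_cost(q):
    # MC = dC/dQ
    return 0.0006 * q + 6.75

def d1(f, q, h=1e-4):
    # central first difference
    return (f(q + h) - f(q - h)) / (2 * h)

def d2(f, q, h=1e-3):
    # central second difference
    return (f(q + h) - 2 * f(q) + f(q - h)) / (h * h)
```

At any positive output level (say Q = 100), `d1(average_cost, 100)` is positive and `d2(average_cost, 100)` is negative, while `d2(marginal_cost, Q)` is zero, confirming the signs stated in the post.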
• What does the Fundamental Theorem of Algebra indicate with respect to this equation? x^3 - 10x^2 + 24x + 192
The theorem says that you have as many roots as the degree of your equation.
Is that any help?
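For this cubic, the theorem guarantees exactly three roots in the complex numbers (counted with multiplicity). The sketch below verifies that numerically with the Durand-Kerner iteration, which is our own choice of method, not something from the thread:

```python
def polynomial(coeffs, x):
    # Horner evaluation; coeffs are listed highest degree first.
    result = 0j
    for c in coeffs:
        result = result * x + c
    return result

def durand_kerner(coeffs, iterations=500):
    # Simultaneously refines all n root approximations of a monic polynomial.
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard starting guesses
    for _ in range(iterations):
        new_roots = []
        for i, r in enumerate(roots):
            denom = 1 + 0j
            for j, s in enumerate(roots):
                if i != j:
                    denom *= r - s
            new_roots.append(r - polynomial(coeffs, r) / denom)
        roots = new_roots
    return roots

# x^3 - 10x^2 + 24x + 192, as in the question
roots = durand_kerner([1, -10, 24, 192])
```

As the theorem predicts, exactly three roots come back: one real root (near -3) and a complex-conjugate pair.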
Clifton, NJ SAT Math Tutor
...Currently I am working as a Project Engineer for a Machine Manufacturing company. I believe that tutoring the subjects that I know best will help students not only gain the most essential conceptual knowledge, but also benefit from the guidance and tips of a person who has successful...
15 Subjects: including SAT math, geometry, algebra 1, ASVAB
...I teach students how to sift through data and scientific jargon to find only what they need to score points. In many ways the ISEE and SSAT are miniaturized versions of the ACT and SAT, which
were the first two tests I taught. While the material on the ISEE is more age-appropriate for the stude...
18 Subjects: including SAT math, geometry, GRE, algebra 1
...In college I took several advanced chemistry courses including chemical thermodynamics, organic chemistry, and biochemistry. I also have recently tutored in chemistry. I have five years
experience teaching high school geometry.
8 Subjects: including SAT math, chemistry, geometry, algebra 1
...My teaching experience includes varied levels of students (high school, undergraduate and graduate students).For students whose goal is to achieve high scores on standardized tests, I focus
mostly on tips and material relevant to the test. For students whose goal is to learn particular subjects,...
15 Subjects: including SAT math, chemistry, calculus, geometry
...Being born and having lived in a Spanish speaking country for many years of my life and attending school there, as well as here in the United States, I can fluently speak, write, and translate
English and Spanish. As a result, I have also been contracted as an in classroom tutor for bilingual cl...
25 Subjects: including SAT math, Spanish, English, algebra 1
Barycentric Julian Date
Ultimately, we want an accurate "time stamp" of when an astrophysical event occurs. There are two basic components of a time stamp: the "reference frame" and the "time standard." Both are required to specify the time stamp to better than 1 minute.
Reference Frame
Due to the finite speed of light, as the Earth travels in its orbit, light from a particular object
may be early or delayed by as much as 8.3 minutes (1 AU/c). Left uncorrected, the time of an
astronomical event measured by our clocks on Earth will vary by 16.6 minutes over the course of a year.
The solution to this problem is to calculate the time the light from a given object would have
arrived at a non-accelerating reference frame. Historically, we have used the Heliocentric Julian
Date (HJD), referenced to the center of the Sun because it is easy to compute. However, the Sun moves due to the gravitational pull of the planets, which introduces errors as large as 8 seconds. To correct for this, we use the Barycentric Julian Date (BJD), which is referenced to the Solar System Barycenter (SSB), or the center of mass of the Solar System, and does not accelerate (for all intents and purposes).

[Figure caption: The green circle is the Earth's orbit and the "(+)" is the Earth. The black lines point from the Earth to the target, from the SSB to the target, and from the SSB to the Earth. The yellow arcs are the spherical wavefront from the target, as seen from Earth and the Barycenter. The yellow straight line is the plane wave approximation used in this calculation. The blue line is the extra distance, according to the plane wave approximation, that the light must travel to get to the SSB (or the extra distance it has traveled to get to the Earth). The red line is the error in the plane wave approximation.]

Note: Assuming you observe from the geocenter is much easier, but introduces a ~20 ms error (R_earth/c) -- the time it takes light to get from the surface of the Earth to the center of the Earth.
To the right is an animation of the effect. The left-most figure is to scale for an object at 3000
AU, and the right-most figure is to scale for an object at 3 AU (roughly the Main Asteroid Belt).
You can see the plane wave is not a very good approximation for nearby objects.
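The size of the correction is easy to sketch in the plane-wave approximation. The function names and inputs below are our own illustration (real codes use an ephemeris for the observer's barycentric position):

```python
AU_M = 1.495978707e11    # astronomical unit in meters
C_M_S = 2.99792458e8     # speed of light in m/s
SEC_PER_DAY = 86400.0

def romer_delay_seconds(r_obs_au, n_hat):
    # Plane-wave approximation: light reaches the observer earlier or later
    # than it reaches the barycenter by (r_obs . n_hat) / c, where r_obs is
    # the observer's barycentric position (in AU) and n_hat is the unit
    # vector toward the target.
    dot = sum(a * b for a, b in zip(r_obs_au, n_hat))
    return dot * AU_M / C_M_S

def bjd_from_jd(jd, r_obs_au, n_hat):
    # Add the delay so the time stamp refers to the non-accelerating SSB.
    return jd + romer_delay_seconds(r_obs_au, n_hat) / SEC_PER_DAY
```

With the Earth 1 AU from the barycenter and the target directly "ahead", the correction is 1 AU/c, about 499 s or 8.3 minutes, matching the figure quoted above.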
Time Standards
The other part of the time stamp depends on the time standard we use, or how our clock ticks.
Universal Time (UT1) is based on the length of the mean solar day (the average time it takes for the Sun to arrive at the same place it was the day before). This is roughly 24 hours, but it speeds up
and slows down with the changing rotation rate of the Earth. Generally, the Earth's rotation rate is slowing down due to the tidal braking of the Moon, and days are becoming longer (each year is
about 1 second longer than the year 1900 was).
Coordinated Universal Time (UTC) is fundamentally based on atomic clocks, but it is not allowed to differ from UT1 by more than 0.9 seconds. When the difference gets too large, we add a leap second
to UTC. Therefore, UTC is discontinuous and drifts with respect to "uniform" atomic time. UTC is the international standard for broadcasting time, and so most clocks are in UTC (modulo time zones and
daylight savings time).
Note: Universal Time (UT) can refer to UT1, UTC, or many other variants which can differ by up to 0.9 seconds. Be sure you know which one you are using if 1 second accuracy is important.
Barycentric Dynamical Time (TDB) takes into account Relativity -- the fact that moving clocks tick at different rates. As the Earth moves, our atomic clocks actually change rates. TDB is a truly
uniform time, as we would measure it on Earth if it were not moving around the Sun or being pulled by the Moon and other celestial bodies.
The BJD can be specified in any time standard, and there has been much ambiguity as to which standard is used for a particular BJD. The Barycentric Dynamical Time (TDB) is the best to use in
practice, though many use UTC, which is discontinuous and drifts with respect to a uniform time standard. Always specify the time standard of your BJD, and never compare BJDs in different time
standards, which may differ by more than 1 minute.
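As a rough illustration of how far apart these standards sit, here is a constants-only sketch. A real conversion needs a leap-second table and small periodic relativistic terms, and the leap-second count below is an assumption tied to this article's 2010 date:

```python
TT_MINUS_TAI = 32.184     # seconds, fixed by definition
TAI_MINUS_UTC_2010 = 34   # leap seconds as of 2010; this count changes over time
SEC_PER_DAY = 86400.0

def jd_utc_to_jd_tdb_approx(jd_utc):
    # TDB stays within ~2 ms of TT, so to that accuracy:
    #   TDB ~ TT = TAI + 32.184 s = UTC + (leap seconds) + 32.184 s
    return jd_utc + (TAI_MINUS_UTC_2010 + TT_MINUS_TAI) / SEC_PER_DAY
```

The offset is about 66.2 s, which is why a BJD_UTC and a BJD_TDB of the same event can already disagree by more than a minute.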
There are other things that affect the arrival time of photons at the ~1 ms level, which are discussed in our (more technical) paper, along with a more detailed explanation of the above. If you have
made use of this calculator (or the source code) in a scientific paper, please cite it.
Lastly, we provide a tool for observers to do the reverse correction. That is, given the BJD_TDB of a future event, what UTC time on my computer clock should I observe it? However, most will find
that +/- 10 minute accuracy is enough to schedule observations, and can simply use BJD_TDB ~ JD_UTC.
Our php code calls our IDL code and makes use of routines written by Craig Markwardt. We find the position of the Barycenter using JPL's DE405 ephemeris and the positions of spacecraft via telnet to
Copyright © Jason Eastman. All Rights Reserved.
Last Updated: May 11, 2010
sponge function family
This page is dedicated to the cryptographic sponge function family called Keccak, which has been selected by NIST to become the new SHA-3 standard.
Keccak in a nutshell
Keccak is a family of sponge functions. The sponge function is a generalization of the concept of cryptographic hash function with infinite output and can perform nearly all symmetric cryptographic
functions, from hashing to pseudo-random number generation to authenticated encryption.
For a quick introduction, we propose a pseudo-code description of Keccak. The reference specification, analysis, reference and optimized code and test vectors for Keccak can be found in the file
As the primitive used in the sponge construction, the Keccak instances call one of seven permutations named Keccak-f[b], with b=25, 50, 100, 200, 400, 800 or 1600. In the scope of the SHA-3 contest, we
proposed the largest permutation, namely Keccak-f[1600], but smaller (or more “lightweight”) permutations can be used in constrained environments. Each permutation consists of the iteration of a
simple round function, similar to a block cipher without a key schedule. The choice of operations is limited to bitwise XOR, AND and NOT and rotations. There is no need for table-lookups, arithmetic
operations, or data-dependent rotations.
Keccak has a very different design philosophy from its predecessor RadioGatún. This is detailed in our paper presented at Dagstuhl in 2009.
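The sponge construction itself is simple enough to sketch in a few lines. In the sketch below the permutation is a throw-away stand-in (NOT Keccak-f) and the byte-level parameters are toy values; only the absorb/squeeze structure and the pad10*1-style padding reflect the real design:

```python
RATE = 8        # rate in bytes (toy value; e.g. SHA3-256 uses 136)
CAPACITY = 8    # capacity in bytes (toy value)
WIDTH = RATE + CAPACITY

def stand_in_permutation(state):
    # Placeholder mixing function -- real Keccak applies 24 rounds of Keccak-f.
    out = bytearray(WIDTH)
    acc = 1
    for i, b in enumerate(state):
        acc = (acc * 31 + b + i) % 257
        out[i] = acc % 256
    return bytes(out)

def sponge(message, out_len):
    # pad10*1: append 0x01, zero-fill to a block boundary, set the top bit
    padded = bytearray(message) + b"\x01"
    while len(padded) % RATE != 0:
        padded.append(0)
    padded[-1] |= 0x80
    state = bytes(WIDTH)
    # Absorbing phase: XOR each block into the outer (rate) part, then permute.
    for i in range(0, len(padded), RATE):
        mixed = bytearray(state)
        for j in range(RATE):
            mixed[j] ^= padded[i + j]
        state = stand_in_permutation(bytes(mixed))
    # Squeezing phase: read rate-sized chunks, permuting between them. This
    # loop is what gives the sponge its arbitrary output length.
    out = bytearray()
    while len(out) < out_len:
        out += state[:RATE]
        state = stand_in_permutation(state)
    return bytes(out[:out_len])
```

Note how asking for more output only extends the squeezing loop: the shorter output is always a prefix of the longer one.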
Strengths of Keccak
Keccak inherits the flexibility of the sponge and duplex constructions.
• As a sponge function, Keccak has arbitrary output length. This allows to simplify modes of use where dedicated constructions would be needed for fixed-output-length hash functions. It can be
natively used for, e.g., hashing, full domain hashing, randomized hashing, stream encryption, MAC computation. In addition, the arbitrary output length makes it suitable for tree hashing.
• As a duplex object, Keccak can be used in clean and efficient modes as a reseedable pseudo-random bit generator and for authenticated encryption. Efficiency of duplexing comes from the absence of
output transformation.
• Keccak has a simple security claim. One can target a given security strength level by means of choosing the appropriate capacity, i.e., for a given capacity c, Keccak is claimed to stand any
attack up to complexity 2^(c/2) (unless easier generically). This is similar to the approach of security strength used in NIST's SP 800-57.
• The security claim is disentangled from the output length. There is a minimum output length as a consequence of the chosen security strength level (i.e., to avoid generic birthday attacks), but
it is not the other way around, namely, it is not the output length that determines the security strength level. For an illustration with the classical security requirements of hashing (i.e.,
collision and (second) preimage resistance), we refer to our interactive page.
• The instances proposed for SHA-3 make use of a single permutation for all security strengths. This cuts down implementation costs compared to hash function families making use of two (or more)
primitives, like the SHA-2 family. And with the same permutation, one can make performance-security trade-offs by choosing the appropriate capacity-rate pair.
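The capacity/security-strength bookkeeping above fits in a tiny helper. This is our own sketch (the function name is ours), not a tool from the Keccak team:

```python
def sponge_parameters(security_strength_bits, permutation_width=1600):
    # A target security strength of s bits needs capacity c = 2s, and at
    # least 2s bits of output so that generic birthday collisions cost 2^s.
    c = 2 * security_strength_bits
    if c >= permutation_width:
        raise ValueError("capacity must be smaller than the permutation width")
    return {
        "capacity_bits": c,
        "rate_bits": permutation_width - c,
        "min_output_bits": c,
    }
```

For 128-bit security this yields capacity 256 and rate 1344 over Keccak-f[1600]; for 256-bit security, capacity 512 and rate 1088.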
Design and security
• Keccak has a thick safety margin. In [Keccak reference, Section 5.4], we estimate that the Keccak sponge function should stand by its security claim even if the number of rounds is almost divided
by two (i.e., from 24 down to 13 in the case of Keccak-f[1600]).
• Keccak was scrutinized by third-party cryptanalysis. For more details, we refer to the cryptanalysis page.
• We showed that the Keccak-f permutations have provable lower bounds on the weight of differential trails.
• The design of the permutations follows the Matryoshka principle, where the security properties of the seven permutations are linked. The cryptanalysis of the smaller permutations, starting from
the “toy” Keccak-f[25], is meaningful to the larger permutations, and vice-versa. In particular, differential and linear trails in one Keccak-f instance extend to symmetric trails in larger instances.
• The sponge and duplex constructions used by Keccak are provably secure against generic attacks. This also covers the joint use of multiple Keccak instances with different rate/capacity pairs.
• Unlike SHA-1 and SHA-2, Keccak does not have the length-extension weakness, hence does not need the HMAC nested construction. Instead, MAC computation can be performed by simply prepending the
message with the key.
• From the mode down to the round function, our design choices are fairly different from those in the SHA-1 and SHA-2 hash functions or in the Advanced Encryption Standard (AES). Keccak therefore
provides diversity with respect to existing standards.
• Keccak excels in hardware performance, with speed/area trade-offs, and outperforms SHA-2 by an order of magnitude. See for instance the works of Gürkaynak et al., Gaj et al., Latif et al., Kavun
et al., Kaps et al. and Jungk presented at the Third SHA-3 Candidate Conference.
• Keccak has overall good software performance. It is faster than SHA-2 on modern PCs and shines when used in a mode exploiting parallelism. On AMD™ Bulldozer™, 128-bit and 256-bit security hashing
tops at 4.8 and 5.9 cycles/byte, respectively. On Intel™ Sandy Bridge™, the same functions reach 5.4 and 6.9 cycles/byte. On constrained platforms, Keccak has moderate code size and RAM
consumption requirements.
• For modes involving a key, protecting the implementation against side-channel attacks is desirable. The operations used in Keccak allow for efficient countermeasures against these attacks. Against
cache-timing attacks, the most efficient implementations involve no table lookups. Against power analysis attacks and variants, countermeasures can take advantage of the quadratic round function.
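The key-prepending MAC mentioned in the list above can be sketched directly with the SHA-3 instance in Python's standard library. Note that KMAC is the NIST-standardized keyed mode; this is only the bare idea from the text:

```python
import hashlib

def prefix_mac(key: bytes, message: bytes) -> bytes:
    # Safe for SHA-3 precisely because it has no length-extension weakness;
    # the same bare construction over SHA-256 would be broken.
    return hashlib.sha3_256(key + message).digest()
```

In practice one would also fix the key length (or use KMAC, which encodes it) so the key/message boundary is unambiguous.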
Latest news
8 April 2014 — The FIPS 202 draft is available
Last Friday, NIST released the draft of the FIPS 202 standard. It proposes six instances: the four SHA-2 drop-in replacements with fixed output length SHA3-224 to SHA3-512, and the two
future-oriented extendable-output functions SHAKE128 and SHAKE256.
The latest version of the Keccak Code Package is in line with the draft and contains test vectors for the six aforementioned instances.
8 February 2014 — KeccakTools moved to GitHub
Recently, we decided to move KeccakTools to GitHub. This allows easier updates as well as an easier integration of potential contributions from others.
As a reminder, KeccakTools is a set of documented C++ classes that can help analyze Keccak. It also contains the best differential and linear trails we found in the various Keccak-f instances.
4 October 2013 — Yes, this is Keccak!
SUMMARY: NIST's current proposal for SHA-3 is a subset of the Keccak family, and one can generate test vectors for that proposal using our reference code submitted to the contest.
In the end, it will be NIST's decision on what exactly will be standardized for SHA-3, but we would like, as the Keccak team, to take the opportunity to remind some facts about Keccak and give some
opinion on the future SHA-3 standard.
First some reminders on Keccak
• Keccak is a family of sponge function instances, encompassing capacity values ranging from 0 to 1599 bits. All these instances are well-defined and so are their security claim. Our SHA-3
submission highlighted instances with capacities c=448, 512, 768 and 1024 for strictly meeting NIST's SHA-3 requirements on the SHA-2 drop-in replacement instances, plus a capacity of 576 for a
variable-output-length instance. Nevertheless, the capacity is an explicitly tunable parameter, in the line of what NIST suggested in their SHA-3 call, and we therefore proposed in our SHA-3
submission document that the capacity would be user-selectable.
• The capacity is a parameter of the sponge construction (and of Keccak) that determines a particular security strength level, in the line of the levels defined in [NIST SP 800-57]. Namely, for a
capacity c, the security strength level is c/2 bits and the sponge function is claimed to resist against all attacks up to 2^c/2, unless easier with a random oracle. As we make a clear security
claim for each possible value of the capacity, a user knows what the expect and a cryptanalyst knows her target. Conversely, we provide a tool that helps determine the minimum capacity and output
length given collision and pre-image resistance requirements.
• The core of Keccak, namely the Keccak-f permutations, has not changed since round 2 of the SHA-3 competition. When Keccak was selected for the 2nd round, we increased the number of rounds to
have a better safety margin (from 18 to 24 rounds for Keccak-f[1600]). The round function has not changed since the original submission in 2008.
• Keccak is the result of using the sponge construction on top of the Keccak-f permutations and applying the multi-rate padding to the input. Using multi-rate padding causes each member of the
Keccak family (and in particular for each value of the capacity) to act as an independent function.
• As a native feature, Keccak provides variable output length, that is, the user can dynamically ask for as many output bits as desired (e.g., as a mask generating function such as MGF1).
Keccak in the SHA-3 standard
NIST's current proposal for SHA-3, namely the one presented by John Kelsey at CHES 2013 in August, is a subset of the Keccak family. More concretely, one can generate the test vectors for that
proposal using the Keccak reference code (version 3.0 and later, January 2011). This alone shows that the proposal cannot contain internal changes to the algorithm.
We did not suggest NIST to make any change to the Keccak components, namely the Keccak-f permutations, the sponge construction and the multi-rate padding, and we are not aware of any plans that NIST
would do so. However, the future standard will not include the entire Keccak family but will select only specific instances of Keccak (i.e., with specific capacities), similarly to the block and key
lengths of AES being a subset of those of Rijndael. Moreover, it will append some parameter-dependent suffix to the input prior to processing (see below) and fix the output length (for the SHA-2
drop-in replacements) or keep it variable (for the SHAKEs).
Here are further comments on these choices.
First, about suffixes (sometimes referred to as padding).
In Sakura, we propose to append some suffix to the input message, before applying Keccak. This is sometimes presented as a change in Keccak's padding rule because adding such a suffix can be
implemented together with the padding, but technically this is still on top of the original multi-rate padding.
The suffixes serve two purposes. The first is domain separation between the different SHA-3 instances, to make them behave as independent functions (even if they share the same capacity). The second
is to accomodate tree hashing in the future in such a way that domain separation is preserved.
The security is not reduced by adding these suffixes, as this is only restricting the input space compared to the original Keccak. If there is no security problem on Keccak(M), there is no security
problem on Keccak(M|suffix), as the latter is included in the former.
Second, about the output length.
Variable output length hashing is an interesting feature for natively supporting a wide range of applications including full domain hashing, keystream generation and any protocol making use of a mask
generating function. In its current proposal, NIST plans on standardizing two instances: SHAKE256 and SHAKE512, with capacity c=256 and c=512 and therefore security strength levels of 128 and 256
bits, respectively.
The traditional fixed output-length instances acting as SHA-2 drop-in replacement (SHA3-xxx) are obtained from truncating Keccak instances at the given output length.
Third, about the proposed instances and their capacities.
The capacity of the SHAKEs is given above and we now focus on the SHA-2 drop-in replacement instances with fixed output length n, with n in {224, 256, 384, 512}.
The SHA-3 requirements asked for a spectrum of resistance levels depending on the attack: n/2 for collision, n for first pre-image and n-k for second pre-image (with 2^k the length of the first
message). To meet the requirements and avoid being disqualified, we set c=2n so as to match the n-bit pre-image resistance level, and the requirements on other attacks followed automatically as they
were lower. However, setting c=2n is also a waste of resources. For instance, Keccak[c=2n] before truncation provides n-bit collision resistance (in fact n-bit resistance against everything), but
after truncation to n bits of output it drops to n/2-bit collision resistance.
Instead, adjusting the capacity to meet the security strength levels of [NIST SP 800-57] gives better security-performance trade-offs. In this approach, one aims at building a protocol or a system
with one consistent security target, i.e., where components are chosen with matching security strength levels. The security strength level is defined by the resistance to the strongest possible
attack, i.e., (internal) collisions so that, e.g., SHA-256 is at 128 bits for digital signatures and hash-only applications. Hence, setting c=n simply puts SHA3-n at the n/2-bit security level.
Among the Keccak family, NIST decided to propose instances with capacities of c=256 for n=224 or 256, and c=512 bits for n=384 or 512. This proposal is the result of discussions between the NIST hash
team and us, when we visited them in February and afterwards via mail. It was then publically presented by John Kelsey at CT-RSA later in February and posted on the NIST hash-forum mailing list soon
after. It was then presented at several occasions, including Eurocrypt 2013, CHES 2013 at UCSB, etc.
The corresponding two security strength levels are 128 bits, which is rock-solid, and an extremely high 256 bits (e.g., corresponding to RSA keys of 15360 bits [NIST SP 800-57]).
Comments on some of the criticism
Finally, we now comment on some criticism we saw in the discussions on the NIST hash-forum mailing list.
• “128 bits of security are not enough in particular in the light of multi-target pre-image attacks.” We addressed this specifically in a message to the NIST SHA-3 mailing list, where we explained why
this fear is unfounded and why the 128 bits of security do not degrade for multi-target pre-image attacks. And anyway the SHA-3 proposal includes functions with 256-bit security, which the user
is free to choose as well.
• “SHA3-256 does not provide 256-bit pre-image resistance.” With c=256, this is correct indeed. We proposed to reduce the capacity of SHA3-256 to 256 bits to follow our security-strength oriented
approach, which better addresses actual user requirements than the traditional way of inferring resistance of hash functions from the output length. Nevertheless, to avoid confusion for people
expecting 256-bit resistance from SHA3-256, we made a second proposal that sets c=512 for all SHA-2 drop-in replacement instances, hence providing the traditional 256-bit pre-image resistance.
• “There is no instance providing 512-bit pre-image resistance.” Again, this is correct. The answer is similar to the previous point, except that our new proposal does not extend to capacities
higher than c=512 bits, simply because claiming or relying on security strength levels above 256 bits is meaningless. Setting c=1024 would induce a significant performance loss, and there are no
standard public-key parameters matching 512 bits of security. Also we believe that this security level was more a side-effect and not a security target in itself. All conventional hash functions
that would aim at 256-bit collision resistance would automatically provide 512-bit preimage resistance. Keccak however is a different cryptographic object and SHA3-512 can safely provide a
security strength of 256 bits against all attacks without the need to boost the security level beyond any meaning.
• “Claiming a higher security level provides a safety margin.” In the Keccak design philosophy, safety margin comes from the number of rounds in Keccak-f, whereas the security level comes from the
selected capacity. We have designed Keccak so as to have a strong safety margin for all possible capacities. At this moment, this safety margin is very comfortable (4 to 5 rounds out of 24 are
broken). Of course, the user can still increase the capacity to get a security level that is higher than the one he targets, and hence somehow artificially increase the safety margin. But, there
is simply no need to do so. We also refer to Martin Schläffer's excellent summary, posted on the NIST hash-forum mailing list on October 1st, 2013 at 10:16 GMT+2 (thanks Martin!).
As explained in our new proposal, we think the SHA-3 standard should emphasize the SHAKE functions. The SHA-3 user would keep the choice between lean SHAKE256 with its rock-solid security strength
level and the heavier SHAKE512 with its extremely high security strength level. In implementations, the bulk of the code or circuit is dedicated to the Keccak-f[1600] permutation and from our
experience supporting multiple rates can be done at very small cost.
2 October 2013 — A concrete proposal
This article is a copy of a message we posted on the NIST hash-forum mailing list on September 30, 2013.
SUMMARY: In the SHA-3 standard, we propose to set the capacity of all four SHA-2 drop-in replacements to 512 bits, and to make SHAKE256 and SHAKE512 the primary choice.
Technically, we think that NIST's current proposal is fine. As said in our previous post, we have proposed to reduce the capacities of the SHA-3 hash functions at numerous occasions, including during
our last visit to NIST in February. Nevertheless, in the light of the current discussions and to improve public acceptance, we think it would be indeed better to change plans. For us, the best option
would be the following (taking inspiration from different other proposals).
• Set the capacity of the SHA-2 drop-in replacements (i.e., SHA3-224 to SHA3-512) to c=512. This guarantees the same claimed security properties as for the corresponding SHA-2 instances up to the
256-bit security level. (In particular, the pre-image resistance of SHA3-256 would be raised to 256 bits.)
• Keep the SHAKEs as they are (i.e., SHAKE256 with c=256 and SHAKE512 with c=512) and make them the primary choice for new applications of hash functions, for replacing mask generating functions
(MGFs) and for those who wish to follow the security strength levels approach of [NIST SP 800-57].
For the SHAKEs, we think it would be good to include in the standard a short procedure for replacing a hash function or MGF based on SHA-1 or SHA-2. For instance, if there is only one to be replaced,
here is a sketch.
1. Choose between SHAKE256 and SHAKE512. If the user can determine the required security level and it is 128 bits or smaller, choose SHAKE256. Otherwise (or if unsure), choose SHAKE512.
2. Let the output length be determined by the application.
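For illustration, Python's standard `hashlib` exposes SHAKE as eventually standardized in FIPS 202 — note that its `shake_128`/`shake_256` are named after the security strength (capacities 256 and 512 respectively), not after the capacity as in the SHAKE256/SHAKE512 naming used in this proposal. The XOF-style usage, with the output length chosen by the caller, looks like this:

```python
import hashlib

# SHAKE is an extendable-output function (XOF): the caller picks the
# output length, and shorter outputs are prefixes of longer ones.
msg = b"Keccak"
xof = hashlib.shake_256(msg)
tag16 = xof.hexdigest(16)   # 16 bytes of output, hex-encoded
tag32 = xof.hexdigest(32)   # 32 bytes of output, hex-encoded

assert tag32.startswith(tag16)                 # prefix property of an XOF
assert len(tag16) == 32 and len(tag32) == 64   # hex chars = 2 * bytes
```

This mirrors step 2 of the sketch above: the application, not the hash function, determines the output length.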
We have seen proposals for keeping instances with c=1024 in SHA-3. We think that claiming or relying on security strength levels above 256 bits is meaningless and that c=1024 would induce a
significant performance loss, which should be avoided.
This proposal means that SHA-3 standard will offer drop-in primitives with the same security level as SHA-2 (modulo the comment on c=1024), but also gives protocol and product designers the
possibility to use SHAKE256, which is more efficient and is in practice not less secure than SHAKE512 or the drop-ins.
2 October 2013 — On 128-bit security
This article is a copy of a message we posted on the NIST hash-forum mailing list on September 30, 2013.
SUMMARY: Keccak instances with a capacity of 256 bits offer a generic security strength level of 128 bits against all generic attacks, including multi-target attacks. 2^128 is an astronomical number
and attacks with such complexities are expected to remain unreachable for decades to come.
Among other options, we have proposed instances with capacity c=256 as an option because they have a generic security strength of 128 bits. This means any single-stage (*) generic attack has an
expected complexity of 2^128 computations of Keccak-f, unless easier on a random oracle. This is such an astronomical amount of work that one may wonder why we would ever need more than 128 bits of
security (see also Tune Keccak to your requirements).
In the discussions on SHA-3 we have seen some remarks on 128-bit security not being sufficient in the light of multi-target attacks. Multi-target attacks can be illustrated nicely with block ciphers.
• Single-target attack: Say we have a 32-byte ciphertext C that is the result of applying AES-128 in ECB mode on a known plaintext with some unknown key K. Then K can be found by exhaustive key
search: enciphering P by all possible values K until C appears. The right key will be hit after about 2^127 trials and so the security strength is around 128 bits. This is the security strength
the layman typically expects when using AES-128.
• Multi-target attack: Say we now have M ciphertexts C[i] obtained by enciphering the same plaintext P with M different keys K[i]. And assume the attacker is satisfied if he can find at least one
key K[i]. Then if he applies exhaustive key search, the expected number of trials is 2^128/M. So the security strength is reduced to 128-log[2](M). If M is very large, this can reduce the
security strength quite a lot. E.g., M = 2^40 reduces the time complexity to only 2^88. This is still a huge number, but it can no longer be dismissed as science-fiction. Among cryptographers
this security degeneration is well-known and there are methods of avoiding this, such as salting, enciphering in CBC mode with random IVs etc.
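The degradation described above is simple arithmetic; a quick sketch (Python, with a function name of our choosing):

```python
import math

def multi_target_strength(key_bits: int, targets: int) -> float:
    # Expected exhaustive-search cost drops to 2^k / M trials when the
    # attacker is satisfied with recovering any one of M independent keys,
    # i.e. the strength in bits falls by log2(M).
    return key_bits - math.log2(targets)

# the example from the text: M = 2^40 targets against 128-bit keys
assert multi_target_strength(128, 2**40) == 88.0
```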
If the application does not allow avoiding multi-targets, one can decide to use AES-192 or AES-256. The reason to use 192-bit or 256-bit keys is not because the security strength level 128 is too
small, but because in the light of multi-target attacks, we need a block cipher with a key longer than 128 bits to offer a security strength level of 128 bits. Summarizing, AES-128, AES-192 and
AES-256 have key lengths of 128, 192 and 256 bits, but this does not mean they offer a generic security strength of 128, 192 and 256 bits. This is not specific for AES, it is true for any block
cipher. This is also not a problem. A protocol designer who understands these issues can easily build efficient protocols offering excellent generic security strengths.
Multi-target also applies to finding (first or second) pre-images. Finding one pre-image out of M 128-bit hashes only takes 2^128/M hash computations.
So it is tempting to think that the 128-bit generic security strength of Keccak instances with 256-bit capacity will also degrade under multi-target attack. Fortunately, this is not the case, as the
generic security strength level c/2 follows from the bound in our indifferentiability proof for the sponge construction. More specifically, the success probability of a generic attack on a sponge
function is upper bounded by the sum of the attack probability of that attack on a random oracle plus the RO-differentiating advantage N^2/2^(c+1). We have explained that in our Eurocrypt 2008 paper on
Sponge indifferentiability and this was formalized by Elena Andreeva, Bart Mennink and Bart Preneel in Appendix B, Theorem 2 of their paper Security Reductions of the Second Round SHA-3 Candidates,
and this is also true for multi-target attacks.
If one wants a hash function (any) that offers a generic security strength level of 128 bits against multi-target attacks with at most say 2^64 targets, then one must take the output length equal to
128+64=192 bits. For a sponge function, the capacity does not need to be increased to twice the output length; if we target a security strength level of 128 bits, c=256 is still sufficient.
So a 256-bit capacity offers a generic security strength level of 128 bits that is absolute and does not degenerate under multi-target attacks.
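The bound itself is one line of arithmetic (a Python sketch; the function name is ours):

```python
def ro_differentiating_advantage(queries: int, capacity: int) -> float:
    # Upper bound N^2 / 2^(c+1) on the advantage of distinguishing a
    # sponge construction from a random oracle after N queries, from the
    # sponge indifferentiability proof.
    return queries ** 2 / 2 ** (capacity + 1)

# even after 2^64 queries, capacity c = 256 keeps the advantage at 2^-129
adv = ro_differentiating_advantage(2 ** 64, 256)
assert adv < 2 ** -128
```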
For the record, we as Keccak team proposed setting c=256 (and even a user-chosen capacity) as an option in our SHA-3 proposal: “If the user decides to lower the capacity to c=256, providing a claimed
security level equivalent to that of AES-128, the performance will be 31% greater than for the default value c=576.” (See page 4 of The Keccak SHA-3 submission and page 3 of our Note on Keccak
parameters and usage published in February 2010.) Furthermore, the option of c=256 was also presented at numerous occasions:
(*) See Thomas Ristenpart, Hovav Shacham, and Thomas Shrimpton, Advances in Cryptology - Eurocrypt 2011.
3 July 2013 — A software interface for Keccak
We published a new note in which we propose an interface to Keccak at the level of the sponge and duplex constructions, and inside Keccak at the level of the Keccak-f permutation. The purpose is twofold:
• First, it allows users of Keccak making best use of its flexibility. As focused on by the SHA-3 contest, Keccak is sometimes viewed solely as a hash function and some implementations are
inherently restricted to the traditional fixed-output-length instances. Instead, the proposed interface reflects the features of the sponge and duplex constructions, from the arbitrary output
length to the flexibility of choosing security-speed trade-offs.
• Second, it simplifies the set of optimized implementations on different platforms. Nearly all the processing of Keccak takes place in the evaluation of the Keccak-f permutation as well as in
adding (using bitwise addition of vectors in GF(2)) input data into the state and extracting output data from it. The interface helps isolate the part that needs to be most optimized, while the
rest of the code can remain generic. If they share the same interface, optimized implementations can be interchanged and a developer can select the best one for a given platform.
As a concrete exercise, we adapted some implementations from the “Reference and optimized code in C” to the proposed interface and posted them in a new “Keccak Code Package”. For the optimized
implementations, it appears that the impact on the throughput is negligible while it significantly improves development flexibility and simplicity.
Contact Information
Email: keccak-at-noekeon-dot-org | {"url":"http://keccak.noekeon.org/","timestamp":"2014-04-16T04:37:30Z","content_type":null,"content_length":"48446","record_id":"<urn:uuid:d7f2b3c8-91ed-49f2-b5ec-469ae4a371c3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00637-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rome, GA Algebra 2 Tutor
Find a Rome, GA Algebra 2 Tutor
...Thanks! I am captain for the NCAA Division III Women's soccer team at Berry College. In the past I played for the NCAA Division II Women's soccer team at Lindenwood University in St. Louis,
23 Subjects: including algebra 2, reading, elementary (k-6th), geometry
...Call on me for all your Excel needs. I teach AP Physics to High School seniors. AP means it is a college level course.
19 Subjects: including algebra 2, physics, GED, SAT math
...I am however, highly qualified to tutor in Study Skills and Test Preparation for the CRCT, ACT, and SAT. As a tutor for your child, I am dedicated to their academic improvement and success. I
recognize and accept that each child has their own learning style.
47 Subjects: including algebra 2, reading, English, biology
...These include most * Math courses including algebra, geometry and pre-calc, calculus, linear and coordinate math, statistics, differential equations, computer modeling, and more. * Science
courses including biology, chemistry, physics, organic chemistry, biochemistry, physics, astronomy, geneti...
126 Subjects: including algebra 2, English, chemistry, biology
...Mathematics is not always easy to grasp. Effort is always required on the part of the student and the teacher for this guarantee to become a reality. I taught a unit of logic every year for 14
years that I taught geometry.
12 Subjects: including algebra 2, calculus, geometry, ASVAB
Related Rome, GA Tutors
Rome, GA Accounting Tutors
Rome, GA ACT Tutors
Rome, GA Algebra Tutors
Rome, GA Algebra 2 Tutors
Rome, GA Calculus Tutors
Rome, GA Geometry Tutors
Rome, GA Math Tutors
Rome, GA Prealgebra Tutors
Rome, GA Precalculus Tutors
Rome, GA SAT Tutors
Rome, GA SAT Math Tutors
Rome, GA Science Tutors
Rome, GA Statistics Tutors
Rome, GA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Acworth, GA algebra 2 Tutors
Armuchee algebra 2 Tutors
Austell algebra 2 Tutors
Calhoun, GA algebra 2 Tutors
Canton, GA algebra 2 Tutors
Cartersville, GA algebra 2 Tutors
Doraville, GA algebra 2 Tutors
Forest Park, GA algebra 2 Tutors
Hiram, GA algebra 2 Tutors
Kennesaw algebra 2 Tutors
Lindale, GA algebra 2 Tutors
Shannon, GA algebra 2 Tutors
Silver Creek, GA algebra 2 Tutors
Union City, GA algebra 2 Tutors
Villa Rica, PR algebra 2 Tutors | {"url":"http://www.purplemath.com/Rome_GA_Algebra_2_tutors.php","timestamp":"2014-04-20T02:11:52Z","content_type":null,"content_length":"23484","record_id":"<urn:uuid:b6685930-874c-4181-810c-1cfcf43935d0>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cycle transversals in bounded degree graphs
Marina Groshaus, Pavol Hell, Sulamita Klein, Loana Tito Nogueira, Fabio Protti
In this work we investigate the algorithmic complexity of computing a minimum C_k-transversal, i.e., a subset of vertices that intersects all the chordless cycles with k vertices of the input graph, for a fixed k ≥ 3. For graphs of maximum degree at most three, we prove that this problem is polynomial-time solvable when k ≤ 4, and NP-hard otherwise. For graphs of maximum degree at most four, we prove that this problem is NP-hard for any fixed k ≥ 3. We also discuss polynomial-time approximation algorithms for computing C_3-transversals in graphs of maximum degree at most four, based on a new decomposition theorem for such graphs that leads to useful reduction rules.
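For intuition only, the smallest C_3-transversal of a toy graph (a vertex set hitting every triangle) can be found by exhaustive search — exponential, and unrelated to the paper's structural results, but it makes the object concrete:

```python
from itertools import combinations

def min_c3_transversal(adj):
    # adj: symmetric boolean adjacency matrix of a small graph
    n = len(adj)
    tris = [t for t in combinations(range(n), 3)
            if adj[t[0]][t[1]] and adj[t[1]][t[2]] and adj[t[0]][t[2]]]
    # try vertex subsets in order of increasing size; the first subset
    # that meets every triangle is a minimum C_3-transversal
    for k in range(n + 1):
        for s in combinations(range(n), k):
            if all(set(t) & set(s) for t in tris):
                return set(s)

# K4: no single vertex meets all four triangles, but any pair does
k4 = [[i != j for j in range(4)] for i in range(4)]
assert len(min_c3_transversal(k4)) == 2
```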
Full Text:
PDF PostScript | {"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/1534","timestamp":"2014-04-16T04:18:57Z","content_type":null,"content_length":"11902","record_id":"<urn:uuid:bdb525e3-f8f7-4327-b33e-9c1ba57a9e92>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
Angle Measurement | Geometry
What if you were given the measure of an angle and two unknown quantities that make up that angle? How would you find the values of those quantities? After completing this Concept, you'll be able to
use the Angle Addition Postulate to evaluate such quantities.
Watch This
James Sousa: Animation of Measuring Angles with a Protractor
Then look at the first part of this video.
An angle is formed when two rays have the same endpoint. The vertex is the common endpoint of the two rays that form an angle. The sides are the two rays that form an angle.
Label It | Say It
$\angle ABC$ | Angle $ABC$
$\angle CBA$ | Angle $CBA$
The vertex is $B$ and the sides are $\overrightarrow{BA}$ and $\overrightarrow{BC}$. Always use three letters to name an angle, with the vertex letter in the middle: $\angle ABC$ or $\angle CBA$.
Angles are measured with something called a protractor. A protractor is a measuring device that measures how “open” an angle is. Angles are measured in degrees and are labeled with a $^\circ$ symbol.
There are two sets of measurements, one starting on the left and the other on the right side of the protractor. Both go around from $0^\circ$ to $180^\circ$, so either scale can be used as long as one side of the angle lines up with $0^\circ$.
Note that if you don't line up one side with $0^\circ$, you will have to subtract the reading at one side of the angle from the reading at the other to find its measure.
Sometimes you will want to draw an angle that is a specific number of degrees. Follow the steps below to draw a $50^\circ$ angle.
1. Start by drawing a horizontal line across the page, 2 in long.
2. Place an endpoint at the left side of your line.
3. Place the protractor on this point, such that the line passes through the $0^\circ$ mark, and make a small mark at $50^\circ$.
4. Remove the protractor and connect the vertex and the $50^\circ$ mark.
This process can be used to draw any angle between $0^\circ$ and $180^\circ$. See http://www.mathsisfun.com/geometry/protractor-using.html for an animation of this.
When two smaller angles form to make a larger angle, the sum of the measures of the smaller angles will equal the measure of the larger angle. This is called the Angle Addition Postulate. So, if $B$ is in the interior of $\angle ADC$, then $m \angle ADC = m \angle ADB + m \angle BDC$.
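The Angle Addition Postulate can be checked numerically (a small Python sketch; the lesson itself contains no code):

```python
def m_angle_adc(m_angle_adb, m_angle_bdc):
    # Angle Addition Postulate: if B is in the interior of angle ADC,
    # the measures of the two smaller angles sum to the larger one.
    return m_angle_adb + m_angle_bdc

# Example B's angles obey the postulate: 84 + 42 = 126 degrees
assert m_angle_adc(84, 42) == 126
# Guided Practice 3: 15 + 30 = 45 degrees
assert m_angle_adc(15, 30) == 45
```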
Example A
How many angles are in the picture below? Label each one.
There are three angles with vertex $U$.
So, the three angles can be labeled $\angle XUY$ (or $\angle YUX$), $\angle YUZ$ (or $\angle ZUY$), and $\angle XUZ$ (or $\angle ZUX$).
Example B
Measure the three angles from Example A, using a protractor.
Just like in Example A, it might be easier to measure these three angles if we separate them.
With measurement, we put an $m$ in front of the $\angle$ sign to denote the measure of the angle: $m\angle XUY = 84^\circ$, $m\angle YUZ = 42^\circ$, and $m\angle XUZ = 126^\circ$.
Example C
What is the measure of the angle shown below?
This angle is lined up with $0^\circ$ on one side, and the other side crosses the scale at $50^\circ$, so the angle measures $50^\circ$.
Guided Practice
1. What is the measure of the angle shown below?
2. Use a protractor to measure $\angle RST$
3. What is $m \angle QRT$?
1. This angle is not lined up with $0^\circ$, so we must subtract the readings at the two sides of the angle. Either scale gives the same answer:
Inner scale: $140^\circ - 15^\circ = 125^\circ$
Outer scale: $165^\circ - 40^\circ = 125^\circ$
2. Lining up one side with $0^\circ$, the other side of the angle crosses the scale at $100^\circ$, so $m\angle RST = 100^\circ$.
3. Using the Angle Addition Postulate, $m \angle QRT = 15^\circ + 30^\circ = 45^\circ$
1. What is $m \angle LMN$ if $m \angle LMO = 85^\circ$ and $m \angle NMO = 53^\circ$?
2. If $m\angle ABD = 100^\circ$, solve for $x$.
For questions 3-6, determine if the statement is true or false.
3. For an angle $\angle ABC$, $C$ is the vertex.
4. For an angle $\angle ABC$, $\overline{AB}$ and $\overline{BC}$ are the sides.
5. The $m$ in $m \angle ABC$ stands for the measure of the angle.
6. The Angle Addition Postulate says that an angle is equal to the sum of the smaller angles around it.
For 7-12, draw the angle with the given degree, using a protractor and a ruler.
7. $55^\circ$
8. $92^\circ$
9. $178^\circ$
10. $5^\circ$
11. $120^\circ$
12. $73^\circ$
For 13-16, use a protractor to determine the measure of each angle.
Solve for $x$
17. $m\angle ADC = 56^\circ$ | {"url":"http://www.ck12.org/geometry/Angle-Measurement/lesson/Angle-Measurement/","timestamp":"2014-04-16T23:04:46Z","content_type":null,"content_length":"130962","record_id":"<urn:uuid:202c59be-c395-45f1-b73c-ddec873dc39b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
Annie saw the picture of a bridge made with triangles. The measure of one angle in each triangle is 90˚. Which type of triangle has this angle property? Right Obtuse Acute Equiangular
"RIGHT" triangle.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f42a3b8e4b065f388dc4882","timestamp":"2014-04-20T16:15:26Z","content_type":null,"content_length":"30046","record_id":"<urn:uuid:e57ff2a1-9837-4f24-90dc-3ebd1cef572b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is the converse statement true?
June 1st 2010, 06:18 PM #1
Jun 2010
Is the converse statement true?
Let $m,n$ be natural numbers.
Does $\tau(mn)=\tau(m)\tau(n)$, where $\tau(k)$ is the number of positive divisors of $k$, imply that $(m,n)=1$?
Last edited by melese; June 2nd 2010 at 06:37 AM.
You probably mean $\tau(mn)=\tau(m)\tau(n)$?
Too slow
Correct me if I'm wrong anyone but I think it might be true...
Here's some reasoning:
Suppose $mn=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$ and $m=p_1^{a_1}p_2^{a_2}\cdots p_i^{b_i}\cdots p_j^{b_j}$ and $n=p_i^{a_i-b_i}\cdots p_j^{a_j-b_j}p_{j+1}^{a_{j+1}}\cdots p_k^{a_k}$.
$\tau(mn)=\prod_{s=1}^k (a_s+1) = \tau(m)\tau(n)$
Omitting the simplification we get $\prod_{s=i}^j (a_s+1)=\prod_{s=i}^j (b_s+1)(a_s-b_s+1) = \prod_{s=i}^j (b_s(a_s-b_s)+a_s+1)$
But $b_r>0\implies a_r+1<b_r(a_r-b_r)+a_r+1$, so we're forced to have $b_s=0 \; \forall \; s$ if we want this equality to hold.
If you prove it, then it's true!
Here's a less computational argument. Let $D_n$ denote the set of divisors of $n$, and suppose that $g=(m,n)>1$. I exhibit a surjection $f : \ D_m \times D_n \to D_{mn}$ which is not an
injection. For every $(u,v) \in D_m \times D_n$, let $f(u,v)=uv$. This map is clearly a surjection. But $f(m/g,n)=f(m,n/g)$. Therefore $|D_m\times D_n|>|D_{mn}|$.
yes I do, I wrote that early in the morning...
I have an argument but I'm not sure...
Allowing the exponents to be zero, if necessary, we can write the following (similar) prime factorizations:
$m=p_1^{m_1}p_2^{m_2}\dots p_r^{m_r}$ and $n=p_1^{n_1}p_2^{n_2}\dots p_r^{n_r}$, where for $1\leq i\leq r$, $m_i$ and $n_i$ are nonnegative. Also, for any prime $p_i$, $p_i$ divides at least one of
$m$ and $n$.
Now, $mn=p_1^{m_1+n_1}p_2^{m_2+n_2}\dots p_r^{m_r+n_r}$ and then $\tau(mn)=(m_1+n_1+1)(m_2+n_2+1)\dots (m_r+n_r+1)$.
Computing $\tau(m)\tau(n)$ gives $\tau(m)\tau(n)=(m_1+1)(n_1+1)\cdots(m_r+1)(n_r+1)=(m_1n_1+m_1+n_1+1)\cdots(m_rn_r+m_r+n_r+1)$.
Now comes my difficulty...
For each $i=1,2,\dots ,r$, $m_in_i+m_i+n_i+1\leq m_i+n_i+1$, so if we want $\tau(mn)=\tau(m)\tau(n)$ we must have
$m_in_i+m_i+n_i+1= m_i+n_i+1$ for $i=1,2,\dots ,r$. Otherwise $\tau(mn)<\tau(m)\tau(n)$.
But then $m_in_i=0$. Without loss of generality $m_i=0$ and this means that $p_i$ divides n but not m. In this manner $m$ and $n$ are relatively prime.
By the way, exactly one of $m_i$ and $n_i$ equals $0$ due to: "for any prime $p_i$, $p_i$ divides at least one of $m$ and $n$."
Is my argument right? Thanks for your help.
Last edited by melese; June 2nd 2010 at 08:06 AM. Reason: I scribled to see if I can re-edit.
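The equivalence discussed in this thread — $\tau(mn)=\tau(m)\tau(n)$ exactly when $m$ and $n$ are coprime — is easy to sanity-check by brute force (a quick Python sketch, purely illustrative):

```python
from math import gcd

def tau(n):
    # number of positive divisors of n, by trial division
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# exhaustive check of the equivalence for small m, n
for m in range(1, 50):
    for n in range(1, 50):
        assert (tau(m * n) == tau(m) * tau(n)) == (gcd(m, n) == 1)
```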
You mean $m_in_i+m_i+n_i+1\geq m_i+n_i+1$.
Other than that, it looks fine to me.
Again, you corrected me. Thank you so much for your help!
Jun 2010 | {"url":"http://mathhelpforum.com/number-theory/147383-converse-statement-true.html","timestamp":"2014-04-18T15:21:38Z","content_type":null,"content_length":"78329","record_id":"<urn:uuid:7e39b6c3-9a2b-447e-86e8-d626ff2eb5c0>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monyberumen wrote:
Hello all,
I recently took a practice CAT and got a very, very disappointing score... let's just say it's around low 400. EEEK. I obviously need more practice in math and need to know if there is any practice that has helped anyone immensely. Math has ALWAYS been the struggle in my life and continues to be so. I am currently doing the Manhattan GMAT books for math, which are good. But I would like to diversify and get the optimal practice. Any tricks or tactics? Any suggestions? Any chance of hope?????
thank you
Math frustrated.
My friend, I'm happy to help.
First of all, here's a free blog with tons of math advice and practice problems:
I would suggest reading every article on that blog.
For more help, I would recommend
. We have over 400 GMAT math practice questions, and each one has its own video explanation, for accelerated learning. We have over 150 math lessons, covering all the content and strategies you need.
The blog & the
product also provide full verbal preparation, if you are interested in that as well. Here's a free PS question:
Here's a free DS question:
Here's our testimonial page:
Even if you decide not to take advantage of the
product, I hope you get the most out of the free blog.
Mike McGarry
Magoosh Test Prep | {"url":"http://gmatclub.com/forum/need-help-in-quantative-section-math-158227.html?kudos=1","timestamp":"2014-04-24T11:14:34Z","content_type":null,"content_length":"133119","record_id":"<urn:uuid:be772299-334d-4d6c-addb-4b82dd351f74>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sparklines and Data Bars in Excel 2010 - Peltier Tech Blog
There are two Conditional Formatting features in Excel 2010 which allow for graphical displays right in the worksheet. Sparklines, the word-sized graphical elements invented by Edward Tufte, are a
new addition to Excel 2010. Data Bars were introduced in Excel 2007, but they have been improved and expanded in 2010. I gave each a test drive today.
There are numerous products which add sparklines to Excel. Two popular commercial products are BonaVista Microcharts and Bissantz SparkMaker, which work by forming the graphics using specially
designed fonts. Sparklines For Excel is an open source add-in which works by drawing sets of shapes to construct the sparklines. These are full-featured sparkline programs which accommodate many
chart types and styles.
Microsoft has gotten its start in sparklines in Excel 2010. The Microsoft Excel Team Blog has discussed this new feature in Sparklines in Excel, Adding Some Spark to Your Spreadsheets, and Formatting
Sparklines. There are only three chart types: line, column, and high-low; and they do not have such features as baselines or axes or shaded zones. However, I think it’s a promising start.
To try out the sparklines, I loaded my blog stats into a pivot table, with months in rows and day of the month in columns. Creating the charts was easy. I selected the block of data to plot, then
clicked the Sparklines – Lines button on the Insert tab.
A simple dialog pops up with two RefEdit boxes, one for selecting the data, the other for selecting the cell(s) to contain the graphics. If you’ve selected cells containing data, the Data Range box
indicates this range; if you’ve selected a range of empty cells, the Location Range box indicates the selected range. These two ranges do not need to be on the same worksheet.
Click OK and the sparklines appear in the indicated position.
The weekly cycles within each month are readily apparent, but the scales aren’t quite right. The statistics begin in March 2008, about one month after starting my blog. The numbers on the first day
were nowhere near where they have been for the past few months.
The default setting is that each sparkline scales the vertical position of the data point so that its minimum and maximum fill the cell. When I selected the ‘Same for All Sparklines’ setting, the
scales of all sparklines are the same, so the earlier months are pretty much flat compared to more recent months.
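The effect of per-cell versus shared scaling is easy to mimic with text sparklines built from Unicode block characters (a Python sketch; this is only an analogy, not how Excel draws its sparklines):

```python
BLOCKS = "▁▂▃▄▅▆▇█"

def sparkline(values, lo=None, hi=None):
    # pass lo/hi to share one vertical scale across several sparklines,
    # like Excel 2010's "Same for All Sparklines" option; omit them to
    # let each sparkline fill its own cell (the default behavior)
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    span = (hi - lo) or 1
    return "".join(BLOCKS[int((v - lo) / span * (len(BLOCKS) - 1))]
                   for v in values)

early, recent = [5, 8, 6, 9], [80, 120, 95, 140]
print(sparkline(early))          # self-scaled: shows its own shape
print(sparkline(early, 0, 140))  # shared scale: flat next to recent months
print(sparkline(recent, 0, 140))
```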
It’s easy to change from one sparkline style to another: just click in the range of sparklines, then click the button for the desired style.
Above are the line and column styles. A third style is Win-Loss, which plots a positive block for any positive number, a negative block for any negative number, and a blank for a zero.
The sparklines in a set are grouped together, and when you select one cell, the whole range of sparklines is outlined with a thin blue border, as shown in the sine waves below. When you apply
sparkline formatting to a cell containing a sparkline, all sparklines in the group assume this formatting. You can ungroup the sparklines, and format them individually.
Although there are a limited number of sparkline chart types, within each type you have a range of formatting options. In the line charts, you can draw the line only, you can add markers, and you can
format positive and negative markers differently (there is apparently no distinction between zero and positive markers). You can highlight the first and last points, and the high and low points. I
added the last group to see what happened when the cells contained both sparklines and content. Apparently the sparklines decorate the back of the cell, and the text appears in front.
You can format the bar chart sparklines all the same (top left) or different positive and negative bars (top right). You can highlight the first and last bars, or the high and low bars (bottom left
and center). Any text in the cell appears in front of the sparkline.
The top center group below shows how one sparkline can be ungrouped in order to format it differently. The other three have remained grouped despite not being contiguous.
Data Bars
Data Bars appeared in Excel 2007 as a way to show values visually using bars within the cells containing the values. In 2010 the capabilities of Data Bars were expanded, and the ability to make
deceptive Data Bars was reduced.
Data Bars in 2007 had their base at the left edge of the cell, and they extended to the right. If a cell had a zero value, it still had a small length of bar, giving the wrong impression that the
cell really did contain some value. Also, the bars started out with a good enough color at the base, but they faded to the right, so in some cases it became very difficult to judge where the bars ended.
These deficiencies were corrected in Excel 2010. Below left is the Excel 2007 representation of data bars. The first few cells contain zeros, but the cells have data bars of finite length. The bars
increase in size appropriately, but they fade out from left to right, and eyes are forced to work just too hard to distinguish their ends. Finally, at the bottom the values decline asymptotically
towards zero, but the bars don’t completely vanish.
In the center is the Excel 2010 version of the same data bars. Zero equals zero, and the bars have a distinct endpoint. You still can make the faded bars, but you know better. I’m not sure whether
you can make nonzero bars for zero values; I don’t think so, and I hope not.
In Excel 2010 you can make your bars go right to left, as shown below right.
The unorthodox treatment of non-positive values in Excel 2007′s data bars is further illustrated below. At left, since Excel 2007 didn’t allow for negative or right-to-left data bars, the sine wave
showed positive bars, even for the most negative value. The Excel 2010 data bars plot negative values in the opposite direction, optionally in a different color.
You can control the color of the axis where positive meets negative, but unfortunately you cannot change the line style. The designers have picked a dashed line, which by nature of its discontinuous
dashes, draws more attention to itself than a solid line would. However, data bars have been improved so much, that a small cosmetic problem like this isn’t too important.
Backwards Compatibility
A commenter on one of the Sparklines blog posts wondered what would happen if a workbook with sparklines were opened in a previous version of Excel. I tried it and discovered:
• A workbook containing sparklines will show blank cells when opened in Excel 2007 or 2003.
• A workbook containing Excel 2010 data bars will show Excel 2007 style data bars when opened in Excel 2007, and blank cells when opened in Excel 2003.
In either case, you receive a warning about the file being created in a later version of Excel, and while your version of Excel will do its best to open the workbook, there may be some formatting
which will not be faithfully displayed.
I also tried round-tripping a workbook through previous versions of Excel:
• When a workbook with 2010 data bars and sparklines is opened and saved in Excel 2003, and reopened in Excel 2010, the sparklines have vanished, and the data bars have reverted to Excel 2007 style
(bars fade at the ends and all start at the minimum at the left and extend to the right, although there are no positive length zero values).
• When a workbook with 2010 data bars and sparklines is opened and saved in Excel 2007, and reopened in Excel 2010, the sparklines reappear, and the data bars retain their Excel 2010 style.
Apparently these features survive displacement by one version, but not by two.
Excel 2010 introduces a simple version of sparklines for compact visualization in the worksheet. The native Excel sparklines are not as comprehensive as existing third-party solutions can produce,
but they can still be useful in many cases.
Excel 2010 also fixes conceptual problems and cosmetic issues with the data bars that were introduced in Excel 2007.
1. sam says:
The only small deficiency that I found with the Data Bar is the inability to adjust the width of the bar.
If you increase the Row Height then the Data bar widens as well…
2. jeff weir says:
Hi Jon. It would be interesting to see how the new sparklines compare head-to-head against some in-cell graphs, given you can put a cell-sized chart in 2007 and have all the formatting options you get with any normal 2007 chart.
For instance, I’ve posted a picture of an implementation of in-cell graphs and some closeups at http://cid-f380a394764ef31f.skydrive.live.com/browse.aspx/.Public?uc=3 – (the relevant picture files all start with MJ or MU)
With this kind of functionality, I can only imagine that sparklines would make this kind of thing easier for the average user, but might not add much (if anything) for a more advanced user. It’s possible the new sparklines functionality might actually suck compared to what you can do with in-cell graphs. Your thoughts?
PS…is it possible to use tags when writing comments here? (And if so, which ones, and how do we use them)
Jeff -
I had experimented with “real” charts as sparklines. You can do it in Classic Excel if you give yourself some margin, by making the chart object no less than about four rows high, making the plot
area one row high, and positioning the chart object so the plot area overlies the cell. Pain in the neck, but that’s why Bill Gates invented VBA. 2007 made this a bit easier by removing the
margin, the several pixels all around the chart area that are inaccessible in Classic Excel, and by allowing the whole chart to shrink to a cell’s height and still show its contents. But other
issues with 2007 charting limited my experimenting there. Your mini charts are interesting.
Fernando Cinquegrani, who does some amazing graphics in Excel, experimented in Excel 2003, using Camera images of charts shrunk down to cell size. This was pretty interesting, though using more
than half a dozen or so camera objects in a worksheet is asking for trouble. I don’t see this example on his web page.
And I don’t know about the tags. I’d like to get threaded commenting hooked up too, one of these days.
4. Colin Banfield says:
Jon, great article capturing the essence of these two features. It never occurred to me to consider Sparklines as conditional formatting because, well, they aren’t based on any condition, or fall
under the Conditional Formatting umbrella…
I like how Sparklines work with tables – very much like a calculated column. Refreshing a table from an external data source automatically adds new Sparklines to the “calculated column.” Alas,
although you can fake out Sparklines in a PivotTable, it’s all pretty much a manual process to update, configure for different fields etc. In other words, any serious visual analytics is pretty
much out of the question.
Despite seeing axes on Sparklines in the Excel Blog postings, I haven’t been able to display them at all. After selecting the “Show Axis” option, I don’t see any axis displayed in the cell(s). Is
this option working for you?
Colin -
“Show Axis” did nothing that I could see. I could plot left-to-right, and it seemed that changing to a date scale also worked (though I didn’t try hard to break it).
I wouldn’t say that “serious” analytics is out of the question. As always, you’d need careful planning, and probably some clever VBA. If you need to extend your sparklines, since it’s contained
in the formatting of the cell, you can just fill the cell down or right as needed.
6. Jerry Betz says:
Hi Jon-
Does the Excel 2010 Visual Basic Editor pick up actions when working with Sparklines? It would be good if the actions were picked up properly so we could learn the new object syntax, unlike how
charting worked in 2007.
Jerry -
I haven’t tried recording a sparklines macro yet. It was getting late and I was approaching a deadline (bedtime).
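For reference, the beta object browser suggests sparklines hang off a Range through a SparklineGroups collection. Here is a minimal sketch, with the caveat that the names and arguments come from a pre-release build and may still change before Excel 2010 ships:

```vb
' Sketch of the Excel 2010 sparkline object model, based on the beta
' object browser -- treat the names and arguments as provisional.
Sub AddSparklineSketch()
    Dim sg As SparklineGroup
    ' Put a line sparkline in G2 that summarizes the data in A2:F2
    Set sg = Range("G2").SparklineGroups.Add( _
        Type:=xlSparkLine, SourceData:="A2:F2")
    ' Formatting appears to live on the group, not on individual cells
    sg.SeriesColor.Color = RGB(0, 112, 192)
End Sub
```

Column and win/loss variants would presumably swap in the xlSparkColumn and xlSparkColumnStacked100 constants for the Type argument.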
8. Colin Banfield says:
My macro recording experience with Sparklines is similar to that with charts. Add some formatting during recording and the macros bomb on playback.
9. Jan Karel Pieterse says:
The axis option is available on later 2010 builds than the one we got.
Same seems to go for recording macros.
MSFT is still working on this thing.
10. James says:
Hi Jon,
These look like very useful new features. I liked using the conditional formatting in 2007, but never could work out how to get them out of Excel, i.e., to get data bars in a Word table. Do you know of any way to do so?
I haven’t tried doing this, but maybe you could embed the Excel worksheet in Word, or copy the table in Excel, and paste it as a picture (probably bitmap) in Word.
12. Christian Bracchi says:
Would it be possible to get an explanation of how you were able to actually insert the sparklines into the PivotTable? It looks like you have managed to get a blank column in the table and insert the sparklines in there.
Christian -
I sure wish I could remember how I did that. I did it originally many months ago during the technical review. That was two versions ago, I deleted the virtual machine that was on, and I didn’t
save the files.
If anyone knows how I did that, please remind me.
14. nutsch says:
I manage to replicate something similar by:
- adding an empty column with a bogus title in the data source,
- inserting that column as the last row field,
- replacing (blank)s by spaces,
- replacing title by space,
- changing layout of previous row field to Show item labels in tabular form.
That creates an empty column in which you can put your sparklines.
15. Skip says:
Sparklines look like a quick and easy way to add some visualization.
The problem I have is using the parent company’s form that shows expenses as negative amounts. The sparklines show the opposite of what they should (up for high expense, down for low expense). I know I can add a hidden row and change the sign. Is there any way to condition the sparkline cell to change the sign?
Thank you
16. Tim says:
Where do you find databars? You show where sparklines are but not databars. What up? Once again a waste of my time!! If you like sparklines so much don’t mention databars cause they’re not in the same spot!!!!!!!!!!!!!
Tim -
I mentioned at the top of the article that these graphical features were based on Conditional Formatting. One way to format values is via data bars, which can be found via the Conditional
Formatting button on the Home tab. I’ll include a screenshot when I get a chance.
Sorry to have “wasted” your time.
18. Julian Brinckmann says:
I am having trouble with the Data Bars in Excel 2010. I have data with extreme negative numbers but only small positive values. Now I want the data bar to extend almost all the way in the negative direction, but the cell should end shortly after the positive end of the data bar. I could not find a way to format the cell to do this.
Maybe you guys could help me out.
Thanks in advance
Julian -
I entered a few numbers into a new sheet and applied data bars. I don’t see the problem you’re reporting:
20. Gerardo Serrano says:
Hi, is there a way to format databars in more than 2 colors, I mean as in red, green and yellow? Because there is only a format for positive and negative.
Gerardo -
You can have two colors if negative values are a different color (or you can format negatives to use the same color as positives). No more colors than that can be applied.
[...] Here is a direct link to the post on the PTS blog. [...]