Homework Help
Posted by ELIJAH HAVA on Sunday, May 5, 2013 at 7:58pm.
• maths - Reiny, Sunday, May 5, 2013 at 8:43pm
I will assume that your base is 10
log(x^2 + 1) = x/6
x^2 + 1 = 10^(x/6)
There is no easy way to do this, so I will resort to good ol' Wolfram, (one of the best math sites on the net)
as you can see
x = 0
x = .416103
x = 3.6267
I tested all three answers; they all work
I had done this question before for somebody else.
However, in that solution I assumed the base was e
and of course we got 3 different answers
Related Questions
Maths - Solve for x: 6log(x^2 +1) - x = 0
Maths - Solve for the value of x: 6log(x^2+1)-x=0
Please solve it for me!! Please...Maths - Solve for the value of x: 6log(x^2+1)-...
Please Ms Sue Help me! Alhorithm - Solve for the value of x: 6log(x^2+1)-x=0. I ...
Please help me solve it. Math - solve for the value of x: 6log (x^2+1)-x=0. ...
Please solve it.Logarithm - Please help me because I do not any idea in ...
Math - Make a complete research project on maths is interconnected with other ...
Maths B - Charlotte received a score of 68 on both her English and Maths tests. ...
Maths B - Charlotte received a score of 68 on both her English and Maths tests. ...
maths letaracy,history,geography and life science - im going to grade 10 next ... | {"url":"http://www.jiskha.com/display.cgi?id=1367798305","timestamp":"2014-04-19T07:36:06Z","content_type":null,"content_length":"8810","record_id":"<urn:uuid:3306c1fd-8462-4c63-b2c0-34cbdcefd3de>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
A. History of Kuramoto model
B. Problem formulation
A. Notation
B. Index theorem
C. Lower bounds on the stable set
IV. LARGE N LIMIT
A. Comprehensive example for three oscillators
B. Example for four oscillators | {"url":"http://scitation.aip.org/content/aip/journal/chaos/22/3/10.1063/1.4745197","timestamp":"2014-04-17T04:58:36Z","content_type":null,"content_length":"72957","record_id":"<urn:uuid:3e316cc1-b64d-4908-9854-31ab12710353>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binary Curve López-Dahab Coordinates
'López-Dahab Coordinates' (short: LD coordinates) are used to represent elliptic curve points on binary curves y^2 + xy = x^3 + ax^2 + b. In 'López-Dahab Coordinates' the triple (X, Y, Z) represents
the affine point (X / Z, Y / Z^2).
Point Doubling (up to 5M)
Let (X, Y, Z) be a point (unequal to the 'point at infinity') represented in 'López-Dahab Coordinates'. Then its double (X', Y', Z') can be calculated by
if (X == 0)
return POINT_AT_INFINITY
A = X^2
B = Z^2
Z' = A*B
C = A^2
D = b*B^2
X' = C + D
Y' = D*Z' + X'*(a*Z' + Y^2 + D)
return (X', Y', Z')
Note that the total number of field multiplications can be reduced if the curve coefficients a and b are carefully chosen.
Point Addition (up to 14M)
Let (X1, Y1, Z1) and (X2, Y2, Z2) be two points (both unequal to the 'point at infinity') represented in 'López-Dahab Coordinates'. Then the sum (X3, Y3, Z3) can be calculated by
A = X1*Z2 + X2*Z1
B = Y1*Z2^2 + Y2*Z1^2
if (A == 0)
if (B != 0)
return POINT_AT_INFINITY
return POINT_DOUBLE(X1, Y1, Z1)
C = Z1*A
D = Z2*C
Z3 = D^2
X3 = D*(A^2 + B) + B^2 + a*Z3
E = C*D
F = E^2*Y2
G = X3 + X2*E
Y3 = Z3*X3 + F + B*D*G
return (X3, Y3, Z3)
Note that if curve coefficient a is carefully chosen, the number of field multiplications can be reduced to 13M. | {"url":"http://point-at-infinity.org/ecc/Binary_Curve_Lopez-Dahab_Coordinates.html","timestamp":"2014-04-20T00:37:59Z","content_type":null,"content_length":"2151","record_id":"<urn:uuid:250c2f85-f318-478b-8a33-ccf042ee7b67>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
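To make the formulas above concrete, here is a hedged Python sketch over a deliberately tiny field, GF(2^4) with reduction polynomial x^4 + x + 1, and curve coefficients a = b = 1; the field, the coefficients, and all helper names are illustrative choices, not part of the page itself:

```python
MOD = 0b10011  # x^4 + x + 1, irreducible over GF(2)
DEG = 4

def gf_mul(u, v):
    """Carry-less multiplication in GF(2^4), reduced modulo MOD."""
    r = 0
    while v:
        if v & 1:
            r ^= u
        v >>= 1
        u <<= 1
        if u & (1 << DEG):
            u ^= MOD
    return r

def gf_inv(u):
    """Brute-force inverse; acceptable in a 16-element field."""
    for w in range(1, 1 << DEG):
        if gf_mul(u, w) == 1:
            return w
    raise ZeroDivisionError("0 has no inverse")

def ld_double(P, a, b):
    """Point doubling with the LD formulas above; None = infinity."""
    X, Y, Z = P
    if X == 0:
        return None
    A = gf_mul(X, X)
    B = gf_mul(Z, Z)
    Zn = gf_mul(A, B)
    C = gf_mul(A, A)
    D = gf_mul(b, gf_mul(B, B))
    Xn = C ^ D
    Yn = gf_mul(D, Zn) ^ gf_mul(Xn, gf_mul(a, Zn) ^ gf_mul(Y, Y) ^ D)
    return (Xn, Yn, Zn)

def ld_add(P1, P2, a, b):
    """Point addition with the LD formulas above; None = infinity."""
    X1, Y1, Z1 = P1
    X2, Y2, Z2 = P2
    A = gf_mul(X1, Z2) ^ gf_mul(X2, Z1)
    B = gf_mul(Y1, gf_mul(Z2, Z2)) ^ gf_mul(Y2, gf_mul(Z1, Z1))
    if A == 0:
        return None if B != 0 else ld_double(P1, a, b)
    C = gf_mul(Z1, A)
    D = gf_mul(Z2, C)
    Z3 = gf_mul(D, D)
    X3 = gf_mul(D, gf_mul(A, A) ^ B) ^ gf_mul(B, B) ^ gf_mul(a, Z3)
    E = gf_mul(C, D)
    F = gf_mul(gf_mul(E, E), Y2)
    G = X3 ^ gf_mul(X2, E)
    Y3 = gf_mul(Z3, X3) ^ F ^ gf_mul(gf_mul(B, D), G)
    return (X3, Y3, Z3)

def to_affine(P):
    """(X, Y, Z) -> (X/Z, Y/Z^2)."""
    X, Y, Z = P
    zi = gf_inv(Z)
    return gf_mul(X, zi), gf_mul(Y, gf_mul(zi, zi))

def on_curve(x, y, a, b):
    """Check y^2 + xy = x^3 + a*x^2 + b in the field."""
    x2 = gf_mul(x, x)
    return gf_mul(y, y) ^ gf_mul(x, y) == gf_mul(x, x2) ^ gf_mul(a, x2) ^ b
```

Converting results back to affine and checking the curve equation is a cheap sanity test of the formulas; agreement with the usual affine chord-and-tangent formulas can be verified the same way.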
Diophantine equations
November 3rd 2012, 11:53 PM #1
Mar 2012
Diophantine equations
Hello, please help with suggestions or solutions.
1. Are there integers x and y such that $x^2 -5xy +y^2 =3$ ?
2. Are there a prime p and a natural number k such that $3p+1=k^3$ ?
3. Determine the natural numbers $2<a<b<c<d<e$ such that $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+ \frac{1}{d}+ \frac{1}{e} = 1$.
Thank you for any help
Last edited by amater2000; November 4th 2012 at 05:59 AM.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/pre-calculus/206716-diophantine-equations.html","timestamp":"2014-04-17T02:54:30Z","content_type":null,"content_length":"29365","record_id":"<urn:uuid:d1399f0a-e518-4c07-8fbf-6136d835b301>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Programming Job Interview Challenge #9
That’s it, the 9th post of the programming job interview challenge series is out and alive. 19 readers provided answers to job interview challenge #8; Pieter G was the first to provide a correct answer:
The fastest way I can come up with is to generate a finite state machine at initialization. The transitions between states would be defined by the records you look for in the pattern and one transition for an unmatched record. When the machine enters the goal state it should send the notification (how to most quickly do that I leave to someone else). When reaching the goal state the machine should not terminate but continue (else we may miss an occurrence).
You can see more details about the solution in those blog entries:
Those are the readers who provided correct answers in comments:
Pieter G, Dirk, Josh Bjornson, Tim C, Timothy Fries, Tristan, Michael Mrozek, Edward Shen, Alex, Trinity, Antoine Hersen and Mark Brackett.
Again, as last week, the number of incorrect answers is tiny and there is no major pattern in them, so I will not refer to them. But if someone thinks that I misunderstood his answer, please leave me a comment about it.
This week’s question:
This question is about trees and it is very simple. It is a logical question, so there is no need for code.
You have a tree (let's assume it is a binary tree, but it could be any kind of tree); you need to provide an efficient algorithm to locate the one-before-last node in an in-order traverse (left -> root -> right). I am not looking for an O(whatever) answer because the worst case is always O(n) in case the tree is a list, so think about your solution carefully; we want a practical solution, so O(n/2) (on a normal tree) is better than O(n). The tree is not sorted and there is no meaning to the values in the nodes; we want the location in the traverse. For example:
Here we would like to get the pink node.
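One possible sketch of such an algorithm (an illustration, not necessarily the answer the author has in mind; the Node class and function name are invented here): the last node of an in-order walk is the rightmost node, so the one-before-last is either the rightmost node of that node's left subtree, or its parent. Either way we only walk down right spines, so the cost is proportional to the height rather than to n:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def one_before_last_inorder(root):
    """Locate the one-before-last node of an in-order traverse.

    The last node visited in-order is the rightmost node. Its
    predecessor is the rightmost node of its left subtree if it has
    one, and its parent otherwise. Returns None for a 1-node tree.
    """
    parent = None
    node = root
    while node.right is not None:       # walk down to the rightmost node
        parent, node = node, node.right
    if node.left is not None:           # predecessor lies in its left subtree
        node = node.left
        while node.right is not None:
            node = node.right
        return node
    return parent                       # otherwise the predecessor is the parent
```

Only a tree that degenerates into a pure right spine forces the full O(n) walk, which matches the worst case mentioned above.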
As always you may post the solution in your blog or comment. Comments will be approved next week.
Good luck | {"url":"http://www.dev102.com/2008/06/23/a-programming-job-interview-challenge-9/","timestamp":"2014-04-19T14:49:50Z","content_type":null,"content_length":"68847","record_id":"<urn:uuid:55c699f6-5667-46c7-bc64-b1f0a9839cf3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
Comparing quantiles for two samples
Recently, for a research paper, I had some samples, and I wanted to compare them. Not to compare their means (by construction, all of them were centered) but their dispersion. And not their variance, but rather their quantiles. Consider the following boxplot-type function, where everything here is quantile related (which is not the case for standard boxplots, see http://freakonometrics.hypotheses.org/4138, in French)
> boxplotqbased=function(x){
+ q=quantile(x[is.na(x)==FALSE],c(.05,.25,.5,.75,.95))
+ plot(1,1,col="white",axes=FALSE,xlab="",ylab="",
+ xlim=range(x,na.rm=TRUE),ylim=c(1-.6,1+.6))
+ polygon(c(q[2],q[2],q[4],q[4]),1+c(-.4,.4,.4,-.4))
+ segments(q[1],1-.4,q[1],1+.4)
+ segments(q[5],1,q[4],1)
+ segments(q[5],1-.4,q[5],1+.4)
+ segments(q[1],1,q[2],1)
+ segments(q[3],1-.4,q[3],1+.4,lwd=2)
+ xt=x[(x<q[1])|(x>q[5])]
+ points(xt,rep(1,length(xt)))
+ axis(1)
+ }
(one can easily adapt the code for lists, e.g.). Consider for instance temperature, when the (linear) trend is removed (see http://freakonometrics.hypotheses.org/1016 for a discussion on that series,
in Paris),
from January 1st till December 31st. Let us remove now the seasonal cycle, i.e. we do have here the difference with the average seasonal temperature (with here upper and lower quantiles),
Seasonal boxplots are here (with Autumn on top, then Summer, Spring and Winter, below),
If we zoom in, we do have (where upper and lower segments are 95% and 5% quantiles, while classically, boxes are related to the 75% and the 25% quantiles)
Is there a (standard) test to compare quantiles – some of them perhaps ? Can we compare easily quantiles when have two (or more) samples ?
Note that this example on temperature could be related to other old posts (see e.g. http://freakonometrics.hypotheses.org/2190), but the research paper was on a very different topic.
Consider two (i.i.d.) samples $\{x_1,\cdots,x_m\}$ and $\{y_1,\cdots,y_n\}$, considered as realizations of random variables $X$ and $Y$. In all statistical courses, tests on the average are always
considered, i.e.
Usually, the idea in courses is to start with a one sample test, and to test something like
The idea is to assume that samples are from Gaussian variables,
$T = \frac{\overline{x} - \mu_\star}{\widehat{\sigma}/\sqrt{n}}$
Under $H_0$, $T$ has a Student t distribution. All that can be found in any Statistics 101 course. We can derive $p$-value, computing probabilities that $T$ exceeds the observed values (for two sided
tests, the probability that the absolute value of $T$ exceed the absolute value of the observed statistics). This test is closely related to the construction of confidence intervals for $\mu$. If $\
mu_\star$ belongs to the confidence interval, then it might be a suitable value. The graphical representation of this test is related to the following graph
Here the observed value was 1,96, i.e. the $p$-value (the area in red above) is exactly 5%.
To compare means, the standard test is based on
$T = {\overline{x} - \overline{y} \over \displaystyle\sqrt{{s_x^2 \over m} + {s_y^2 \over n}} }$
which has – under $H_0$ – a Student-t distribution, with $u$ degrees of freedom, where
$u = \frac{(s_x^2/m + s_y^2/n)^2}{(s_x^2/m)^2/(m-1) + (s_y^2/n)^2/(n-1)}.$
Here, the graphical representation is the following,
But tests on quantiles are rarely considered in statistical courses. In a general setting, define quantiles as
$Q_X(p)=\inf\left\{ x\in \mathbb R : p \le \mathbb P(X\leq x) \right\}$
one might be interested to test
$H_1:Q_X(p)\neq Q_Y(p)$
for some $p\in(0,1)$. Note that we might be interested also to test if
$H_0:Q_X(p_k)= Q_Y(p_k)$
for all $k$, for some vector of probabilities $\boldsymbol{p}=(p_1,\cdots,p_d)\in(0,1)^d$.
One can imagine that this multiple test will be more complex. But more interesting, e.g. a test on boxplots (are the four quantiles equal?). Let us start with something a bit more simple: a test on quantiles for one sample, and the derivation of a confidence interval for quantiles.
The important idea here is that it should be extremely simple to get $p$-values. Consider the following sample, and let us run a test to assess if the median can be zero.
> set.seed(1)
> X=rnorm(20)
> sort(X)
[1] -2.21469989 -0.83562861 -0.82046838 -0.62645381 -0.62124058 -0.30538839
[7] -0.04493361 -0.01619026 0.18364332 0.32950777 0.38984324 0.48742905
[13] 0.57578135 0.59390132 0.73832471 0.82122120 0.94383621 1.12493092
[19] 1.51178117 1.59528080
> sum(X<=0)
[1] 8
Here, 8 observations (out of 20, i.e. 40%) were below zero. But we do know the distribution of $N$, the number of observations below the target
$N=\sum_{i=1}^n \boldsymbol{1}(X_i\leq x_\star)$
It is a binomial distribution. Under $H_0$, it is a binomial distribution $\mathcal{B}(n,p_\star)$ where $p_\star$ is the probability target (here 50% since the test is on the median). Thus, one can
easily compute the $p$-value,
> plot(n,dbinom(n,size=20,prob=0.50),type="s",xlab="",ylab="",col="white")
> abline(v=sum(X<=0),col="red")
> for(i in 1:sum(X<=0)){
+ polygon(c(n[i],n[i],n[i+1],n[i+1]),
+ c(0,rep(dbinom(n[i],size=20,prob=0.50),2),0),col="red",border=NA)
+ polygon(21-c(n[i],n[i],n[i+1],n[i+1]),
+ c(0,rep(dbinom(n[i],size=20,prob=0.50),2),0),col="red",border=NA)
+ }
> lines(n,dbinom(n,size=20,prob=0.50),type="s")
which yields
Here, the $p$-value is
> 2*pbinom(sum(X<=0),20,.5)
[1] 0.5034447
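As a quick cross-check of that number (a Python sketch, not part of the original post), the two-sided p-value is just twice the binomial lower tail, which can be computed from scratch:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(T <= k) for T ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 8 of the 20 observations fell at or below the hypothesized median 0,
# so the two-sided p-value doubles the lower tail P(T <= 8):
p_value = 2 * binom_cdf(8, 20, 0.5)
print(round(p_value, 7))  # 0.5034447
```

This agrees with the R value 2*pbinom(8, 20, .5) above.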
Here the probability is easy to compute. But one can observe that there is some kind of asymmetry here. Actually, if the observed value was not 8, but 12, some minor changes should be made (to keep some symmetry),
> plot(n,dbinom(n,size=20,prob=0.50),type="s",xlab="",ylab="",col="grey")
> abline(v=20-sum(X<=0),col="red")
> for(i in 1:sum(X<=0)){
+ polygon(c(n[i],n[i],n[i+1],n[i+1])-1,
+ c(0,rep(dbinom(n[i],size=20,prob=0.50),2),0),col="red",border=NA)
+ polygon(21-c(n[i],n[i],n[i+1],n[i+1])-1,
+ c(0,rep(dbinom(n[i],size=20,prob=0.50),2),0),col="red",border=NA)
+ }
> lines(n-1,dbinom(n,size=20,prob=0.50),type="s")
Based on those observations, one can easily write a code to test if the $p_\star$-quantile of a sample is $x_\star$. Or not. For a two sided test, consider
> quantile.test=function(x,xstar=0,pstar=.5){
+ n=length(x)
+ T1=sum(x<=xstar)
+ T2=sum(x< xstar)
+ p.value=2*min(1-pbinom(T2-1,n,pstar),pbinom(T1,n,pstar))
+ return(p.value)}
Here, we have
> quantile.test(X)
[1] 0.5034447
Now, based on that idea, due to the duality between confidence intervals and tests, one can easily write a function that computes confidence interval for quantiles,
> quantile.interval=function(x,pstar=.5,conf.level=.95){
+ n=length(x)
+ alpha=1-conf.level
+ r=qbinom(alpha/2,n,pstar)
+ alpha1=pbinom(r-1,n,pstar)
+ s=qbinom(1-alpha/2,n,pstar)+1
+ alpha2=1-pbinom(s-1,n,pstar)
+ c.lower=sort(x)[r]
+ c.upper=sort(x)[s]
+ conf.level=1-alpha1-alpha2
+ return(list(interval=c(c.lower,c.upper),confidence=conf.level))}
> quantile.interval(X,.50,.95)
[1] -0.3053884 0.7383247
[1] 0.9586105
Because of the use of non-asymptotic distributions, we can not get exactly a 95% confidence interval. But it is not that bad, here.
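The order-statistic indices and the attained confidence level can be cross-checked outside R; the following Python sketch (function names invented here) mirrors the logic of the quantile.interval function above:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(T <= k) for T ~ Binomial(n, p); 0 when k < 0."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def binom_quantile(q, n, p):
    """Smallest k with P(T <= k) >= q (R's qbinom convention)."""
    k = 0
    while binom_cdf(k, n, p) < q:
        k += 1
    return k

def order_stat_ci(n, pstar=0.5, conf=0.95):
    """Order-statistic indices bounding the quantile, plus exact coverage."""
    alpha = 1 - conf
    r = binom_quantile(alpha / 2, n, pstar)
    s = binom_quantile(1 - alpha / 2, n, pstar) + 1
    coverage = binom_cdf(s - 1, n, pstar) - binom_cdf(r - 1, n, pstar)
    return r, s, coverage

# For n = 20 the interval runs from the 6th to the 15th order statistic,
# i.e. sort(x)[6] and sort(x)[15] in the R code, with coverage ~0.9586:
r, s, coverage = order_stat_ci(20)
print(r, s, round(coverage, 7))  # 6 15 0.9586105
```

The attained coverage matches the 0.9586105 reported by the R function, and the two indices pick out exactly the -0.3053884 and 0.7383247 bounds above.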
• Comparing quantiles for two samples
Now, to compare quantiles for two samples… it is more complicated. Exact tests are discussed in Kosorok (1999) (see http://bios.unc.edu/~kosorok/…) or in Li, Tiwari and Wells (1996) (see http://
jstor.org/…). For the computational aspects, as mentioned in a post published almost one year ago on http://nicebread.de/… there is a function to compare quantiles for two samples.
> install.packages("WRS")
> library("WRS")
Some multiple tests on quantiles can be performed here. For instance, on the temperature, if we compare quantiles for Winter and Summer (on only 1,000 observations since it can be long to run that
function), i.e. 5%, 25%, 75% and 95%,
> qcomhd(Z1[1:1000],Z2[1:1000],q=c(.05,.25,.75,.95))
q n1 n2 est.1 est.2 est.1_minus_est.2 ci.low ci.up p_crit p.value signif
1 0.05 1000 1000 -6.9414084 -6.3312131 -0.61019530 -1.6061097 0.3599339 0.01250000 0.220 NO
2 0.25 1000 1000 -3.3893867 -3.1629541 -0.22643261 -0.6123292 0.2085305 0.01666667 0.322 NO
3 0.75 1000 1000 0.5832394 0.7324498 -0.14921041 -0.4606231 0.1689775 0.02500000 0.338 NO
4 0.95 1000 1000 3.7026388 3.6669997 0.03563914 -0.5078507 0.6067754 0.05000000 0.881 NO
or if we compare quantiles for Winter and Summer
> qcomhd(Z1[1:1000],Z3[1:1000],q=c(.05,.25,.75,.95))
q n1 n2 est.1 est.2 est.1_minus_est.2 ci.low ci.up p_crit p.value signif
1 0.05 1000 984 -6.9414084 -6.438318 -0.5030906 -1.3748624 0.39391035 0.02500000 0.278 NO
2 0.25 1000 984 -3.3893867 -3.073818 -0.3155683 -0.7359727 0.06766466 0.01666667 0.103 NO
3 0.75 1000 984 0.5832394 1.010454 -0.4272150 -0.7222362 -0.11997409 0.01250000 0.012 YES
4 0.95 1000 984 3.7026388 3.873347 -0.1707078 -0.7726564 0.37160846 0.05000000 0.539 NO
(the following graphs are then plotted)
Those tests are based on the procedure proposed in Wilcox, Erceg-Hurn, Clark and Carlson (2013), online on http://tandfonline.com/…. They rely on the use of bootstrap samples. The idea is quite
simple actually (even if, in the paper, they use Harrell–Davis estimator to estimate quantiles, i.e. a weighted sum of ordered statistics – as described in http://freakonometrics.hypotheses.org/1755
– but the idea can be understood with any estimator): we generate several bootstrap samples, and compute the median for all of them (since our interest was initially on the median)
> Q=rep(NA,10000)
> for(b in 1:10000){
+ Q[b]=quantile(sample(X,size=20,replace=TRUE),.50)
+ }
Then, to derive a confidence interval (with, say, 95% confidence), we compute quantiles of those median estimates,
> quantile(Q,c(.025,.975))
2.5% 97.5%
-0.175161 0.666113
We can actually visualize the distribution of that bootstrap median,
> hist(Q)
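The same percentile-bootstrap loop is easy to sketch outside of R; here is a Python version using only the standard library (the sample, seed, and percentile convention are illustrative choices, not taken from the post):

```python
import random
import statistics

def bootstrap_median_ci(x, n_boot=10_000, conf=0.95, seed=1):
    """Percentile-bootstrap confidence interval for the median."""
    rng = random.Random(seed)
    n = len(x)
    meds = sorted(statistics.median(rng.choices(x, k=n))
                  for _ in range(n_boot))
    # One simple percentile convention; R's quantile() interpolates,
    # so its bounds can differ slightly.
    lo = meds[int((1 - conf) / 2 * n_boot)]
    hi = meds[int((1 + conf) / 2 * n_boot) - 1]
    return lo, hi

# Example: a small sample from a standard normal distribution.
rng = random.Random(0)
sample = [rng.gauss(0, 1) for _ in range(20)]
lo, hi = bootstrap_median_ci(sample)
```

As with the R version, the resulting interval straddles the sample median.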
Now, if we want to compare medians from two independent samples, the strategy is rather similar: we bootstrap the two samples – independently – then compute the median, and keep in mind the
difference. Then, we will look if the difference is significantly different from 0. E.g.
> set.seed(2)
> Y=rnorm(50,.6)
> QX=QY=D=rep(NA,10000)
> for(b in 1:10000){
+ QX[b]=quantile(sample(X,size=length(X),replace=TRUE),.50)
+ QY[b]=quantile(sample(Y,size=length(Y),replace=TRUE),.50)
+ D[b]=QY[b]-QX[b]
+ }
The 95% confidence interval obtained from the bootstrap difference is
> quantile(D,c(.025,.975))
2.5% 97.5%
-0.2248471 0.9204888
which is rather close to what can be obtained with the R function
> qcomhd(X,Y,q=.5)
q n1 n2 est.1 est.2 est.1_minus_est.2 ci.low ci.up p_crit p.value signif
1 0.5 20 50 0.318022 0.5958735 -0.2778515 -0.923871 0.1843839 0.05 0.27 NO
(where the difference here is the opposite of mine). And when testing 2 (or more) quantiles, the Bonferroni method can be used to take into account that those tests cannot be considered as independent.
The WRS package might be difficult to install. An efficient way might be to type
> install.packages(c("MASS", "akima", "robustbase"))
> install.packages(c("cobs", "robust", "mgcv", "scatterplot3d",
+ "quantreg", "rrcov", "lars", "pwr", "trimcluster",
+ "parallel", "mc2d", "psych", "Rfit"))
> install.packages("WRS", repos="http://R-Forge.R-project.org",
+ type="source")
> library(WRS) | {"url":"http://freakonometrics.hypotheses.org/4199","timestamp":"2014-04-20T06:02:16Z","content_type":null,"content_length":"126432","record_id":"<urn:uuid:837933f4-d21b-4b0f-a1cc-bedf2d7680d1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
stuck on this problem
March 23rd 2009, 12:45 PM #1
Mar 2009
stuck on this problem
Write an equation using the sine function which goes through the points (5pi/3, 2) and (10pi/3, -1). Next write an equation using the cosine function going through the same two points.
Is the answer
Y=3 sin(x)+.5
Y=-3 cos(x)+.5
I don't think these answers are right so could someone help me?
The points are in polar coordinates.
Use y = r sin(angle) to write your equations.
I can't use angles. I have to do it in form y=sin(x)+b
March 23rd 2009, 02:45 PM #2
Jan 2009
March 23rd 2009, 02:51 PM #3
Mar 2009 | {"url":"http://mathhelpforum.com/trigonometry/80191-stuck-problem.html","timestamp":"2014-04-21T15:55:43Z","content_type":null,"content_length":"32288","record_id":"<urn:uuid:d5b13fe6-b37a-4b48-8bfb-a669e9414373>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by vee
Total # Posts: 66
Chemistry !
The solubility of Mn(OH)2 is 3.04 x 10^-4 gram per 100 ml of solution . A) write the balanced chem equation for Mn(OH)2 in aqueous solution B) calculate the molar solubility of Mn(OH)2 at 25 degrees
celcius C) calculate the value of the solubility product constant, Ksp, for Mn...
Calculate the pH of a solution made by combining 40.0 ml of a 0.14 molar HBrO and 5.0 ml of 0.56 molar NaOH
Can u show the solution to this problem? What is the mass of silver that can be prepared for 1.00g of copper metal Cu (s)+2AgNO3(ag)= Cu(NO3)2(ag)+2Ag(s)
The _______ style is an attempt to revive the approach used by composers in the latter half of the eighteenth century. A. New Baroque B. Neo-classical C. Post-modern D. Pre-romantic
What is music called when it is written in 2 or more chords played simultaneously?
Thank you.
Please help with the below: Find an expression for the nth term of the given sequence. a) 3, 6, 12, 24, ... b) 151, 142, 133, 124, ...
Thank you.
Find a polynomial function of least degree with real coefficients satisfying the given properties. zeros -3, 0, and 4 f(1) =10
Please help. Solve the given system of equation. 3x -Y =-7
Thank you.
PLEASE help with the below. Solve the system of equations by first expressing it in matrix form as and then evaluating. a). 3x-2y=5 4x-y =-10 b). 3x -2y =-2 4x -y = 3
Thank you so much.
Please help me. Find the equation of the line satisfying the indicated properties. Express your answer in slope-intercept form. Passing through point (-2, 4) and perpendicular to the line containing
points (3, -1) and (5, -1).
PLEASE help. Solve the system of equations by first expressing it in matrix form as and then evaluating. a). 3x-2y=5 4x-y =-10 b). 3x -2y =-2 4x -y = 3
Please help. Solve the given system of equation. 3x -Y =-7
Solve the system of equations by first expressing it in matrix form as and then evaluating. a). 3x-2y=5 4x-y =-10 b). 3x -2y =-2 4x -y = 3
Thank you greatly!
Solve the system of equations by first writing it in matrix form and then using Gauss-Jordan elimination. x-4y =-5 -2x + 9y = 125
Thank you.
Express the system as an augmented matrix and solve using Gaussian elimination: x + 2y + z = 3, 2y + 3z = 2, -x + 2z = 1
Please help with: Express the system as an augmented matrix and solve using Gaussian elimination: x + 2y + z = 3, 2y + 3z = 2, -x + 2z = 1
Thank you.
How is this problem solved? It takes Bobby 60 minutes longer to wax the car than it does his brother Kevin. Together it takes them 50 minutes to wax the car. How long does it take each working alone?
Please tell me IF there is a solution to this question (or is it misprinted). Can this be solved? Jenny has 11 coins in her pocket, all of which are either nickels or dimes. If the value of the coins is 75¢, how many of each type of coin does she have?
How is this problem solved? It takes Bobby 60 minutes longer to wax the car than it does his brother Kevin. Together it takes them 50 minutes to wax the car. How long does it take each working alone?
Can this be solved? Jenny has 11 coins in her pocket, all of which are either nickels or dimes. If the value of the coins is 75¢, how many of each type of coin does she have?
DC = 10 What is the measure of angle ABD?
Consider the equilibrium system: N2O4 (g) = 2 NO2 (g), for which Kp = 0.1134 at 25 C and deltaH rxn is 58.03 kJ/mol. Assume that 1 mole of N2O4 and 2 moles of NO2 are introduced into a 5 L container. What will be the equilibrium value of [N2O4]? Options are: A) 0.358 M B) 0.0...
Chemistry Help Please :)
sorry.. accidentally posted on your question :/
Consider the equilibrium system: N2O4 (g) = 2 NO2 (g), for which Kp = 0.1134 at 25 C and deltaH rxn is 58.03 kJ/mol. Assume that 1 mole of N2O4 and 2 moles of NO2 are introduced into a 5 L container. What will be the equilibrium value of [N2O4]? Options are: A) 0.358 M B) 0.0...
Suppose you are going on a weekend trip to a city that is d miles away. Develop a model that determines your round-trip gasoline costs?
DC = 10 What is the measure of angle ABD?
if there are 4 cubes and 5 cubes on one side and 21 cubes on the other how many cubes are in each cup
150 mL of a 4.00 molar NaOH solution is diluted with water to a new volume of 1.00 liter. What is the new molarity of the NaOH?
how many grams of naOH must be dissolved to a total volume of 800 mL, if the desired molarity is 0.200 molar?
According to the National Transportation Safety Board, 10% of all major automobile crashes result in serious injury to at least one person involved in the crash. The Georgia Department of
Transportation reports that there are approximately 20 major automobile crashes per month...
b. What is the probability that, in 20 major crashes, between three and seven result in serious injury?
2,000,000 shares of capital stocks at $3 par value were issued the company issued half of the stock for cash at $8 per share, and earnded $90,000 during the first three months of operation , and
declared a cash dividend of $15,000 what would be the total paid in capital after ...
solve for solution 6m+n=17m-5n=8
We had to add 5.0 ml of pure acetic acid to the well and measure conductivity. Then we had to measure the conductivity of 0.01 M aqueous acetic acid. Explain what happens.
Algebra 1
Sorry, It's supposed to be a 'greater than or equal to' sign
Algebra 1
can anyone help! I have no clue how to do this.. PART 1: Use complete sentences to describe a real-world scenario that could be represented by the inequality 5x + 2y ≥ 45. PART 2: Choose one ordered pair that is a solution to the given inequality and explain what that ordered pa...
SCl6 is used in....?
No, because then it'd be an element.
SCl6 is used in....?
It is a chemical compound called silicon dioxide (AKA - silica)
Human Service
Maybe, 16 Wishes, starring Debby Ryan. It's almost a fantasy because of how unreal it is. It's a disney movie so it's totally kid friendly. :)
Ok then, it'd be $5.40...
The answer would be $95.40 because 6/100 is 0.06, i.e. 6%; multiply that by 90 to get 5.40, and add that to 90 to get $95.40.
Algebra 1- Virtual School
can anyone help! I have no clue how to do this.. PART 1: Use complete sentences to describe a real-world scenario that could be represented by the inequality 5x + 2y ≥ 45. PART 2: Choose one ordered pair that is a solution to the given inequality and explain what that ordered pa...
your welcome :)
I'm so sorry if i get this wrong, but I got p = 1.237
The grammar isn't perfect, but the wording is great. :) If you want the grammar to be 100% perfect then just post another question, and I'd be glad to help!!!
the percentile rank of t5 = 1.476
Give the numerical value for each of the following descriptions concerning normal distributions by referring to the table standard unit normal distribution for N(0,1) The 5th percentile of N(20,36)
Mean and variance
Give the numerical value for each of the following descriptions concerning normal distributions by referring to the table for N(0,1) The 5th percentile of N(20,36)
Calculate the quantity of heat released when 0.520 mol of sulfur is burned in air, using the enthalpy deltaH = -296
business communication
When persuading your audience to take some action, aren't you being manipulative and unethical?
techn. in crj
Discuss how a corrections system uses case management software to more efficiently and safely handle prisoner rehabilitation and transfers. Explain what might happen if two prisoners from rival gangs were made cell mates at the local prison because of a file mix-up.
Project Management
Search the Web for the following: Effective listening Effective meetings Project reports Identify several helpful techniques that were not presented in this chapter.
Project Management
Present two reasons scheduling resources is an important task and describe how outsourcing project work can help alleviate some of the common problems associated with multiproject resource
com 130
I'm asking no one to do it for me, I just want to know how to start. Thank you
com 130
Imagine you have been asked to communicate to several clients regarding a delay in the production of widgets your company produces. Your clients are both local and international. They have diverse
backgrounds, technical experience, and understanding." Your assignment is t...
COM 140
are you talking about the Job-Search Management or the Comprehensive Grammar
Lattice method/math
The lattice method originated in ancient India. Unlike the partial-products method, which relies on your knowledge of place value, you can use the lattice method as long as you are familiar with your basic facts. The cells are called lattices. Basically you multiply the numbe...
Cupertino Calculus Tutor
Find a Cupertino Calculus Tutor
I believe that the biggest hurdle to overcome with most struggling students is a fear of failure. Let me help your child to build the confidence they need to be successful. I'm an Australian high
school mathematics and science teacher, with seven years experience, who has recently moved to the bay area because my husband found employment here.
11 Subjects: including calculus, chemistry, physics, statistics
...I'm a patient tutor with a positive, collaborative approach to building mathematical skills for algebra, pre-calculus, calculus (single variable) and advanced calculus (multi-variable). Pre-
calculus skills are very valuable for success on the mathematics section of the SAT exam and the SAT Math...
22 Subjects: including calculus, geometry, accounting, statistics
...As the beneficiary, you will learn first how and then why the necessary basic knowledge and skills to ace any class in physics and math within three months - this is a guarantee, but as a must,
you need to do your part, that is to follow instructions, practice and retain what you have been taught...
15 Subjects: including calculus, physics, statistics, geometry
...Thank you for looking at my page. I've recently graduated from Santa Clara University with a Biochemistry degree. While I was obtaining it, I spent many hours helping my fellow students
understand the material we learned in class.
24 Subjects: including calculus, chemistry, physics, geometry
...After learning the basic skills, application becomes very important. But the depth of understanding in the course by a student leads to a better prepared thinker on a higher level. You learn to
think for yourself, evaluate and not simply memorize.
13 Subjects: including calculus, statistics, algebra 2, geometry | {"url":"http://www.purplemath.com/Cupertino_calculus_tutors.php","timestamp":"2014-04-20T21:22:44Z","content_type":null,"content_length":"23998","record_id":"<urn:uuid:5b3df58c-d317-4d6d-a935-1274370fdbd3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Benchmarks Online
RSS Matters
How to Conduct Empirical Academic Research: A (very) General Guide
Link to the last RSS article here: Statistical Resources -- Ed.
By Dr. Jon Starkweather, Research and Statistical Support Consultant
This month’s article was motivated by an interaction with a student who reported being “stuck” on their dissertation and not knowing what to do next. A dissertation, or thesis for that matter, is not
an immovable object; nor is graduation an unattainable goal. The author of this article was reminded that some students spend a year or more between the completion of research methods and/or
statistics courses and the beginning of work on their dissertation. Recognition of the phenomena known as being stuck and the often lengthy time between methods/statistics courses and dissertation
work motivated the writing of this article. The article is meant to provide a very general framework, or guide, to the process of conducting empirical research (specifically a dissertation or
thesis). Keep in mind, if this process were extremely easy (and free), everyone would have an advanced degree. A meaningful and successful study takes a great deal of effort, time, and resources
(e.g. caffeine, money, etc.).
Before Data Collection
There are several key ingredients which must be present in order for a meaningful study to be completed. The first, of course, is thought. A necessary step in conducting a study is careful, critical,
and repeated thought about what will be accomplished, why it is meaningful to accomplish it, and how it will be accomplished.
│ Side Note 1: Choosing a Dissertation Advisor. │
│ │
│ When choosing a dissertation advisor, make sure that person’s research interests are well matched with your own. It is preferred that your dissertation advisor be at least familiar with, if not │
│ an expert on, the domain in which you wish to conduct your study. If a prospective advisor has been doing research on the mating habits of the Great Blue Heron and you are interested in │
│ conducting research into the thermodynamics of the Gulf Stream current…then you might not get the support or advice you will likely need. Occasionally, you may also want to consider the values │
│ and beliefs of a prospective dissertation advisor. If a prospective advisor has been doing externally funded research on the efficient extraction of petroleum and natural gas reserves for the │
│ last 20 years and you are interested in conducting a study of the impacts of hydraulic fracturing on drinking water…then you may not get the support or advice you will need and perhaps you should │
│ choose a different advisor. │
Choosing a research topic is also a critical decision in the process and should not be taken lightly. If one chooses a topic in which one has no personal interest (i.e. intrinsic motivation), then
one is unlikely to be able to muster the self-discipline to work on the research when distractions are present. Equally important is choosing the scale or scope of the research. Passion is a great
asset because it motivates work, but passion can also lead to an overly ambitious study (i.e. one which cannot possibly be completed in the allotted time frame). When choosing a research topic, make
sure it meets the approval of any and all collaborators (e.g. a dissertation advisor). Peers and advisors are invaluable resources during the entirety of the research process; they can often point
out advantages and disadvantages for you. Do not be afraid to ask others the question: “Am I making sense?” The answer can only improve your project or increase your confidence.
Once a general area of interest, or topic, is decided upon, a thorough review of the literature should be conducted. Generally, the word ‘literature’ in this context refers to peer-reviewed academic
journal articles, with specific emphasis on empirical studies. Where should you look to find this literature, and whom could you consult? If you are working on a dissertation, your advisor should be
familiar with the resources you will need to turn to (e.g. what journals, electronic databases, societies/associations are likely to be oriented toward your topic). Also, remember that library
professionals (e.g. reference librarians) are experts you can contact to learn how and where to search for information. Becoming familiar with the literature will acquaint you with the concepts,
terms, measures/instruments, methods, and results related to your chosen topic. Becoming familiar with the research which has been completed on, or around, the topic will also allow you to transition
from an area of interest to a research question. The research question should be just that: a question, stated in lay terms (i.e. even people not associated with your topic, or even your field,
should be able to understand the question). The research question should be constructed in such a way that the research you conduct should answer that question. For example, do animals raised in zoos
suffer negative health effects due to lack of exercise or predation?
The research question should then flow naturally into formal statement of hypotheses. Again, concerted effort (i.e. thought) should be expended on developing the hypotheses. Often collaboration is
involved in the development of hypotheses. Hypotheses should focus on the strength and direction of expected effects. Keep in mind, formal hypotheses should not be confused with null and
alternative hypotheses. Formal hypotheses should be concise sentences which convey expected findings; for example, one might hypothesize that animals raised in zoos have on average significantly
greater body weight than similarly aged animals of the same species which were raised in the wild. Generally, a meaningful research project will have multiple formal hypotheses. Often they are
structured hierarchically, meaning a central thesis is conveyed in a main-effects hypothesis and subordinate hypotheses are used for narrower or lower-level effects of interest.
Once formal hypotheses have been constructed, the research design can be attacked. Research design includes determining how variables will be measured, what instruments (if any are necessary) will be
used, will you develop your own instruments or use existing ones, will random sampling (and/or random assignment) be employed, what procedures will be followed (pretest – posttest; experimental
manipulation, etc.), how will internal and external validity be achieved, etc. in order to gather the data necessary to test the hypotheses. In this context, the word ‘necessary’ refers to both the
amount of data and the appropriateness of the data. The ‘amount’ of data determines the power of the study and is commonly constrained by practical concerns such as time and funding. However, many
applications are available (e.g. G*Power3) for determining a priori sample size for a given design, desired power, and desired effect size. The ‘appropriateness’ of the data has two meanings. First,
obviously you need to collect data which will be meaningful for testing your hypotheses; for example, you are not going to measure animals’ weight with a thermometer. Second, ‘appropriateness’
refers to whether or not the data will adhere to the assumptions of a given analysis. For instance, a simple independent t-test (which is typically used to evaluate mean differences) requires a
categorical (i.e. factor) variable (e.g. animal sex; male or female) and a continuous or nearly continuous (i.e. numeric) variable (e.g. adult weight; kilograms). As another example, consider
studying the effects of chemotherapy on hair loss.
│ Side Note 2: New Data vs. Archival Data. │
│ │
│ New data in this context is defined as data you collect. Archival data is defined as existing data which someone else collected. There are benefits and costs associated with each. Generally, the │
│ main benefit of using archival data is that of time. The time associated with collecting archival data is drastically lower than the time associated with collecting new data. The primary benefit │
│ of collecting new data is control; meaning, you will have control over what is collected (i.e. how variables are measured and what the measurements represent). It is the opinion of this author │
│ that students conducting a dissertation should collect their own data and not rely upon archival data. Often, archival data is like the carrion of the research world: it has been picked over for │
│ years and likely has no undiscovered meaningful effects left in it.                                                                                                                                │
Here, you would find it beneficial to collect hair loss data by measuring the number of hairs per square inch of scalp, rather than simply rating hair loss as extreme, moderate, or slight (i.e. scale
of measurement is important). Clearly, there is a relationship between formal hypotheses, research design, and types of analysis. However, keep in mind; the data may not conform to expectations,
which means the initial analysis chosen may not be the analysis most appropriate once the data has been collected. Therefore, again, careful thought and collaboration should be exercised during the
consideration of design and choice of primary analysis, secondary analysis, and possibly alternative analytic techniques in case the data does not conform to assumptions (e.g. linearity). It is often
the case that a particular hypothesis and data combination can be addressed with more than one, and often several, statistical analyses. Therefore, it is important to consider the strengths and
weaknesses of alternative or competing research designs and statistical analyses.
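The a priori sample-size calculation mentioned above (the kind of computation G*Power3 automates) can also be scripted. The sketch below uses only Python's standard library and the normal approximation for a two-sided, two-group t-test; exact t-based tools such as G*Power report a slightly larger n, and the effect size of d = 0.5 is an invented illustration, not a value from the article:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample t-test
    detecting standardized effect size d (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = .80
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A "medium" effect (d = 0.5), alpha = .05, power = .80:
print(n_per_group(0.5))  # 63 per group under the normal approximation
```

For d = 0.5 this gives 63 per group; exact t-based calculations (as in G*Power) give a slightly larger figure, so treat the approximation as a planning estimate, not a final answer.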
Many questions will have to be addressed as you (and your advisor or collaborators) develop the design of the study. The following represent some likely questions to consider during this phase of the
process. Will you be attempting to identify mean/median differences and/or the strength and direction of relationships? Will you be modeling latent variables, manifest variables, or both? Will you be
using a covariance decomposition technique, a variance or components based technique, a qualitative technique or a …? Will you be taking a Frequentist or Bayesian approach to data analysis? Will you
be conducting a pilot study? Will you be doing simulations prior to data collection? Will you need Institutional Review Board (IRB) approval? Will you need approval from other institutions (e.g.
hospitals, schools, zoos, other universities)? Will your study be funded (e.g. grants)? Will you be handling sensitive information (e.g. health records)? Will you be collecting data from a vulnerable
population (e.g. children)? How will you safeguard the data and ensure it is kept confidential? Will you need to develop an Informed Consent form? Will your study involve any level of deception? If
gathering data from human participants, will they be compensated (e.g. paid money, given extra credit, etc.)? Will your participants (humans) or subjects (non-humans) be treated safely, ethically,
and respectfully? Of course they will, but you will still need to think about how they will be treated (e.g. will they benefit emotionally, physically, intellectually, and/or financially from
participation in your study?).
Once the topic has been chosen, the literature review completed, formal hypotheses formulated, research design and proposed analyses decided (by you and your collaborators/advisor), you should
prepare to propose the study in written and oral form. The proposal stage involves writing a formal proposal manuscript and presenting the proposed research, including all of the above information
(often the bulk of the manuscript is the literature review). For students, oral presentation of the written proposal will be conducted as a method of gaining approval from a dissertation committee to
proceed with the study. Students can find assistance with the process of writing by contacting the Writing Lab. Once the committee has approved the study, very few deviations should be made from what
was approved. If collecting new data, generally the next step would be IRB approval. Then, of course, data collection can proceed.
After Data Collection
Once the data has been collected, the first step will commonly be to convert the data into a stable electronic format. It is generally recommended that the data be preserved in the most basic format
possible, because versions of software and operating systems change over time and it may be the case that future versions are not capable of opening a particular file format. Aside from raw binary,
the basic text format (filename.txt) is the obvious choice, using one of the common delimiters (e.g. comma delimited, space delimited, tab delimited, etc.). If one is using a traditional paper
and pencil based survey, one can utilize the services of Data Management to have the paper surveys (or ScanTrons) digitized. If one is using a software program to enter the data (e.g. Microsoft
Office Excel), then it is strongly recommended that the data be converted into text (.txt) files to be preserved. The second benefit of preserving data in text file format is that all popular
statistical computing software is capable of opening text data files (for a comparison of statistical software, see here). This can be extremely important when multiple collaborators use different
software (e.g. one collaborator using Open Office Calc and SAS on a Mac, and one collaborator using Microsoft Office Excel and IBM SPSS on a Windows PC).
Next, the data will likely be imported into one of the common statistical software packages for analysis (of course, RSS staff strongly recommends using R). However, prior to conducting the primary
and secondary analysis; one should do thorough initial data analysis. Initial data analysis refers to a wide variety of procedures which allow the researcher to become intimately familiar with the
data (i.e. variable distributions, relationships, etc.). Initial data analysis ranges from rather mundane tasks, such as recoding/reverse-coding variables and reviewing histograms and bar charts for
every variable, to more complex tasks like evaluating multivariate outliers and missing data. Whole books have been written on the subject of missing values (e.g. Little & Rubin, 2002) because
missing values are an important issue for virtually every dataset collected. Initial data analysis should also include an evaluation of the relationships between each pair of variables, with
correlation matrices and scatterplot matrices commonly used. Testing the assumptions of planned parametric analyses should also be rigorously investigated (i.e. linearity, homoscedasticity, etc.). It
should be noted that in this discussion of initial data analysis, the use of graphs is repeatedly mentioned. Graphs are important because they can convey information more clearly than simple numeric
output; for example consider a five variable correlation matrix augmented with the same five variable relationships displayed in a scatterplot matrix:
For those interested, these two screen captures can be replicated using this script.
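The linked script was written for the software used in the original figures; as a rough, self-contained stand-in, a small correlation matrix can be computed from scratch in Python. The three variables and their values below are invented purely for illustration:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length numeric lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

weight = [1, 2, 3, 4, 5]          # invented toy data
height = [2, 4, 6, 8, 10]         # perfectly related to weight
age    = [5, 3, 4, 1, 2]          # negatively related to weight

data = {"weight": weight, "height": height, "age": age}
for name, a in data.items():      # print a small correlation matrix
    print(name, [round(pearson(a, b), 2) for b in data.values()])
```

A scatterplot matrix adds the visual check that a numeric matrix cannot: two pairs of variables can share the same r yet have very different shapes.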
Initial data analysis may also employ parametric statistics, nonparametric statistics, transformations, optimal scaling techniques, variable selection techniques, matching, propensity score analysis,
model comparison, etc. The point being made here is that initial data analysis is a necessary step, and one which requires critical thought, as well as time and effort – like all data analysis, it
requires the tenacity and curiosity of a very good detective.
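Because missing values affect virtually every dataset, a tiny sketch of two of the simplest strategies, listwise (complete-case) deletion and mean imputation, may help fix ideas. The data are invented, and these naive strategies are shown only for illustration; texts such as Little and Rubin (2002) cover principled alternatives:

```python
from statistics import mean

raw = [2.0, None, 4.0, 6.0]                   # invented data, one value missing

complete = [v for v in raw if v is not None]  # listwise deletion
fill = mean(complete)                         # mean of the observed values
imputed = [fill if v is None else v for v in raw]

print(mean(complete))   # estimate from complete cases only
print(imputed)          # note: mean imputation understates the variance
```

Even this toy example shows the trade-off: deletion discards information, while naive imputation manufactures artificially typical values.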
Primary, and secondary, analyses can commence once the initial data analysis is completed – although one may need to return to initial data analysis periodically during the course of alternative
analyses (i.e. if proposed primary analyses are replaced). Due to the extremely wide array of analyses one might employ, specific techniques will not be covered here.
│ Side Note 3: RSS Can Help. │
│ │
│ Of course, RSS can help with choices of research design and statistical analysis. However, it is important to remember that RSS staff will recommend and suggest; but it is ultimately the │
│ responsibility of the researcher to make decisions concerning what will be done. RSS has available literally walls full of books and articles related to research design and statistical analysis │
│ as well as the experience to be able to communicate the strengths and weaknesses of various choices. Please review our entire website (particularly the FAQ page), as well as last month’s article │
│ which dealt directly with statistical resources, prior to contacting us for a consultation. │
However, there are three key concepts which should be kept in mind while conducting primary analyses. First, virtually all inferential statistics are model based and with models comes the possibility
of model specification error. One could say there are two types of model specification error: errors of form and variable selection errors. Errors of form include specifying the wrong type of model,
such as imposing a linear model when an exponential model or quadratic model might be more appropriate. Variable selection errors are errors of inclusion and errors of omission (e.g. meaningless
variables in the model and meaningful variables left out of the model). Second, virtually all inferential statistics are based on some form of measurement and with measurement comes the possibility
of measurement error. Measurement error is more prevalent among the so-called soft sciences, as opposed to the hard sciences such as physics, biology, chemistry, etc.; however, measurement error
should be investigated and modeled or acknowledged when discovered. Third, inferential statistics are, by their very name and nature, used to make inferences from a sample to a population. In other
words, unless you are working with the entire population of interest, you are going to be computing or calculating sample statistics rather than population parameters. Therefore, sampling bias and/or
non-response should be investigated and reported.
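The first point, specification error of form, can be made concrete with a toy example: fitting a straight line (ordinary least squares, computed by hand) to data that are truly quadratic. The invented data below show the tell-tale symptom of a wrong functional form, a systematic sign pattern in the residuals rather than random scatter:

```python
x = [0, 1, 2, 3, 4]
y = [xi ** 2 for xi in x]                    # truly quadratic data

mx, my = sum(x) / len(x), sum(y) / len(y)
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))    # OLS slope for a straight line
intercept = my - slope * mx

residuals = [b - (intercept + slope * a) for a, b in zip(x, y)]
print(residuals)  # [2.0, -1.0, -2.0, -1.0, 2.0]: U-shaped, not random noise
```

The line "fits" in the least-squares sense, yet the residuals curve through positive, negative, and back to positive, exactly the diagnostic a residual plot would reveal during initial data analysis.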
Given the rapid expansion of sophisticated modern methods, the data analyst should be open to using such robust techniques as bootstrapping (i.e. bootstrap resampling), bagging (i.e. bootstrap
aggregation), and boosting (i.e. sequentially combining multiple models) to increase the precision and decrease the bias of statistical estimates. There are also modern sophisticated techniques that
allow for statistical control of so-called nuisance variables or confounding variables, such as nearest neighbor matching, balancing, random stratification, and propensity score analysis. It should
also be noted that there has been an expansion of optimization techniques in recent years, such that maximum likelihood, which is rather commonly known, has been joined by ant-colony optimization and
genetic optimization algorithms, both of which can be applied to certain situations with amazing speed and produce optimal results (i.e. optimize on the most probable estimate of a parameter). Also,
for particularly large datasets and associated complex computation, UNT’s High Performance Computing (HPC) center is available for jobs which require serious computing power.
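As a deliberately minimal illustration of bootstrap resampling, mentioned above, the sketch below computes a rough 95% percentile interval for a mean from a small invented sample; a real analysis would use more data and an established library:

```python
import random
from statistics import mean

random.seed(1)                        # fixed seed so the sketch is repeatable
sample = [4.1, 5.0, 5.4, 6.2, 4.8, 5.9, 5.1, 4.5]   # invented data

boot_means = sorted(mean(random.choices(sample, k=len(sample)))
                    for _ in range(2000))           # resample with replacement
lo, hi = boot_means[49], boot_means[1949]           # ~95% percentile interval

print(round(mean(sample), 2), (round(lo, 2), round(hi, 2)))
```

The appeal is that no distributional assumption is made about the mean; the sampling variability is estimated empirically from the data themselves.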
Of course, once the data has been analyzed and interpreted, it is time to write up the results and prepare the final presentation. Again, students can get assistance from the Writing Lab if they are
having difficulty with the writing process. Students should turn to their dissertation advisor for advice on formatting the manuscript. For example, some departments use the Modern Language
Association (MLA) style, some use the Chicago style, some use the American Psychological Association (APA) style, and still others use a style of their own creation or an amalgamation of several
styles. Students may also, at some point, want to contact the Graduate Reader in order to prepare their completed dissertation (or thesis) for submission to the Toulouse Graduate School. Another
thing to consider, when writing up an empirical research manuscript, is the journal in which one wishes to publish the results. Journals often have their own formatting
idiosyncrasies, and it is therefore a good idea to consult their web sites and review their submission guidelines well in advance of actually submitting a manuscript for review.
It is important to note that this article represents a very general guide to the conduct of empirical research and it is aimed more toward students conducting a dissertation than toward the
professional researcher. For students, it is important to note that your dissertation (or thesis) advisor should be able to offer you suggestions and guide your progress. However, not all questions
have easy or readily available answers; students should be proactive in seeking out information through any or all available sources. Do not expect your advisor (or anyone else) to do your work for
you. Completing a dissertation is hard work and should be a learning process. Remember, a meaningful study is one that contributes to a better understanding of the phenomena under investigation.
Lastly, a couple of sound bites of wisdom: Do not be afraid of your own ignorance; Albert Einstein once quipped something to the effect of: “if we already knew the answers, it would not be called
re-search.” Do not be afraid of non-significant results; as Thomas Edison once said, “I have not failed; I’ve just found 10,000 ways that won’t work!”
References, resources, and perhaps useful links.
Clark, M. (2007). What is statistics? Benchmarks: RSS Matters, September 2007. Available at: http://www.unt.edu/benchmarks/archives/2007/september07/rss.htm
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis (5^th ed.). Upper Saddle River, NJ: Prentice Hall. Chapter 2
Herrington, R. (2007). How long should my analysis take? Benchmarks: RSS Matters, July 2007. Available at: http://www.unt.edu/benchmarks/archives/2007/july07/rss.htm
Kirk, R. E. (1995). Experimental Design (3^rd ed.). Pacific Grove, CA: Brooks/Cole Publishing Company.
Little, R. J. A., & Rubin, D. B. (2002). Statistical Analysis with Missing Data (2^nd ed.). Hoboken, NJ: John Wiley and Sons, Inc.
Mertler, C. A., & Vannatta, R. A. (2002). Advanced and Multivariate Statistical Methods: Practical Application and Interpretation (2^nd ed.). Los Angeles, CA: Pyrczak Publishing. Chapter 3
Pedhazur, E. J. (1997). Multiple Regression in Behavioral Research (3^rd ed.). Crawfordsville, IN: R.R. Donnelley (for Wadsworth – Thomson Learning, Inc.). Chapter 3
Raykov, T., & Marcoulides, G. A. (2008). An Introduction to Applied Multivariate Analysis. New York: Routledge (Taylor & Francis Group). Chapter 3
Starkweather, J. (2011). Go forth and propagate: Book recommendations for learning and teaching Bayesian statistics. Benchmarks: RSS Matters, September 2011. Available at: http://web3.unt.edu/
Starkweather, J. (2011). Statistical resources. Benchmarks: RSS Matters, November 2011. Available at: http://web3.unt.edu/benchmarks/issues/2011/10/rss-matters
Tabachnick, B. G., & Fidell, L. S. (2001). Using Multivariate Statistics (4^th ed.). Needham, MA: Allyn & Bacon. Chapter 4 | {"url":"http://it.unt.edu/benchmarks/issues/2011/12/rss-matters","timestamp":"2014-04-21T01:59:50Z","content_type":null,"content_length":"42352","record_id":"<urn:uuid:9f37be30-9df9-48eb-9e1f-f41ce54281d1>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Disguised Set Theory "DST"
Zuhair Abdul Ghafoor Al-Johar zaljohar at yahoo.com
Tue Oct 4 04:58:58 EDT 2011
Dear F.Bjordal.
I think that t(a) represents a set with E-elements <0,a>,<1,Ua>,<2,UUa>,...
which is not provable in this theory yet, but even if we suppose
it exists then your set X is the set of all sets x such that
not x E x and not x E2 x and not x E3 x ..........
But by then X is simply the set of all sets, i.e. X=V,
since all sets in this theory have this property.
You said the following:
"Clearly xeX only if not XeTC(x)" which is false, and
your argument that x would be cyclic doesn't matter
since x can be cyclic (if what you mean is e-cyclic)
and yet it is not an Ei member of itself, V is an obvious
example. Mind you that all sets are hereditarily E-acyclic.
In the last line of your argument you said we derive
that X E X, which is false we derive X e X, which
is not a problem since X is V after all an indeed
it is E-acyclic and indeed it has itself as an e-element
of itself, no problem at all. In the line before it
you said suppose that X E X or X E2 X i.e. you meant X is E-cyclic
but in this theory there is no such X.
On Mon, 3 Oct 2011 05:20:30 +0200, Frode Bjordal wrote:
> Dear Zuhair,
> I believe the following may answer your query concerning
> the
> consistency of your suggested disguised set theory in the
> negative. In
> the following I presuppose that the readers have digested
> the
> terminology of the note you linked to in your message.
> Let ordered pairs be defined e.g. à la Kuratowski. Let Uz
> signify the union set of z. Let z′ signify the ordinal
> successor of z. Let ∅ signify the empty set {x:-x=x}. Let
> t(a) be the set provided by the comprehension
> {x:(y)((<∅,a>Ey & (u)(v)(<u,v>Ey=><u′,Uv>Ey))=>xEy)}.
> Let X be given by the comprehension
> {x:(n)(y)(<n,y>Et(x)=>-xEy)}.
> Clearly, xeX only if not XeTC(x). For if xeX and XeTC(x)
> then xeTC(x),
> and x would be cyclic. As xEX iff xeX and not XeTC(x), we
> have that
> xEX iff xeX. Suppose first that X is cyclic, i.e. XEX or
> XE(2)X or ...
> Then we derive in a finite number of steps that X is not
> cyclic.
> Suppose next that X is not cyclic. Then X fulfills the
> comprehension
> condition for X and we derive that XEX. So X is cyclic iff
> X is not
> cyclic according to the suggested set up.
> --
> Frode Bjørdal
> Professor of Philosophy
> IFIKK, Universitetet i Oslo
> www.hf.uio.no/ifikk/personer/vit/fbjordal/index.html
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2011-October/015864.html","timestamp":"2014-04-17T22:00:27Z","content_type":null,"content_length":"5551","record_id":"<urn:uuid:1b65a506-629d-4ffc-b9f7-c59fbb68d527>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kepler's Laws problems
1) Satellite X and Satellite Y orbit the earth. The distance between X and the earth is 8 times greater than the distance between Y and the earth. Using Kepler's Laws, the period of satellite X is
what factor times the period of satellite Y?
R is the distance between satellite Y and the earth
I used Kepler's third law:
T^2=R^3 for Y
T^2=(8R)^3 for X
So T for Y would be R^(3/2)
and T for X would be 22.6*R^(3/2)
So the factor is 22.6 right? But the answer key says "4.0" So am I right?
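A quick numeric check of the exponent arithmetic above (a hypothetical Python snippet, added only to verify the computation, not part of the original post):

```python
factor = 8 ** 1.5           # period ratio implied by an 8x distance ratio
print(round(factor, 1))     # 22.6, matching the work shown above
# For what it's worth, 8 ** (2/3) is about 4.0, so a "4.0" answer would fit
# a question asking for the distance ratio given a period ratio of 8 instead.
```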
2) There is a point between the earth and the moon where the net force of gravity on an object located at that point would be zero. I have no idea which formula to use on this problem, please help. | {"url":"http://www.physicsforums.com/showthread.php?t=98892","timestamp":"2014-04-17T18:34:25Z","content_type":null,"content_length":"19750","record_id":"<urn:uuid:1725169d-5a25-42fa-8300-36363437c2f0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haledon ACT Tutor
Find a Haledon ACT Tutor
...I work to the best of my ability to help all students access the knowledge covered in class. I work as a substitute teacher in three different districts. I can tutor students in math and help them
become better at the subject.
4 Subjects: including ACT Math, geometry, algebra 1, prealgebra
...In addition, I scored 98% on the final exam for the capstone math class at my high school. I tailor my teaching strategy to the individual: first identifying whether the pupil is an auditory,
visual, or kinesthetic learner; then applying wisdom from my 9+ years of experience in coaching precalculus. This method of education consistently yields positive learning outcomes.
32 Subjects: including ACT Math, reading, calculus, physics
...Sometimes diagrams help, sometimes showing how a new technique is a variation of an old one is more helpful. I will find presentations that unlock the mystery and fun of mathematics for you! I
will come to your home or meet you at a mutually convenient location (such as the library). I am happy to work with individuals or groups.
10 Subjects: including ACT Math, calculus, geometry, statistics
...I have taken a course devoted strictly to differential equations, as well as applied non-linear differential equations in engineering dynamics, and obtained an A+ in all of these courses. I am currently a
medical student at a top 20 medical school.
28 Subjects: including ACT Math, chemistry, calculus, physics
...My students include those from Hunter College High School, Stuyvesant, Bronx Science, Brooklyn Tech, and other private schools; many of them were referred by students and parents. I
have helped many students get into their dream schools or honors classes. I have two master's degrees (physics and math) and a very deep understanding of physics and math concepts.
12 Subjects: including ACT Math, calculus, physics, algebra 2
Nearby Cities With ACT Tutor
Allendale, NJ ACT Tutors
Fairfield, NJ ACT Tutors
Glen Rock, NJ ACT Tutors
Hawthorne, NJ ACT Tutors
Ho Ho Kus ACT Tutors
Midland Park ACT Tutors
North Haledon, NJ ACT Tutors
Paterson, NJ ACT Tutors
Pequannock ACT Tutors
Pequannock Township, NJ ACT Tutors
Prospect Park, NJ ACT Tutors
Totowa ACT Tutors
Totowa Boro, NJ ACT Tutors
Wayne, NJ ACT Tutors
Woodland Park, NJ ACT Tutors | {"url":"http://www.purplemath.com/Haledon_ACT_tutors.php","timestamp":"2014-04-21T05:22:36Z","content_type":null,"content_length":"23664","record_id":"<urn:uuid:c3e4e2cc-8c65-447f-ae1d-82b243d7f8c1>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimation of Synchronization Parameters using SAGE in a GNSS-Receiver
Antreich, F. and Esbri-Rodriguez, O. and Nossek, J.A. and Utschick, W. (2005) Estimation of Synchronization Parameters using SAGE in a GNSS-Receiver. In: Proceedings ION GNSS 2005, Long Beach, CA, USA, September 13-16, 2005.
Full text not available from this repository.
The quality of the data presented to the user in a GNSS (Global Navigation Satellite System)-receiver depends largely on the accuracy in the propagation delay estimation of the direct signal
(line-of-sight signal, LOSS). Under the presence of multipath signals, a standard navigation receiver that is designed to synchronize a single signal replica through conventional circuits (Delay-Lock
Loop, DLL) experiences an error in the pseudorange measurement, the so-called multipath error. For the current GPS C/A signal, this error can range from a few metres up to more than 100 metres. The
synchronization of a navigation signal is usually performed by a DLL, which basically implements an approximation of the maximum likelihood estimator (MLE). The problem which arises is that the order
of this estimator (the DLL) is chosen according to the assumption that only the LOSS is present. This means that this estimator tries to estimate the relative propagation delay of only one signal
replica. In case the LOSS is corrupted by several superimposed delayed replicas this estimator becomes biased, because of the change of the order of the incident estimation problem. Thus, in order to
perform synchronization in the presence of multipath corrupted signals we follow the approach of obtaining the MLE for estimation problems of higher order. Therefore, signal parameters of a number of
superimposed delayed replicas have to be estimated jointly. As this leads to a multi-dimensional non-linear optimization problem the reduction of the complexity of this problem is the most important
issue to be solved in order to perform precise positioning in a navigation receiver. Several techniques have been proposed in the literature to solve the multipath problem in navigation receivers,
like the well known MEDLL [1]. Recently, interesting approaches like in [2] and in [3] have appeared. The first applies the maximum likelihood principle to the delay estimation in the presence of
multipath and unintentional interference in an antenna array receiver, and the latter develops efficient multipath mitigation techniques (with low-complexity) in single antenna and array antenna
navigation receivers. In both works, a connection is made between the multipath estimation problem in navigation systems and the same problem in communication systems. In this work the potential of
the SAGE (Space-Alternating Generalized Expectation Maximization) algorithm for global navigation satellite systems in order to estimate synchronization parameters of the LOSS under the presence of
multipath signals is to be considered. The SAGE algorithm is a low-complexity generalization of the EM (Expectation Maximization) algorithm, which iteratively approximates the MLE. It breaks down the
multi-dimensional non-linear optimization problem which arises for the general maximum likelihood problem, which is usually too complex to be solved with reasonable effort, into problems of lower
dimensions. Due to this significant reduction of complexity and its fast convergence the SAGE algorithm has been successfully applied for parameter estimation (relative delay, incident azimuth,
incident elevation, Doppler frequency, and complex amplitude) in direct-sequence code-division multiple access systems (DS-CDMA) in mobile radio environments. This study discusses receivers with a
single antenna, and also points out the capabilities of the proposed techniques using multiple antennas (array processing), for the application in a GNSS environment. Whereas for the single antenna
case we estimate the complex amplitudes and the relative delays of the impinging waves, in the latter additionally the spatial signature (incident azimuth and incident elevation) is estimated. The
performance of the algorithm is assessed by computer simulations using a simple spatial channel model and a model for the aeronautical multipath navigation channel (European Space Agency, ESA: "
Navigation signal measurement campaign for critical environments"). In order to describe the behaviour of the SAGE algorithm classical concepts like the RMSE (root mean square error) and the
CRLB (Cramer-Rao lower bound) are employed. On the other hand simulations with the end-to-end simulator for satellite navigation systems NAVSIM developed by the German Aerospace Center (DLR) are made
in order to assess the performance of the SAGE algorithm compared to the tracking performance of a conventional navigation receiver with a single antenna (non-coherent DLL, narrow correlator,
Costas-Loop used as PLL). Furthermore, we discuss critical aspects which have to be considered using SAGE, like the initialisation problem or its complexity, and we propose an approach to an easy
implementation. The results of the performed computer simulations and discussion indicate that the SAGE algorithm has the potential to be a very powerful high-resolution method to successfully
estimate parameters of impinging waves for navigation systems. The presented approach to synchronization in GNSS-receivers has proven to be a promising method to efficiently combat multipath for
navigation applications due to its good performance, fast convergence, and low complexity. [1] R. D. J. Van Nee, J. Siereveld, P. Fenton, and B. R. Townsend, " The Multipath Estimating Delay
Lock Loop: Approaching Theoretical Accuracy Limits", Proc. IEEE Position, Location Navigation Symp., pp. 246-251, Apr. 1994. [2] Gonzalo Seco, "Antenna Arrays for Multipath and Interference
Mitigation in GNSS Receivers", Ph.D. thesis, Department of Signal Theory and Communications, Universitat Politecnica Catalunya, 2000. [3] Jesus Selva Vera, "Efficient Mitigation in
Navigation Systems", Ph.D. thesis, Department of Signal Theory and Communications, Universitat Politecnica Catalunya, 2004.
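To make the space-alternating idea concrete, here is a toy numerical sketch (an editorial illustration, not code from the paper): a triangular pulse stands in for the receiver's code correlation function, and the parameters of two superimposed delayed replicas (the LOSS plus one multipath) are re-estimated one path at a time, each update being a 1-D delay search followed by a closed-form amplitude fit. The pulse shape, search grid, and initial guesses are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "correlation function": a triangular pulse standing in for the
# PRN code autocorrelation seen by the receiver (illustrative only).
t = np.linspace(0.0, 10.0, 500)

def pulse(tau):
    return np.maximum(0.0, 1.0 - np.abs(t - tau))

# Line-of-sight signal plus one delayed multipath replica, plus noise.
true_tau = [3.0, 4.5]          # relative delays
true_amp = [1.0, 0.5]          # amplitudes
y = sum(a * pulse(tau) for a, tau in zip(true_amp, true_tau))
y = y + 0.02 * rng.standard_normal(t.size)

grid = np.linspace(0.0, 10.0, 501)   # 1-D search grid for each delay
tau_hat = [2.5, 5.5]                 # rough initial guesses
amp_hat = [1.0, 1.0]

for _ in range(15):
    # SAGE-style space-alternating update: re-estimate one path at a
    # time while holding the other path's current estimate fixed.
    for k in range(2):
        other = sum(amp_hat[j] * pulse(tau_hat[j])
                    for j in range(2) if j != k)
        resid = y - other            # "complete data" surrogate for path k
        # 1-D search over delay, then a closed-form amplitude estimate.
        scores = [np.dot(resid, pulse(g)) ** 2 / np.dot(pulse(g), pulse(g))
                  for g in grid]
        tau_hat[k] = float(grid[int(np.argmax(scores))])
        p = pulse(tau_hat[k])
        amp_hat[k] = float(np.dot(resid, p) / np.dot(p, p))

print(tau_hat, amp_hat)
```

With well-separated replicas the alternating updates settle on the true delays and amplitudes within a few sweeps, replacing one joint 2-D search by repeated cheap 1-D searches; the algorithm described in the abstract additionally estimates azimuth, elevation, and Doppler per path.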
Document Type: Conference or Workshop Item (Paper)
Additional Information: LIDO-Berichtsjahr=2005,
Title: Estimation of Synchronization Parameters using SAGE in a GNSS-Receiver
Authors:
┌─────────────────────┬──────────────────────────────────┐
│ Authors             │ Institution or Email of Authors  │
├─────────────────────┼──────────────────────────────────┤
│ Antreich, F.        │ UNSPECIFIED                      │
│ Esbri-Rodriguez, O. │ UNSPECIFIED                      │
│ Nossek, J.A.        │ TU Munich                        │
│ Utschick, W.        │ TU Munich                        │
└─────────────────────┴──────────────────────────────────┘
Date: 2005
Journal or Publication Title: Proceedings ION GNSS 2005
Refereed publication: Yes
In ISI Web of Science: No
Status: Published
Keywords: GNSS-Receiver, SAGE, Synchronisation, Estimation
Event Title: ION GNSS 2005, Long Beach CA, USA, September 13-16, 2005
Event Location: Long Beach, CA, USA
Event Type: International Conference
Organizer: ION
HGF - Research field: Aeronautics, Space and Transport (old)
HGF - Program: Space (old)
HGF - Program Themes: W - no assignement
DLR - Research area: Space
DLR - Program: W - no assignement
DLR - Research theme (Project): W -- no assignement (old)
Location: Oberpfaffenhofen
Institutes and Institutions: Institute of Communication and Navigation
Deposited By: elib DLR-Beauftragter
Deposited On: 09 Oct 2005
Last Modified: 14 Jan 2010 19:39
Repository Staff Only: item control page | {"url":"http://elib.dlr.de/18705/","timestamp":"2014-04-19T03:02:37Z","content_type":null,"content_length":"39637","record_id":"<urn:uuid:387a7fc0-3af6-488f-bc6a-eff03c3d93f9>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Civil Engineering Archive | November 05, 2010 | Chegg.com
Please show calculation formulas, and explanation of this whole probability deal
A manufacturing firm is considering two mutually exclusive projects. Both projects have an economic service life of one year, with no salvage value. The first cost of Project 1 is $1,000 and the first cost of Project 2 is $800. The net revenue (given in PW) for each project is as follows:

Project 1 (net revenue given in PW):
  Probability   Revenue
  0.2           $2,000
  0.6           $3,000
  0.2           $3,500

Project 2 (net revenue given in PW):
  Probability   Revenue
  0.3           $1,000
  0.4           $2,500
  0.3           $4,500
Assume that both projects are statistically independent of each other.
a) If you make decision by maximizing the expected NPW, which project would you select?
b) If you also consider the variance of the projects, which project would you select?
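Parts (a) and (b) reduce to computing the expected value and variance of each project's NPW. A quick sketch (assuming, as the PW figures suggest, that NPW = PW revenue minus first cost):

```python
def npw_stats(first_cost, outcomes):
    """Expected NPW and variance, assuming NPW = PW revenue - first cost."""
    npw = [(p, rev - first_cost) for p, rev in outcomes]
    mean = sum(p * v for p, v in npw)
    var = sum(p * (v - mean) ** 2 for p, v in npw)
    return mean, var

# Probabilities and PW revenues as given in the problem statement.
m1, v1 = npw_stats(1000, [(0.2, 2000), (0.6, 3000), (0.2, 3500)])
m2, v2 = npw_stats(800,  [(0.3, 1000), (0.4, 2500), (0.3, 4500)])

print("Project 1:", m1, v1)
print("Project 2:", m2, v2)
# (a) maximize E[NPW]      -> compare m1 vs m2
# (b) also consider risk   -> compare v1 vs v2
```

This gives E[NPW] of about $1,900 vs. $1,850 and variances of about 240,000 vs. 1,852,500, so under this reading Project 1 is preferred both on expected NPW and, being far less risky, when variance is considered.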
Why santa claus can't exist...
Why Santa Can't exist
Sorry to say, but here's why. You must be over 12 to read this
1) No known species of reindeer can fly, but there are 300,000 species of organisms yet to be classified, and while most of these are insects and germs, this does not completely rule out flying
reindeer which only Santa has ever seen.
2) There are 2 billion children (defined as persons under 18) in the world; However, since Santa doesn't appear to handle Muslim, Hindu, Jewish, or Buddhist children, that reduces the workload down
to 15% of the original total - 378 million according to the Population Reference Bureau. At an average census rate of 3.5 children per household, that's only 91.8 million homes. One presumes that
there is at least one good child in each.
3) Santa has 31 hours of Christmas to work with, thanks to the different time zones and the rotation of the earth, assuming he travels east to west. This works out to 822.6 visits per second. That is
to say that for each Christian household with good children, Santa has 1/1000th of a second to park, hop out of the sleigh, jump down the chimney, fill the stockings, distribute the remaining
presents under the tree, eat whatever snacks have been left, get back up the chimney, get back into the sleigh, and move on to the next house. Assuming that each of these 91.8 million stops is
evenly distributed around the earth (which we know to be false but will accept for the purpose of these calculations), we are talking about .78 miles per household, a total trip of 75.5 million
miles, not counting stops to do what most of us must do at least once every 31 hours, plus eating, etc. This means that Santa's sleigh is moving at 650 miles per second, 3000 times the speed of
sound. For purposes of comparison, the fastest man-made vehicle, the Ulysses space probe, moves at a poky 27.4 miles per second. A conventional reindeer can run 15 miles per hour at the most.
4) The payload on the sleigh add another interesting element. Assuming that each child gets nothing more than a medium-size set of Lego building blocks (about two pounds), the sleigh is carrying
321,300 tons, not counting Santa, who is invariably described as overweight. On land, conventional reindeer can pull no more than 300 pounds. Even granting that flying reindeer exist (see point 1),
can fly very quickly (see point 2), and can pull ten times the normal amount, we cannot do the job with eight, or even nine, reindeer. We would need 214,200 reindeer. This increases the payload - not
counting the weight of the sleigh - to 353,430 tons. Again, for comparison, this is four times the weight of the Queen Elizabeth 2.
5) 353,000 tons travelling at 650 miles per second creates enormous air resistance. This would heat the reindeer up in the same fashion as a spacecraft re-entering the earth's atmosphere. The lead
pair of reindeer would absorb 14.3 quintillion joules of energy. Per second. Each. In short, they would burst into flame almost instantaneously, exposing the reindeer behind them, and creating
deafening sonic booms in their wake. The entire reindeer team would be vaporized within .00426 seconds. Santa, meanwhile, would be subjected to forces 17,500 times greater than normal gravity. A
250-pound Santa (which seems slim) would be pinned to the back of his sleigh by 4,315,015 pounds of force. In conclusion, if Santa ever did deliver presents on Christmas Eve, he's dead now. Merry Christmas!
"But Santa has to exist!" young timmy said, "who else has the ability to forge my mother and father and grandmother's writing??" | {"url":"http://www.angelfire.com/crazy2/coolsite0/humor/santa.html","timestamp":"2014-04-18T18:11:34Z","content_type":null,"content_length":"15129","record_id":"<urn:uuid:c3ae363a-e180-42d3-ba9b-a53aa646529d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00584-ip-10-147-4-33.ec2.internal.warc.gz"} |
VOL 98
Ars Combinatoria
Volume XCVIII, January, 2011
• Ahmad Mahmood Qureshi, "A New Version of Menages Problem", pp. 3-6
• Shengxiang Lv and Yanpei Liu, "A new bound on maximum genus of simple graphs", pp. 7-14
• Mingjing Gao and Erfang Shan, "The Signed Total Domination Number of Graphs", pp. 15-24
• Jin-Hua Yang and Feng-Zhen Zhao, "Values of a Class of Generalized Euler and Bernoulli Numbers", pp. 25-32
• A.P. Santhakumaran and S. Athisayanathan, "Weak Edge Detour Number of a Graph", pp. 33-61
• Shuhua Li, Hong Bian, Guoping Wang and Haizheng Yu, "Vertex PI indices of some sums of graphs", pp. 63-71
• Xiaoxin Song and Weiping Shang, "Roman domination in a tree", pp. 73-82
• Weiping Wang and Tianming Wang, "Matrices related to the idempotent numbers and the numbers of planted forests", pp. 83-96
• H. Cao and Y. Wu, "Simple Kirkman Packing Designs SKPD({3,4},v) with index two", pp. 97-111
• Yuan Xudong, Li Ting-ting and Su Jianji, "The Vertices of Lower Degree in Contraction-Critical k-Connected Graphs", pp. 113-127
• Emrah Kilic and Nurettin Irmak, "Binomial Identities Involving The Generalized Fibonacci Type Polynomials", pp. 129-134
• Suogang Gao and Jun Guo, "A construction of distance-regular graphs from subspaces in d-bounded distance-regular graphs", pp. 135-148
• Xi Yue, Yang Yuan-sheng and Meng Xin-hong, "Skolem-Gracefulness of k-Stars", pp. 149-160
• Ming-Ju Lee, Chiang Lin and Wei-Han Tsai, "On Antimagic Labeling For Power of Cycles", pp. 161-165
• Hong Bian, Fuji Zhang, Guoping Wang and Haizheng Yu, "Extremal polygonal cactus chain concerning k-independent sets", pp. 167-172
• Kenta Ozeki and Tomoki Yamashita, "Dominating cycles in triangle-free graphs", pp. 173-182
• Jian-Liang Wu and Yu-Wen Wu, "Edge colorings of planar graphs with maximum degree five", pp. 183-191
• Yunshu Gao, Jin Yan and Guojun Li, "On 2-Factors with Chorded Quadrilaterals in Graphs", pp. 193-201
• H. Roslan and Y.H. Peng, "Chromatic Uniqueness of Complete Bipartite Graphs With Certain Edges Deleted", pp. 203-213
• Iwona Wloch, "On kernels by monochromatic paths in D-join", pp. 215-224
• Rene Schott and George Stacey Staples, "Nilpotent Adjacency Matrices and Random Graphs", pp. 225-239
• Xuechao Li, "A new lower bound on critical graphs with maximum degree of 8 and 9", pp. 241-257
• Petros Hadjicostas and K.B. Lakshmanan, "Measures of disorder and straight insertion sort with erroneous comparisons", pp. 259-288
• Rao Li, "Hamilton-Connectivity of Claw-Free Graphs with Bounded Dilworth Numbers", pp. 289-294
• Sibel Ozkan, "Generalization of the Erdos-Gallai Inequality", pp. 295-302
• Lihua Feng, "Spectral radius of graph with given diameter", pp. 303-308
• Lihua Feng and Guihai Yu, "Erratum to: A note on the eigenvalues of graphs ", p. 309
• Mingqing Zhai, Ruifang Liu and Jinlong Shu, "On the (Laplacian) spectral radius of bipartite graphs with given number of blocks", pp. 311-319
• Bart De Bruyn, "The valuations of the near 2n-gon I[n]", pp. 321-336
• Renwang Su and Hung-Lin Fu, "Embeddings of Maximum Packings of Triples", pp. 337-351
• Hortensia Galeana-Sanchez and Rocio Sanchez-Lopez, "H-kernels in the D-join", pp. 353-377
• R.S. Manikandan, P. Paulraja and S. Sivasankar, "Directed Hamilton cycle decompositions of the tensor product of symmetric digraphs", pp. 379-386
• Hongyu Chen, Xuegang Chen and Xiang Tan, "On k-connected restrained domination in graphs", pp. 387-397
• Selvam Avadayappan and P. Santhi, "Some results on neighbourhood highly irregular graphs", pp. 399-414
• Yunshu Gao and Guojun Li, "On the Maximum Number of Disjoint Chorded Cycles in Graphs", pp. 415-422
• Jianqin Zhou, "An algorithm to find k-tight optimal double-loop networks", pp. 423-432
• Zheng Wenping, Lin Xiaohui, Yang Yuansheng and Yang Xiwu, "The Crossing Numbers of Cartesian Product of Cone Graph C[m] + K[l] with Path P[n]", pp. 433-445
• Takao Komatsu, "On the sum of reciprocal Tribonacci numbers", pp. 447-459
• Xiang-Feng Pan, Meijie Ma and Jun-Ming Xu, "Highly Fault-Tolerant Routings in Some Cartesian Product Digraphs", pp. 461-470
• Miao Lianying, "On the Independence Number of Edge Chromatic Critical Graphs", pp. 471-481
• Zhao Zhang and Fengxia Liu, "Isoperimetric Edge Connectivity of Line Graphs and Path Graphs", pp. 483-491
• Maggy Tomova and Cindy Wyels, "Pebbling Graph Products", pp. 493-499
• Paul Manuel and Indra Rajasingh, "Minimum Metric Dimension of Silicate Networks", pp. 501-510
• Jianchu Zeng and Yanpei Liu, "Genus Distributions For Double Pearl-Ladder Graphs", pp. 511-520
• Vito Abatangelo and Bambina Larato, "Complete Arcs In Moulton Planes Of Odd Order", pp. 521-527
NAG Library Chapter Introduction
x01 – Mathematical Constants
1 Scope of the Chapter
This chapter is concerned with the provision of mathematical constants required by other functions within the Library.
These constants are not functions, but they are defined in the header file <nagx01.h>.
2 Background to the Problems
Some Library functions require mathematical constants. These functions call Chapter x01 and thus lessen the number of changes that have to be made between different implementations of the Library.
3 Recommendations on Choice and Use of Available Functions
Although these functions are primarily intended for use by other functions they may be accessed directly by you.
Euler's constant, γ: nag_euler_constant (X01ABC)
4 Functions Withdrawn or Scheduled for Withdrawal | {"url":"http://www.nag.com/numeric/CL/nagdoc_cl23/html/X01/x01intro.html","timestamp":"2014-04-17T05:39:18Z","content_type":null,"content_length":"4948","record_id":"<urn:uuid:f3205f73-36d5-4d4e-a440-a3f423deda43>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Political Science 2703 > Ayala > Notes > Scopes_Final_Review.doc | StudyBlue
Scopes and Methods in Political Science Final Review
Michael Abbott, Spring 2011

Multiple Choice Questions:

Know what population means
A population is a set of units of analysis or elements. It can refer to anyone or anything, not only people. For example, a population may be all adults living in a geographical area such as a country or state, or working in an organization; it could also be a set of counties, corporations, government agencies, events, magazine articles, or years. Thus, one must carefully define the unit of analysis and the population so they are relevant to the research question (there must be linearity from the research question to the data inferences).

Know the definition of sample
A sample is any subset of units collected in some manner from a population, from which inferences are made; it should be generally reflective of the whole population of interest.

Know the definition of sample statistic
Sample statistics are used to approximate the corresponding population values (parameters or percentages); that is, we use sample statistics to estimate population characteristics.

Know the difference between a probability sample and a non-probability sample
A probability sample is a sample in which each element in the total population has a known probability of being included in the sample (representative/generalizable). This allows a researcher to calculate how accurately the sample reflects the population from which it is drawn (not 100%, but the higher the probability, the greater the chance the sample reflects the whole population and the lower the margin of error). In a non-probability sample, each element in the population has an unknown probability of being selected; this rules out the use of statistical theory to make inferences and increases the chance that the sample will be unrepresentative of the larger population it was drawn from, increasing the chance of error. See purposive (non-probability) samples, as well as convenience and snowball sampling, to get an idea of this type. This is why, in scientific fields, probability samples are much preferred, and there are several types of them: simple random samples, systematic samples, stratified samples (both proportionate and disproportionate), cluster samples, and telephone samples.

What happens to the standard error and dispersion with a smaller sample size? With a larger sample size?
Key to remember: the smaller the sample size, the larger the standard error and the wider the dispersion or distribution; the larger the sample size, the smaller the standard error and the narrower (more clustered) the distribution (the variability or range of sample estimates decreases).

Know the difference between a Type I error and a Type II error
(a) When you reject the null hypothesis and it is actually true, a Type I error has been committed: the incorrect or mistaken rejection of a true null hypothesis.
(b) When you fail to reject the null hypothesis when it is false (accepting a false null hypothesis because the result does not fall in the critical value region), a Type II error has been committed.

Be familiar with the central limit theorem (know what it means)
Central Limit Theorem: take independent random samples of size N from the target population, repeatedly, and calculate the proportion or average of each sample; then compare the resulting list of sample proportions or averages to the population's proportion or average. The more independent samples taken from the target population, the more closely the sample proportions or averages (means) will mirror, equal, or approach the corresponding true population parameter value, no matter the sample size.

Frequency tables: be familiar with levels of measurement
Nominal: variable values are unordered names or labels, like ethnicity, gender (depending on the coding; remember that a dichotomous coding makes it ordinal), or country of origin.
Ordinal: variable values are labels having an implied but unspecified order or ranking. Numbers may be assigned (coded) to categories to show the ordering, from high/greater/stronger to low/lesser/weaker (example: a scale of ideology).
Interval/Ratio: numbers are assigned to objects such that interval differences are constant across the scale; ratio scales additionally have a meaningful zero value (interval scales have no true or meaningful zero point). Examples: years of education, income.

Know dichotomous ordinal
One thing to mention: remember the variable "gender." Looking at the coding (0) Male, (1) Female, you may assume it is nominal because it involves gender and does not seem to take on any comparison attributes. However, if you are working with dichotomous data (codings of 0 and 1), it becomes a dichotomous ordinal-level measure. Typical dichotomous responses are defined as (0) no/don't like/oppose, (1) yes/like/support, because such 0/1 codings involve a comparison in which the former (e.g., male) is lesser than the latter (female).

Be able to know the central tendencies (how do you locate your mode?)
A measure of central tendency locates the middle or center of a distribution: often the average or mean, the median, or the mode.

The mean: the most familiar measure of central tendency. The mean or average is the summation of a batch of values of a variable divided by the total number of values. The mean is appropriate for interval and ratio (quantitative) variables, but is also sometimes applied to ordinal scales in which the categories are assigned numbers or codings. The mean should not be the only statistical indicator emphasized; it can lead to misleading results that overestimate or underestimate features of the sample. A few extreme (very large or small) values can affect or skew the numerical magnitude of the mean and other statistics.

The median: a measure of central tendency fully applicable to ordinal as well as interval and ratio data. The median (frequently denoted M) is a value that divides a distribution in half: half of the observations lie above the median, the other half below it. In other words, the median is found by locating the middle of the distribution. For an odd number of observations, arrange them in order from lowest to highest and count the same number of observations from the top and the bottom to find the middle. For example, with the seven values 3, 5, 6, 8, 9, 10, 13, count three from each end; the fourth (middle) value, 8, is the median (three values lie below 8 and three above it, so the median divides the distribution in half). For a distribution with many observations, an easy way to find the middle is the formula:

    mid_obs = (N + 1) / 2

This formula does not give the exact median value, but the position where it is found: above there are 7 cases, so (7 + 1) / 2 = 4, and the fourth value is 8. What about an even number of observations? To illustrate, Table 11-1 lists 12 European countries, arranged from smallest to largest: 9, 9, 10, 11, 11, 11, 14, 15, 21, 22, 28, and 35. Using the formula, (12 + 1) / 2 = 6.5, so we take the sixth and seventh values: (11 + 14) / 2 = 12.5, the median. The median is a resistant measure in that extreme values (outliers) do not overwhelm its computation. Figure 11-1 on page 376 shows the calculation of the median for a hypothetical example. When dealing with SPSS output, the median can be obtained through the frequency statistics or (most preferably) by looking at the Cumulative Percent column of the frequency distribution and locating the 50th-percentile value. Whereas averages tend to be overestimated or underestimated when there are extreme scores, medians counter that.

The mode: a common measure of central tendency, especially for nominal and categorical ordinal data. The mode or modal category is the category with the greatest frequency of observations, i.e., the most frequently occurring value. Table 1-4 on page 356 shows the distribution of responses to a party identification question from the 2004 NES; the modal (most frequent) answer was "independent-leaning Democratic," with 208 responses. The mode is helpful in describing the shape of distributions of all kinds of variables. When one category or range of values has many more cases than all the others, we describe the distribution as unimodal: it has a single peak. When there are two or more dominant peaks or spikes in the distribution, we call it a multimodal distribution.

Remember: the average/mean and the median are applicable to ordinal, interval, and ratio variables; the modal category or mode is applicable to nominal variables.

Under what column is the mode found, and what does it represent?
The mode is often found under the frequency column.

Know the difference between the percent and the valid percent column
The percentages of the valid responses are not the same as the total percentages, since the missing data are excluded from the valid percent column. Cumulative percentages add the individual percentages up; for example, 42% of the sample either "agree or neither agree nor disagree." If you are going to exclude the missing data, you must mention this. In this example, according to the valid percent, 29.4% of the (1,059) respondents with substantive (valid) responses agreed that a working woman can establish "just as warm and secure a relationship" with the family as a stay-at-home mom, whereas the total percent (including the missing data) was 25.7.

If given a data number, find that number and identify the column and data set it represents.

Given a correlation matrix, identify the independent and the dependent variable based on that correlation.

Non-Multiple-Choice (Essay) Questions:

- Given the same correlation matrix, give the correlation (strength and direction) for the independent/dependent variable pair.
- Given a bivariate model summary table, give the adjusted R-squared number and tell whether it is a good model fit or not (i.e., whether there is a lot of unexplained variance).
- Given a bivariate regression table, identify the independent and dependent variables (when identifying which is which, look at the footnote of the table).
- Frame a research question based on the two previous variables. Once it is framed in the proper format, provide a theory (an explanatory answer to the question you just gave, about a paragraph), then provide a hypothesis (a short directional statement about how X influences Y), then give the null hypothesis. The null hypothesis: "there is no relationship or statistical difference between the variables." Then give the unit of analysis (population, organization, country, etc.).
- Given the same bivariate regression table, find the slope number, then interpret what it means in your own words (e.g., "the more ..., the more likely ..."), and determine the relationship. Universal slope sentence: "For every one-unit change in X, there is a [slope-sized] increase or decrease in Y." If it is a negative slope, use the lowest coding; if a positive slope, use the highest coding.
- Thresholds: the t-statistic must be beyond +/- 1.96; the observed significance (sig.) must be less than .05 to reject the null hypothesis; the confidence interval must not contain zero between its lower and upper bounds to reject the null hypothesis. Always state that you are rejecting the null hypothesis at the 95% level and accepting the alternative hypothesis, or that by accepting the null hypothesis you are rejecting the alternative hypothesis. (Never contradict yourself: if you reject or accept the null on one criterion, you must do so on all.)
- We will be looking at the standardized beta; see the strength-and-correlation handout on Blackboard. Independently determine each variable's strength and direction, then compare the two and tell which is stronger, and whether each is negative or positive. Anything close to 0 is weak, around .50 is good, and (on a scale up to 1.00) .70 or above is strong.
- You will be given the threshold for statistical significance for two statistics: two observed t-statistics, two sigs, and two confidence intervals; tell whether the independent and dependent variables will be statistically significant.
- Given MAD frequency charts, like in the homework, find the mean, median, mode, etc., and calculate the mean and standard deviation.
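The last item above (mean, median, mode, and standard deviation from a frequency chart) can be worked with Python's standard library; the frequency table here is hypothetical:

```python
import math
import statistics

# Hypothetical frequency table (value: count), e.g. a 5-point scale.
freq = {1: 4, 2: 6, 3: 10, 4: 7, 5: 3}
values = [v for v, n in freq.items() for _ in range(n)]

mean = statistics.fmean(values)
median = statistics.median(values)   # middle of the distribution
mode = statistics.mode(values)       # category with greatest frequency
sd = statistics.stdev(values)        # sample standard deviation
se = sd / math.sqrt(len(values))     # standard error of the mean:
                                     # shrinks as the sample grows

print(mean, median, mode, sd, se)
```

Note how the mode (3) matches the largest count in the frequency column, and how the standard error depends on both the spread (sd) and the sample size, echoing the sample-size points in the review.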
Exterior derivative on almost complex manifolds
Let $M$ be a complex manifold, and $\omega$ be a $(p,q)$-form. Then $d\omega$ is an element of $\Omega^{p+1,q}(M)\oplus\Omega^{p,q+1}(M)$, so that $d = \partial + \overline{\partial}$, where $\partial$ and $\overline{\partial}$ are the Dolbeault operators.

Now let $M$ be almost complex. It is commonly stated that $d = \partial + \overline{\partial}$ only holds for complex manifolds, and not for almost complex manifolds. But why is this? After extending $d$ to also be complex linear, if $\omega = \sum_i f_i(z)\,dz^i$ is a $(1,0)$-form, I'd say that we would have $ d\omega = \sum_i df_i\wedge dz^i = \sum_{i,j} \frac{\partial f_i}{\partial z^j}dz^j\wedge dz^i + \frac{\partial f_i}{\partial \overline{z}^j}d\overline{z}^j\wedge dz^i,$ which clearly does not have a $(0,2)$-part. Why is this wrong?

On the other hand, let $X, Y$ be antiholomorphic tangent vectors; then $d\omega(X,Y) = X(\omega(Y)) - Y(\omega(X)) - \omega([X,Y]) = -\omega([X,Y])$. Since $M$ is not necessarily complex, $[X,Y]$ is not necessarily also antiholomorphic, so this term does not necessarily vanish. But $d\omega$, being a 2-form, can only give a nonzero result on such a pair if it has a $(0,2)$-part. From this I can see that it has to have one, but I can't see why this contradicts the calculation of $d\omega$ above.
ag.algebraic-geometry dg.differential-geometry
6 In your computation of $d \omega$ you are assuming that $M$ is complex, since you are using holomorphic coordinates $z_i$. For a general almost complex manifold it makes no sense to write $\
partial /\partial z$ and $\partial / \partial \bar{z}$, just because no holomorphic coordinates are available. There is just a complex structure on the tangent space, but to write $f(z)$ you need
such a structure to be integrable. – Francesco Polizzi Nov 23 '10 at 13:28
    For an explicit example of $d \ne \partial + \bar \partial$ consider $\mathbb{C}^n$ as $\mathbb{R}^{2n}$ with coordinates $x_i, y_i$. The tan. sp. has basis $\partial/\partial x_i, \partial/\partial y_i$. The usual complex structure of $\mathbb{C}^n$ uses the almost complex structure $i(\partial/\partial x_i) = \partial/\partial y_i$ (this determines $i$ on the other basis vectors using $i^2 = -1$). If instead you used another complex structure $J(\partial/\partial x_i) = -\partial/\partial y_i$ and then proceeded to use $J$ to define $(p,q)$ forms, then $d \ne \partial + \bar \partial$ – solbap Nov 23 '10 at 14:14
@solbap: Your example doesn't work. You just replaced the original J by its negative, which is still an integrable complex structure. Francesco and Eric gave the correct reason. – Spiro
Karigiannis Nov 23 '10 at 15:01
hmm yeah it seems I've just reversed the orientation of $\mathbb{C}^n$. I guess I was just thinking that the identity map $(\mathbb{C}^n, i) \to (\mathbb{R}^{2n}, J)$ doesn't satisfy $i \circ D(\mathrm{id}) = D(\mathrm{id}) \circ J$, so this doesn't give a holomorphic chart for $\mathbb{R}^{2n}$, but I guess $\overline{\mathbb{C}^n}$ does. – solbap Nov 23 '10 at 16:16
2 Answers
In writing $\omega$ you used a symbol $dz$ which doesn't make sense unless there is a holomorphic coordinate. Your $dz$ should really be an element of a frame of $(1,0)$-forms, which need not be closed (as you have assumed).
Just to follow up on Eric's correct answer: when you have an almost complex structure $J$, you can decompose $1$-forms into type $(1,0)$ and $(0,1)$. Locally, you can find a local basis $e^1, \ldots, e^n$ of $(1,0)$-forms, but these are not of the form $dz^1, \ldots, dz^n$. Indeed, as Eric mentioned, we do not have local holomorphic coordinates. Then $\bar e^1, \ldots, \bar e^n$ are a local basis of $(0,1)$-forms. Now if we compute $de^i$, it is a $2$-form, so it can be written in the form \begin{equation*} de^i = a^i_{jk}\, e^j \wedge e^k + b^i_{jk}\, e^j \wedge \bar e^k + c^i_{jk}\, \bar e^j \wedge \bar e^k. \end{equation*} The almost complex structure $J$ is integrable if and only if all the $c^i_{jk}$'s are zero.
Website Detail Page
published by the NASA Engineering Design Challenge
This instructional unit challenges students to build a model thrust structure that is as light as possible, yet strong enough to withstand the load of a "launch-to-orbit" three times.
Students first determine the amount of force needed to launch a model rocket to 1 meter, then they design, build, and test their own structure designs. In collaborative groups, they
revise their designs to increase the strength and reduce the weight of their structure. Materials are all readily available at hardware stores. Allow six class periods.
Editor's Note: This module, adaptable for grades 6-10, meets a broad range of national standards. It was originally developed by NASA Design Challenge to connect students in the classroom
with the challenges faced by NASA engineers as they design the next generation of spacecraft, habitat, and communications technologies. This archived lesson plan leads students through design and testing, the evaluation process, documentation of results, and final shared reports.
Subjects:
Classical Mechanics
- Applications of Newton's Laws
= Rockets
- General
- Linear Momentum
- Motion in Two Dimensions
= Projectile Motion
- Newton's Second Law
= Force, Acceleration
Education Practices
- Active Learning
= Problem Solving
General Physics
- Properties of Matter

Levels:
- High School
- Middle School
- Informal Education

Resource Types:
- Collection
- Instructional Material
= Activity
= Instructor Guide/Manual
= Laboratory
= Lesson/Lesson Plan
= Project
Appropriate Courses:
- Physical Science
- Physics First
- Conceptual Physics
- Algebra-based Physics
- AP Physics

Categories:
- Lesson Plan
- Activity
- Laboratory
- Assessment
- New teachers
Intended Users:
Access Rights:
Free access
Does not have a copyright, license, or other use restriction.
drag, engineering module, engineering problem, experiment, guided inquiry, inquiry-based learning, project, rocket launcher, rocket project, thrust
Record Cloner:
Metadata instance created May 1, 2012 by Caroline Hall
Record Updated:
October 18, 2012 by Caroline Hall
Last Update
when Cataloged:
November 18, 2007
AAAS Benchmark Alignments (2008 Version)
1. The Nature of Science
1B. Scientific Inquiry
• 6-8: 1B/M1b. Scientific investigations usually involve the collection of relevant data, the use of logical reasoning, and the application of imagination in devising hypotheses and
explanations to make sense of the collected data.
• 6-8: 1B/M2ab. If more than one variable changes at the same time in an experiment, the outcome of the experiment may not be clearly attributable to any one variable. It may not always
be possible to prevent outside variables from influencing an investigation (or even to identify all of the variables).
3. The Nature of Technology
3B. Design and Systems
• 6-8: 3B/M4a. Systems fail because they have faulty or poorly matched parts, are used in ways that exceed what was intended by the design, or were poorly designed to begin with.
• 6-8: 3B/M4b. The most common ways to prevent failure are pretesting of parts and procedures, overdesign, and redundancy.
4. The Physical Setting
4E. Energy Transformations
• 6-8: 4E/M2. Energy can be transferred from one system to another (or from a system to its environment) in different ways: 1) thermally, when a warmer object is in contact with a
cooler one; 2) mechanically, when two objects push or pull on each other over a distance; 3) electrically, when an electrical source such as a battery or generator is connected in a
complete circuit to an electrical device; or 4) by electromagnetic waves.
4F. Motion
• 6-8: 4F/M3a. An unbalanced force acting on an object changes its speed or direction of motion, or both.
8. The Designed World
8B. Materials and Manufacturing
• 6-8: 8B/M2. Manufacturing usually involves a series of steps, such as designing a product, obtaining and preparing raw materials, processing the materials mechanically or chemically,
and assembling the product. All steps may occur at a single location or may occur at different locations.
9. The Mathematical World
9B. Symbolic Relationships
• 6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease
steadily, increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase
or decrease in steps, or do something different from any of these.
11. Common Themes
11A. Systems
• 6-8: 11A/M2. Thinking about things as systems means looking for how every part relates to others. The output from one part of a system (which can include material, energy, or
information) can become the input to other parts. Such feedback can serve to control what goes on in the system as a whole.
• 9-12: 11A/H2. Understanding how things work and designing solutions to problems of almost any kind can be facilitated by systems analysis. In defining a system, it is important to
specify its boundaries and subsystems, indicate its relation to other systems, and identify what its input and output are expected to be.
• 9-12: 11A/H4. Even in some very simple systems, it may not always be possible to predict accurately the result of changing some part or connection.
11B. Models
• 9-12: 11B/H5. The behavior of a physical model cannot ever be expected to represent the full-scale phenomenon with complete accuracy, not even in the limited set of characteristics
being studied. The inappropriateness of a model may be related to differences between the model and what is being modeled.
12. Habits of Mind
12C. Manipulation and Observation
• 6-8: 12C/M3. Make accurate measurements of length, volume, weight, elapsed time, rates, and temperature by using appropriate devices.
• 6-8: 12C/M5. Analyze simple mechanical devices and describe what the various parts are for; estimate what the effect of making a change in one part of a device would have on the
device as a whole.
12D. Communication Skills
• 6-8: 12D/M6. Present a brief scientific explanation orally or in writing that includes a claim and the evidence and reasoning that supports the claim.
• 6-8: 12D/M9. Prepare a visual presentation to aid in explaining procedures or ideas.
Common Core State Standards for Mathematics Alignments
Ratios and Proportional Relationships (6-7)
Understand ratio concepts and use ratio reasoning to solve problems. (6)
• 6.RP.1 Understand the concept of a ratio and use ratio language to describe a ratio relationship between two quantities.
• 6.RP.3.a Make tables of equivalent ratios relating quantities with whole number measurements, find missing values in the tables, and plot the pairs of values on the coordinate plane.
Use tables to compare ratios.
• 6.RP.3.b Solve unit rate problems including those involving unit pricing and constant speed.
Analyze proportional relationships and use them to solve real-world and mathematical problems. (7)
• 7.RP.2.b Identify the constant of proportionality (unit rate) in tables, graphs, equations, diagrams, and verbal descriptions of proportional relationships.
• 7.RP.2.d Explain what a point (x, y) on the graph of a proportional relationship means in terms of the situation, with special attention to the points (0, 0) and (1, r) where r is the unit rate.
The Number System (6-8)
Apply and extend previous understandings of numbers to the system of rational numbers. (6)
• 6.NS.8 Solve real-world and mathematical problems by graphing points in all four quadrants of the coordinate plane. Include use of coordinates and absolute value to find distances
between points with the same first coordinate or the same second coordinate.
Expressions and Equations (6-8)
Represent and analyze quantitative relationships between dependent and independent variables. (6)
• 6.EE.9 Use variables to represent two quantities in a real-world problem that change in relationship to one another; write an equation to express one quantity, thought of as the
dependent variable, in terms of the other quantity, thought of as the independent variable. Analyze the relationship between the dependent and independent variables using graphs and
tables, and relate these to the equation.
Solve real-life and mathematical problems using numerical and algebraic expressions and equations. (7)
• 7.EE.4.a Solve word problems leading to equations of the form px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers. Solve equations of these forms fluently.
Compare an algebraic solution to an arithmetic solution, identifying the sequence of the operations used in each approach.
Understand the connections between proportional relationships, lines, and linear equations. (8)
• 8.EE.5 Graph proportional relationships, interpreting the unit rate as the slope of the graph. Compare two different proportional relationships represented in different ways.
Statistics and Probability (6-8)
Summarize and describe distributions. (6)
• 6.SP.4 Display numerical data in plots on a number line, including dot plots, histograms, and box plots.
• 6.SP.5.a Reporting the number of observations.
• 6.SP.5.c Giving quantitative measures of center (median and/or mean) and variability (interquartile range and/or mean absolute deviation), as well as describing any overall pattern
and any striking deviations from the overall pattern with reference to the context in which the data were gathered.
Common Core State Reading Standards for Literacy in Science and Technical Subjects 6—12
Key Ideas and Details (6-12)
• RST.6-8.3 Follow precisely a multistep procedure when carrying out experiments, taking measurements, or performing technical tasks.
• RST.9-10.1 Cite specific textual evidence to support analysis of science and technical texts, attending to the precise details of explanations or descriptions.
Integration of Knowledge and Ideas (6-12)
• RST.6-8.7 Integrate quantitative or technical information expressed in words in a text with a version of that information expressed visually (e.g., in a flowchart, diagram, model,
graph, or table).
• RST.6-8.8 Distinguish among facts, reasoned judgment based on research findings, and speculation in a text.
• RST.6-8.9 Compare and contrast the information gained from experiments, simulations, video, or multimedia sources with that gained from reading a text on the same topic.
Range of Reading and Level of Text Complexity (6-12)
• RST.6-8.10 By the end of grade 8, read and comprehend science/technical texts in the grades 6—8 text complexity band independently and proficiently.
Common Core State Writing Standards for Literacy in History/Social Studies, Science, and Technical Subjects 6—12
Text Types and Purposes (6-12)
• 2. Write informative/explanatory texts, including the narration of historical events, scientific procedures/ experiments, or technical processes. (WHST.6-8.2)
Research to Build and Present Knowledge (6-12)
• WHST.6-8.9 Draw evidence from informational texts to support analysis, reflection, and research.
• WHST.9-10.7 Conduct short as well as more sustained research projects to answer a question (including a self-generated question) or solve a problem; narrow or broaden the inquiry when
appropriate; synthesize multiple sources on the subject, demonstrating understanding of the subject under investigation.
This resource is part of a Physics Front Topical Unit.
Dynamics: Forces and Motion
Unit Title:
Newton's Second Law & Net Force
This archived lesson module challenges students to build a model spacecraft with certain constraints: as light as possible, yet strong enough to withstand three "launch-to-orbit" trips.
Kids will be exposed to engineering design, the physics of thrust and drag, and using systems analysis to solve problems. All materials are readily available at hardware or grocery
stores. Meets multiple national standards in science, mathematics, and language arts.
Link to Unit:
Citation:
NASA Engineering Design Challenge. NASA Engineering Design Challenges: Spacecraft Structures. Houston: NASA Engineering Design Challenge, November 18, 2007. http://er.jsc.nasa.gov/seh/main_EDC_Spacecraft_Structures.pdf
Six Sigma - Defect Metrics
Before we go ahead, let's define two terms:
• A Six Sigma defect is defined as anything outside of customer specifications.
• A Six Sigma opportunity is the total quantity of chances for a defect.
Here are various formulae to measure different metrics related to Six Sigma Defects
Defects Per Unit - DPU
Total Number of Defects
DPU = ------------------------
Total number of Product Units
The probability of getting 'r' defects in a sample having a given dpu rate can be predicted with the Poisson Distribution.
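As a minimal illustration (a sketch, not part of the original tutorial), the Poisson formula P(r) = e^(-DPU) x DPU^r / r! can be computed directly:

```python
from math import exp, factorial

def poisson_prob(r, dpu):
    """Probability of seeing exactly r defects on a unit, given the DPU rate."""
    return exp(-dpu) * dpu ** r / factorial(r)

# Example: with DPU = 0.5, the chance a unit has zero defects
# is e^(-0.5), roughly 0.61.
p_zero = poisson_prob(0, 0.5)
```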
Total Opportunities - TO
TO = Total Number of Product Units x Opportunities per Unit
Defects Per Opportunity - DPO
Total Number of Defects
DPO = ------------------------
Total Opportunities
Defects Per Million Opportunities - DPMO
DPMO = DPO x 1,000,000
Defects Per Million Opportunities or DPMO can be then converted to sigma values using Yield to Sigma Conversion Table given in Six Sigma - Measure Phase.
According to the conversion table
6 Sigma = 3.4 DPMO
How to find your Sigma Level
• Clearly define the customer's explicit requirements.
• Count the number of defects that occur.
• Determine the yield: the percentage of items without defects.
• Use the conversion chart to determine DPMO and Sigma Level.
Simplified Sigma Conversion Table
If your yield is: Your DPMO is: Your Sigma is:
30.9%     690,000   1.0
69.2%     308,000   2.0
93.3%     66,800    3.0
99.4%     6,210     4.0
99.97%    320       5.0
99.9997%  3.4       6.0
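The formulae and the simplified conversion table above can be put together in a short sketch (the function names and the threshold list are illustrative, not part of any Six Sigma standard):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects Per Million Opportunities: DPO x 1,000,000."""
    total_opportunities = units * opportunities_per_unit  # TO
    dpo = defects / total_opportunities                   # DPO
    return dpo * 1_000_000

# Simplified DPMO -> sigma thresholds, taken from the table above.
SIGMA_TABLE = [
    (690_000, 1.0),
    (308_000, 2.0),
    (66_800, 3.0),
    (6_210, 4.0),
    (320, 5.0),
    (3.4, 6.0),
]

def sigma_level(dpmo_value):
    """Return the highest sigma level whose DPMO threshold the process meets."""
    level = 0.0
    for threshold, sigma in SIGMA_TABLE:
        if dpmo_value <= threshold:
            level = sigma
    return level

# Example: 34 defects across 1,000 units with 10 opportunities per unit
# gives 3,400 DPMO, which falls between the 4 and 5 sigma rows.
d = dpmo(34, 1_000, 10)
level = sigma_level(d)
```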
class sklearn.preprocessing.StandardScaler(copy=True, with_mean=True, with_std=True)¶
Standardize features by removing the mean and scaling to unit variance
Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later
data using the transform method.
Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance).
For instance, many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order. If a feature has a variance that is orders of magnitude larger than that of others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.
Parameters:

with_mean : boolean, True by default
    If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.

with_std : boolean, True by default
    If True, scale the data to unit variance (or equivalently, unit standard deviation).

copy : boolean, optional, default is True
    If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
Attributes:

mean_ : array of floats with shape [n_features]
    The mean value for each feature in the training set.

std_ : array of floats with shape [n_features]
    The standard deviation for each feature in the training set.
Methods:

fit(X[, y])
    Compute the mean and std to be used for later scaling.
fit_transform(X[, y])
    Fit to data, then transform it.
get_params([deep])
    Get parameters for this estimator.
inverse_transform(X[, copy])
    Scale back the data to the original representation.
set_params(**params)
    Set the parameters of this estimator.
transform(X[, y, copy])
    Perform standardization by centering and scaling.
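To make the centering-and-scaling concrete, here is a hand-rolled sketch of what fit plus transform compute for a single feature column (illustrative only; the real estimator works on 2-D arrays and stores the per-feature statistics for reuse on later data):

```python
def standard_scale(column):
    """Shift a feature to mean 0 and scale it to unit variance,
    mirroring StandardScaler's per-feature computation."""
    n = len(column)
    mean = sum(column) / n
    # Population variance (denominator n), matching scikit-learn's default.
    std = (sum((x - mean) ** 2 for x in column) / n) ** 0.5
    return [(x - mean) / std for x in column]

# Example column: after scaling it has mean 0 and unit variance.
scaled = standard_scale([1.0, 2.0, 3.0])
```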
Logic & Lateral Thinking Puzzles & Problems
1. DUMMY
The letters from the three words below can be taken apart, unscrambled and merged to form three separate words all of which are synonyms. Can you find them?
2. Unscramble
The king of conundrums lives in the MANLIEST CAGE
3. Something Fishy!?
How long does it take for you to solve weird riddles? Time is passing as we speak...tic...toc! This one involves absolutely no academia. If you don't solve it you might be ill. As far as difficulty
is concerned it's my easiest one. Can you solve this?
4. SO!
What color is her blouse?
5. Below there are sixteen numbers. Assuming that any three of the numbers may be drawn at random, what is the probability (to the nearest percent) that three numbers will be drawn whose sum equals
6. Judy is five times as old as Henry. In two years, she'll be three times as old, and in six years she'll only be twice as old. How old will Judy be in seven years?
Which one is the odd one out?
URN URA ARS RTH TER NUS
8. LET'S SPLIT
Maxwell Edison is studying a newly discovered hyperactive amoeba which multiplies at a highly accelerated rate. He places one such amoeba in a jar. After 15 seconds the amoeba splits. 15 seconds
later the two amoebae split. 15 seconds after that the four amoebae split and so on. After two hours the jar is halfway full. How long will it take to fill the jar completely?
9. GOOD SAMARITAN
463512=a divine service and 3/8 of me is a system or theory. What am I?
10. Below there are 36 numbers. Assuming that any three of the numbers may be drawn at random, what is the probability (to the nearest percent) that three numbers will be drawn whose sum equals 15?
11. Orgm lif sca means "work all day."
Habba sca flib means "car does work."
Flib clop orgm means "all she does."
What words would you use to say: "Does she work?" The order that you place the words in is unimportant - you only need to find the correct words to use.
12. There are 2 identical strings. If you light one of the strings at its end, it will take exactly one hour for it to finish burning completely. The string will not burn evenly - it is thicker in
some places, thinner in others. For example, the string may not be half consumed exactly 30 minutes from lighting it at one end. You have no other means of telling time, and you want to know when
exactly 45 minutes have passed. All that you have is a lighter and these 2 identical strings. What is the most accurate method you can use, given these conditions?
For the four puzzles below, pretend you are an alien who has managed to learn the English language, but you do not know what significance the days of the week have. On which day of the week would you
13. You would cook a meal.
14. You would get paid.
15. You would get married.
16. It would be unusually bright.
17. If there are 4 empty seats in a movie theatre, how many permutations are there for the number of ways 4 people could sit in these seats?
18. There are 10 socks of each of the following colors in a drawer: blue, green, red, yellow & white, for a total of 50 socks. If the socks are randomly distributed in the drawer (i.e. not in pairs
or any other grouping), & you are blindfolded, what is the minimum number of socks you must draw from the drawer in order to be certain you have at least 2 socks of the same color?
19. If you are in the same situation as in the preceding problem, how many socks must you draw from the drawer in order to be certain you have at least 2 socks of different colors?
20. If none of the following statements are true, who can we conclude broke the vase?
Mike: Sally broke the vase.
Tom: Mike will tell you who broke the vase.
April: Tom, Mike & I could not have broken the vase.
Chris: I did not break the vase.
Erik: Mike broke the vase, so Tom & April couldn't have.
Jim: I broke the vase, so Tom is innocent.
21. Make a word from boas that can be used to keep you clean.
22. A man & his family lay out blankets & lie down, watching the sky for hours, even though explosions can be heard nearby. Why?
Hint: The date is important.
23. A woman steps to the edge of a very high building, & as people look on, she leaps off, & falls several stories. The woman is not injured. Why?
Hint: The woman did not fall on cushions or any other type of softened surface, & was not wearing a parachute.
24. A man leaves home one night & drives over a mile to meet a friend for a drink. When the man arrives home, the clock shows a time only five minutes later than when he left. How is this possible?
Hint: There is nothing wrong with the clock, & it consistently shows the correct time.
25. A boy enters a room that is filled with adults. He is told by a man that the court has found that his parents have neglected & abused him, & he will be placed in foster care. However, the boy
sleeps in the same house with his parents that night & several nights after that. No further mention is made of his move to foster care. Why?
26. Three men enter a room filled with gas wearing gas masks. The men voluntarily remove their masks, & begin coughing heavily because of the gas. They do not put their masks back on. The men are not
suicidal, so why did they do this?
27. Spike, an adult, brings the paper to Mr. Hopkins every day. Spike is never paid for this. Why does he do this?
Hint: Spike does not have to bring the paper, but he does not do it entirely because he likes Mr. Hopkins.
28. Toby is celebrating his birthday with his friends & family at a restaurant. "I'd like to have a beer - the best you've got! Today is my sixteenth birthday," Toby says to the waiter. The
restaurant manager & several customers hear what Toby says, but he is still served a beer. Why?
Hint: Toby really is sixteen years old.
29. A woman bets her friends that she can grab the bare wire on a high voltage electric cable & not be injured. How could she possibly do this?
Hint: Electricity of extremely high voltage is flowing through the cable, & cannot be turned off. The cable cannot be cut or removed from the source of electricity.
30. The fastest runner in school bets a much slower runner that he can beat him in a sprint to a point that is 100 yards away from them. After considering for a minute, the slower runner agrees to
the bet, & wins the race. How did he do it?
Hint: Both students actually ran in the race.
31. Mark's friends & family throw a surprise party for him. Mark is divorced a few months after the party. Why?
Hint: The party was in a town in which Mark does not live.
32. Two trains, each two miles long, enter two one mile long tunnels that are two miles apart from one another on the same track. The trains enter the tunnels at exactly the same time. The first
train is going 5 miles/hour, and the second train is going 10 miles/hour. What is the sum of the lengths of the two trains that will protrude from the tunnels at the exact moment that they collide,
assuming that neither train changes its speed prior to collision? The trains are on the same track headed in opposite directions (i.e. directly toward one another).
33. You have a box that fits inside of a box that fits inside of a box that fits inside of a box that fits inside of a box, for a total of 5 boxes. Assume that no two boxes can fit inside of a box,
unless one is inside of the other (e.g. the two smallest boxes could not fit inside of the largest box, unless the smallest box was inside of the second smallest box), & the boxes cannot be altered
(e.g. folded, cut, or torn). Using only these 5 boxes, how many different arrangements are there to place a gift in the boxes, if the gift can only be inside of the smallest box that is being used?
Example: The gift in the second smallest box inside of the largest box would be 1 arrangement.
34. Solve the preceding problem for 6 boxes.
35. If the same functions are applied to reach the results in each of the three sets of numbers, find what number should replace the ? in the last set:
24 30 ?
36. You have 1,432 feet of fence that must be strung out in a straight line. A fence post must be placed for every 4 feet of fence, so how many fence posts will be needed?
37. If you take 7, then 17, & then 8 from me, you have 160. But if you take 6, then 17, then 8 from me, you have 170. Finally, if you take 1, then 4, then 1 from me, you have 762. What am I?
38. For each of the following equations, letters have been substituted for the numbers. This substitution is consistent throughout all 4 of the equations. Determine what number (from 0-9) is
represented by each of the 10 letters.
A. LFOH
B. LTEL + EMAO + LAHF MOST HOST
C. ELRO
D. OTTH + OLRF + LETH MORE FORE
39. I281B4
Determine which of the following letters & numbers completes the sequence above:
S 0 V Q U 22
40. Without writing anything or using any calculating device, tell me if there are more 2s or 8s to be found in all of the numbers from 1 to 50,000.
41. If 2 of the following statements are false, what chance is there that the egg came first? Round to the nearest whole percent. Note: If any part of a statement is false, then the entire statement
must be false.
A. The chicken came first.
B. The egg came first.
C. A is false, & B is true.
42. If everyone in Chinaville owns an even number of dishes, no one owns more than 274 dishes, & no 2 people own the same number of dishes, what is the maximum number of people in Chinaville?
43. Determine which of the following words does not belong:
peck rod feed grain gill
44. If each letter in the following equations represents a number from 1 through 9, determine what number each letter represents.
A. A+A+B+C = 13
B. A+B+C+D = 14
C. B+B+C+D = 13
45. Should the letter I be on the top or bottom row?
A H J K
B C D E F G L M N O P Q R S T U V W X Y Z
46. Complete each of the following statements by filling in each ____ with a word. Don't use reference materials on this one!
A. New York is the big ____
B. An ____ a day keeps the doctor away
C. George Washington cut down the ____ tree
D. As American as ____ pie
E. They say that rabbits have excellent vision because they eat ____
47. A little girl is in Missouri, & her mother is in California. The little girl is in an accident, & has to be rushed to a nearby hospital. The little girl is the daughter of the nurse who assists
her. How is this possible?
48. You have 8 marbles that weigh 1 ounce each, & 1 marble that weighs 1.5 ounces. You are unable to determine which is the heavier marble by looking at them. You have a weighing scale that consists
of 2 pans, but the scale is only good for 2 total weighings. How can you determine which marble is the heaviest 1 using the scale, & in 2 weighings?
49. A group of 4 people, Andy, Brenda, Carl, & Dana, arrive in a car near a friend's house, who is having a large party. It is raining heavily, & the group was forced to park around the block from
the house because of the lack of available parking spaces due to the large number of people at the party. The group has only 1 umbrella, & agrees to share it by having Andy, the fastest, walk with
each person into the house, & then return each time. It takes Andy 1 minute to walk each way, 2 minutes for Brenda, 5 minutes for Carl, & 10 minutes for Dana. It thus appears that it will take a
total of 19 minutes to get everyone into the house. However, Dana indicates that everyone can get into the house in 17 minutes by a different method. How? The individuals must use the umbrella to get
to & from the house, & only 2 people can go at a time (& no funny stuff like riding on someone's back, throwing the umbrella, etc.).
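An exhaustive search (spoiler) confirms that Dana's figure of 17 minutes is the best possible; the sketch below tries every possible sequence of pair-crossings and umbrella returns:

```python
from itertools import combinations

WALK = {"Andy": 1, "Brenda": 2, "Carl": 5, "Dana": 10}

def best_time(at_car):
    """Minimum minutes to move everyone from the car to the house, with the
    umbrella currently on the car side; a trip costs the slower walker's time."""
    if len(at_car) <= 2:
        return max((WALK[p] for p in at_car), default=0)
    best = float("inf")
    for pair in combinations(sorted(at_car), 2):     # who crosses next
        remaining = at_car - set(pair)
        over = max(WALK[p] for p in pair)
        at_house = set(WALK) - remaining             # everyone now at the house
        for returner in at_house:                    # who brings the umbrella back
            best = min(best, over + WALK[returner] + best_time(remaining | {returner}))
    return best

print(best_time(set(WALK)))   # 17
```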
50. You are in a room with 2 doors leading out. Behind 1 door is a coffer overflowing with jewels & gold, along with an exit. Behind the other door is an enormous, hungry lion that will pounce on
anyone opening the door. You do not know which door leads to the treasure & exit, & which door leads to the lion. In the room you are in are 2 individuals. The first is a knight, who always tells the
truth, & a knave, who always lies. Both of these individuals know what is behind each door. You do not know which individual is the knight, or which one is the knave. You may ask 1 of the individuals
exactly 1 question. What should you ask in order to be certain that you will open the door with the coffer behind it, instead of the hungry lion?
51. You & I come across 3 people, & each 1 is a knight, knave, or normal (normals sometimes tell the truth, & sometimes lie). Exactly 1 of them is a knight, 1 of them is a knave, & the other 1 is a
normal. They make the following statements:
A. I love cats.
B. C always tells the truth.
C. A hates cats.
If I bet you $20 that you could not correctly identify which 1 of these people is a knight, which 'horse' would you be wisest to bet on?
52. Four individuals made the following statements, & each 1 is a knight or a knave. Which ones are knaves, if any?
A. Hydroponics is a science that deals with fisheries.
B. D always tells the truth.
C. The primary colors in the spectrum are red, yellow, & blue.
D. C always tells the truth.
53. If you added together the number of 2's in each of the following sets of numbers, which set would contain the most 2's: 1-333, 334-666, or 667-999?
54. You have 3 baskets, & each 1 contains exactly 4 balls, each of which is of the same size. Each ball is either red, black, white, or purple, & there is 1 of each color in each basket. If you were
blindfolded, & lightly shook each basket so that the balls would be randomly distributed, & then took 1 ball from each basket, what chance is there that you would have exactly 2 red balls, and 1
non-red ball?
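Since each basket independently yields a red ball with probability 1/4, the 4 x 4 x 4 equally likely color outcomes can simply be enumerated (spoiler):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(4), repeat=3))    # 0 stands for red in each basket
favorable = sum(1 for o in outcomes if o.count(0) == 2)
probability = Fraction(favorable, len(outcomes))
print(favorable, "of", len(outcomes), "=", probability)
```

This matches the binomial count C(3,2) * (1/4)^2 * (3/4).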
55. 8 kips & 14 ligs can build 510 tors in 10 hours, & 13 kips & 6 ligs can build 492 tors in 12 hours. At what rates do kips & ligs build tors? Express your answers in tors per hour.
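This puzzle is a pair of linear equations in the two unknown rates: 8k + 14l = 510/10 and 13k + 6l = 492/12, with k and l in tors per hour. A few lines of exact arithmetic solve it by Cramer's rule (spoiler):

```python
from fractions import Fraction

a, b, e = 8, 14, Fraction(510, 10)    # 8k + 14l = 51
c, d, f = 13, 6, Fraction(492, 12)    # 13k + 6l = 41
det = a * d - b * c
k = (e * d - b * f) / det
l = (a * f - e * c) / det
print("kips:", k, "tors/hour; ligs:", l, "tors/hour")
```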
56. If a juggler juggles 4 objects, how many total throws must he or she make before the objects are returned to their original positions (i.e. the original 2 objects in each hand)? The juggler
starts out with 2 objects in each hand, & throws 1 object from 1 hand, then another object from the second hand, then the remaining object from the first hand, & so on. Except for the first throw for
each hand, there is a moment where the throwing hand no longer holds anything after each throw. You may wish to draw a diagram for this one.
57. A poor man wanted to smoke cigarettes, but did not have enough money to buy them. He found that if he collected cigarette butts, he could make a cigarette from every 5 butts found. He found 25
butts, so how many cigarettes could he smoke?
58. Having just picked some apples from my tree, I placed them in a basket, & took them around to my friends. I ate one, & then gave a third of the remaining apples to my friend Mike. I then drove to
Joe's home, but ate two apples along the way. I gave Joe half of the remaining apples. After Joe's I met Christy, & gave her 10 of the remaining apples, which left one apple. I ate this one later.
How many apples started out in the basket?
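Working the story forwards for every candidate starting count, and insisting that each gift comes out as a whole number of apples, isolates the answer (spoiler):

```python
from fractions import Fraction

def ends_with_one(n):
    apples = Fraction(n)
    apples -= 1                    # ate one
    gift = apples / 3              # a third to Mike
    if gift.denominator != 1:
        return False
    apples -= gift
    apples -= 2                    # ate two on the drive
    gift = apples / 2              # half to Joe
    if gift.denominator != 1:
        return False
    apples -= gift
    apples -= 10                   # ten to Christy
    return apples == 1

solutions = [n for n in range(1, 200) if ends_with_one(n)]
print(solutions)
```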
59. A man and his son were in an automobile accident. The man died in the accident, but his son was rushed to the hospital. Fortunately, the boy was saved by the doctor who operated on him. The boy
was the doctor's son. How is this possible?
60. In a certain lottery, thirty balls, each one numbered 1, 2, 3......30 are placed in a basket. The basket is shaken, and 5 of the balls are randomly drawn from the basket, and set side by side.
The result is a set of numbers in a particular order, such as 14, 26, 2, 9, and 17. If you purchased a ticket that had 5 such numbers in random order, what chance would you have of winning the lottery?
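The answer depends on a detail the wording leaves open: whether your ticket has to match the drawn numbers in order, or only as a set. Both counts are quick to compute:

```python
from math import comb, perm

unordered = comb(30, 5)   # distinct 5-number sets drawn from 30 balls
ordered = perm(30, 5)     # distinct ordered draws of 5 balls
print(unordered, ordered)
```

So the chance of winning is 1 in 142,506 if only the set matters, and 1 in 17,100,720 if the exact draw order must also match.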
61. Andy, Brian, Cedric, and Dave are an architect, a barber, a caseworker, and a dentist, but not necessarily in that order. Given the following facts, determine what each man's occupation is:
A. At least one, but not all of the men's names begin with the same first letter as their occupation.
B. The architect's name does not contain an r.
C. The barber and dentist each have names that share exactly one letter.
62. Three men make the following statements regarding a murder that they are suspected of. Two of the men are lying, & one of them is telling the truth. Exactly one of the men is guilty of the crime.
Is anyone definitely guilty or innocent? Which individual(s) is most likely to be guilty?
A. I didn't do it.
B. C did it.
C. A did it. | {"url":"http://www.puzz.com/1001/logic.htm","timestamp":"2014-04-18T03:34:03Z","content_type":null,"content_length":"24499","record_id":"<urn:uuid:a33bc507-b172-4f03-8b42-ab106a9651ad>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westmont, NJ Algebra 2 Tutor
Find a Westmont, NJ Algebra 2 Tutor
...In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems. In addition, I offer FREE ALL NIGHT email/phone support just
before the “big" exam, for students who pull "all nighters". One quick note about my cancellation policy...
14 Subjects: including algebra 2, physics, calculus, ASVAB
...While getting my Master's degree I worked with children with special needs as a Teacher's Assistant so I am comfortable and experienced in all types of learners. I am Pennsylvania state
certified to teach K-6. I am currently working as a 4th grade teacher.
15 Subjects: including algebra 2, reading, writing, geometry
...I have a bachelor's degree in secondary math education. During my time in college, I took one 3-credit course in Differential Equations. While I was studying, I worked in the Math Center at my
11 Subjects: including algebra 2, calculus, geometry, algebra 1
...As a teaching assistant for four years in graduate school and a tutor as an undergraduate, I have tutored various levels of math as well as chemistry. Concepts in pre-algebra, algebra 1, and
algebra 2 are necessary for proper equation manipulation in the sciences, especially in upper level cours...
9 Subjects: including algebra 2, chemistry, geometry, algebra 1
...I have learned through the years how to make math seem easy. I enjoy math a great deal and look forward to working with you.I have taught and tutored Algebra 1 in different capacities for over
5 years among other subjects. I am a certified in secondary mathematics by the State of Pennsylvania.
11 Subjects: including algebra 2, statistics, geometry, algebra 1
Related Westmont, NJ Tutors
Westmont, NJ Accounting Tutors
Westmont, NJ ACT Tutors
Westmont, NJ Algebra Tutors
Westmont, NJ Algebra 2 Tutors
Westmont, NJ Calculus Tutors
Westmont, NJ Geometry Tutors
Westmont, NJ Math Tutors
Westmont, NJ Prealgebra Tutors
Westmont, NJ Precalculus Tutors
Westmont, NJ SAT Tutors
Westmont, NJ SAT Math Tutors
Westmont, NJ Science Tutors
Westmont, NJ Statistics Tutors
Westmont, NJ Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Ashland, NJ algebra 2 Tutors
East Camden, NJ algebra 2 Tutors
East Haddonfield, NJ algebra 2 Tutors
Echelon, NJ algebra 2 Tutors
Ellisburg, NJ algebra 2 Tutors
Erlton, NJ algebra 2 Tutors
Haddon Township, NJ algebra 2 Tutors
Haddonfield algebra 2 Tutors
Middle City East, PA algebra 2 Tutors
Oaklyn algebra 2 Tutors
South Camden, NJ algebra 2 Tutors
West Collingswood Heights, NJ algebra 2 Tutors
West Collingswood, NJ algebra 2 Tutors
Westville Grove, NJ algebra 2 Tutors
Woodcrest, NJ algebra 2 Tutors | {"url":"http://www.purplemath.com/Westmont_NJ_Algebra_2_tutors.php","timestamp":"2014-04-17T21:38:41Z","content_type":null,"content_length":"24210","record_id":"<urn:uuid:8564c422-8cda-42d4-8758-333463359185>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differential equation from transfer function
How can I obtain the differential equation from the transfer function below? 1/(s^2 + 2s + 7)
This is the reverse process of the question of obtaining the LT form of the transfer function defined by a differential equation, which we did earlier.
$H(s)=\frac{1}{s^2+2s+7}$
So: $Y(s)=H(s)X(s)=\frac{1}{s^2+2s+7}X(s)$, hence $(s^2+2s+7)Y(s)=X(s)$.
Taking inverse Laplace transforms (assuming $y(0)=0$ and $y'(0)=0$): $y''(t)+2y'(t)+7y(t)=x(t)$
CB
Thanks, your help is much appreciated. I am trying to solve a second exercise the same way, but I got lost in the third line. I don't know where the numerator 10 should go from the second to the 3rd line:
H(s) = 10/((s+7)(s+8))
Y(s) = H(s)X(s) = 10/((s+7)(s+8)) X(s)
((s+7)(s+8)) Y(s) = X(s)
Thank you. Now I know how to move the numerator. I have this last one; I am stuck in line 3 because I'm in doubt whether I can combine s^3 + 8s^2 and 9s, or leave it as (s+2)/(s^3 + 8s^2 + 9s + 15):
Y(s) = H(s) X(s) = (s+2)/(s^3 + 8s^2 + 9s + 15) X(s)
(s^3 + 8s^2 + 9s + 15) Y(s) = (s+2) X(s)
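A footnote to the exchange above (not part of the original thread): clearing the denominator of H(s) is just polynomial expansion, and the numerator always ends up multiplying X(s) on the right, so for the second exercise the last line should read (s+7)(s+8)Y(s) = 10 X(s). A small sketch that expands the denominator by convolving coefficient lists:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, highest power first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Second exercise: H(s) = 10/((s+7)(s+8)).
print(poly_mul([1, 7], [1, 8]))   # [1, 15, 56] -> y'' + 15y' + 56y = 10x

# Third exercise: the denominator s^3 + 8s^2 + 9s + 15 is already expanded, so
# there is nothing to combine: (s^3 + 8s^2 + 9s + 15)Y(s) = (s+2)X(s) gives
# y''' + 8y'' + 9y' + 15y = x' + 2x (with zero initial conditions).
```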
Linear programming via a non-differentiable penalty function
, 1997
"... Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here
we consider problems with general inequality constraints (linear and nonlinear). We assume that first deriv ..."
Cited by 328 (18 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we
consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
, 1978
"... An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with
stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is descr ..."
Cited by 75 (11 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable
quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
- European Journal of Operational Research , 1995
"... This paper presents an analysis of the involvement of the penalty parameter in exact penalty function methods that yields modifications to the standard outer loop which decreases the penalty
parameter (typically dividing it by a constant). The procedure presented is based on the simple idea of makin ..."
Cited by 5 (0 self)
This paper presents an analysis of the involvement of the penalty parameter in exact penalty function methods that yields modifications to the standard outer loop which decreases the penalty
parameter (typically dividing it by a constant). The procedure presented is based on the simple idea of making explicit the dependence of the penalty function upon the penalty parameter and is
illustrated on a linear programming problem with the l1 exact penalty function and an active-set approach. The procedure decreases the penalty parameter, when needed, to the maximal value allowing
the inner minimization algorithm to leave the current iterate. It moreover avoids unnecessary calculations in the iteration following the step in which the penalty parameter is decreased. We report
on preliminary computational results which show that this method can require fewer iterations than the standard way to update the penalty parameter. This approach permits a better understanding of
the performance of exac...
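A toy, one-dimensional illustration of the exact-penalty behaviour described in this abstract (the construction below is a sketch of the general idea, not the paper's algorithm or test problem): minimize x subject to x >= 1 by minimizing the unconstrained l1 penalty P(x) = x + mu * max(0, 1 - x). For mu > 1 the unconstrained minimizer coincides with the constrained optimum; for mu < 1 the penalty is too weak:

```python
def penalty(x, mu):
    # l1 exact penalty for the one-constraint problem: min x  s.t.  x >= 1
    return x + mu * max(0.0, 1.0 - x)

def grid_argmin(mu, grid):
    return min(grid, key=lambda x: penalty(x, mu))

grid = [i / 100 for i in range(0, 301)]   # 0.00, 0.01, ..., 3.00
print(grid_argmin(5.0, grid))   # 1.0: penalty parameter large enough, penalty is exact
print(grid_argmin(0.5, grid))   # 0.0: penalty parameter too small, minimizer drifts away
```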
- IEEE Trans. Auto. Contr , 1995
"... Abstract — A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a
parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear program ..."
Cited by 4 (2 self)
Abstract — A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a
parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural
network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for finite time convergence of nonsmooth sliding mode dynamic systems to invariant sets.
The results are illustrated via numerical simulation examples. Index Terms—Invariant sets, linear programming, neural networks, nondifferentiable optimization, penalty functions, sliding modes. I. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3442427","timestamp":"2014-04-18T19:36:05Z","content_type":null,"content_length":"20438","record_id":"<urn:uuid:1c9d54ed-22b7-42fa-815b-6529aabb37eb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find a java program that calculates
May 9th, 2013, 10:16 AM
Zuhairi Abdullah
Find a java program that calculates
1) For linear function
a. x- and y-intercepts
2) For quadratic function
a. x- and y-intercepts
b. Vertex
--- Update ---
Guest... I really need help. This assignment must be submitted on Sunday, 12 May 2013.
May 9th, 2013, 10:18 AM
Re: Find a java program that calculates
May 9th, 2013, 10:48 AM
Zuhairi Abdullah
Re: Find a java program that calculates
Honestly, I am not taking a programming subject this semester, so I do not know how to solve this. Please help me. I tried to ask my friends, but they have the same problem as me.
May 9th, 2013, 11:08 AM
Re: Find a java program that calculates
Have you tried to hire a programmer to write the code for you?
This site is for helping programming students solve their programming problems, not to write code for people. | {"url":"http://www.javaprogrammingforums.com/%20object-oriented-programming/29369-find-java-program-calculates-printingthethread.html","timestamp":"2014-04-18T05:47:00Z","content_type":null,"content_length":"5089","record_id":"<urn:uuid:6994a840-1c78-416d-849d-90f4a527525c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
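For study purposes, the mathematics the assignment needs is compact: the intercepts come from setting y or x to zero, and the vertex of ax^2 + bx + c sits at x = -b/(2a). Below is a sketch in Python rather than the required Java, so the names and structure are illustrative only, and translating it is the actual exercise:

```python
import math

def linear_intercepts(a, b):
    """f(x) = a*x + b with a != 0: return (x_intercept, y_intercept)."""
    return (-b / a, b)

def quadratic_analysis(a, b, c):
    """f(x) = a*x^2 + b*x + c with a != 0: return (x_intercepts, y_intercept, vertex)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        roots = ()                                  # no real x-intercepts
    elif disc == 0:
        roots = (-b / (2 * a),)
    else:
        roots = ((-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a))
    vertex = (-b / (2 * a), c - b * b / (4 * a))
    return roots, c, vertex

print(linear_intercepts(2, 4))        # x-intercept -2.0, y-intercept 4
print(quadratic_analysis(1, -5, 6))   # roots (2.0, 3.0), y-intercept 6, vertex (2.5, -0.25)
```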
Finding Solutions To Diophantine Equations By Smell
Written by Mike James
Thursday, 06 June 2013
Yes - smell. Diophantine equations are just polynomial equations that use nothing but integers for their coefficients and solutions. They are very hard to solve.
We know that there is no general solution - this was proved by Matiyasevich in 1970, a result that settled Hilbert's tenth problem in the negative.
For example, find integers a,b and c that satisfy:
a^2 + b^2 - c^2 = 0
In this case a, b and c are just Pythagorean triples like 3, 4, 5, i.e. the sides of a right-angled triangle. Equally well known is the fact that the same equation with a power greater than two doesn't have any solutions in positive integers, because this is Fermat's Last Theorem, proved by Wiles in 1994.
As the solution space consists of a discrete set of integers, it seems fairly obvious to try some AI search techniques to solve the general equation and people have tried things
like the genetic algorithm, which represents potential solutions as genes and breeds new solutions by selecting breeding pairs according to fitness.
Now we have another approach from a team at Mumbai University - the Ant Colony Optimization algorithm. This works by allowing an "ant" to explore the solution space and leave a
pheromone trail behind for others to follow. The idea is that the pheromone is deposited according to the goodness of the ant's location and it also evaporates to allow new areas of
the space to be searched.
What makes searching for a Diophantine solution different is that you might well have a set of integers that get close to a solution but the neighboring integer solutions over- or
undershoot and so don't provide a solution. That is, some seemingly good locations in the search space are in fact very bad.
In this case the ants are set loose in the search space at random initial locations on an m-dimensional grid, where m is the number of unknowns. The quality of the location is
established by how close it comes to solving the equation. This is used to give the ant a quantity of pheromone which it distributes randomly among neighboring locations. Over time
the pheromone concentration decreases using a law that emulates evaporation. Of course, ants move toward concentrations of pheromone.
What is surprising about this procedure is that it not only works but it seems to work better than the genetic algorithm - sometimes by quite a lot.
So what is it about searching for integer solutions that makes Ant Colony Optimization work? Possibly it is the simple fact that near a solution there are a lot of very good
approximate solutions - consider changing one of the m variables in a solution by 1. This makes the problem more suitable for this sort of discrete "hill climbing" method.
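That observation is easy to demonstrate even without pheromones. The sketch below (a hand-rolled illustration, not the Mumbai team's algorithm) does plain greedy hill-climbing on |a^2 + b^2 - c^2| over a small integer grid, restarting from every grid point:

```python
from itertools import product

def badness(p):
    a, b, c = p
    return abs(a * a + b * b - c * c)

def descend(p, lo=1, hi=20):
    """Greedy hill-climbing: step to the best +/-1 neighbour while it improves."""
    while badness(p) > 0:
        candidates = []
        for i in range(3):
            for d in (-1, 1):
                q = list(p)
                q[i] += d
                if lo <= q[i] <= hi:
                    candidates.append(tuple(q))
        best = min(candidates, key=badness)
        if badness(best) >= badness(p):
            break                      # stuck at a local minimum
        p = best
    return p

solutions = set()
for start in product(range(1, 9), repeat=3):
    p = descend(start)
    if badness(p) == 0:
        solutions.add(tuple(sorted(p)))
print(sorted(solutions))   # Pythagorean triples reached by pure hill-climbing
```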
Whatever the reason, I doubt many mathematicians will think that "smelling" a solution in this sense has much beauty or elegance about it.
PHYS 110 Mechanics (1)
Newtonian dynamics, including kinematics, the laws of motion, gravitation, and rotational motion, are considered. The conservation laws for energy, momentum, and angular momentum, are presented along
with applications ranging from the atomic to the celestial. One laboratory meeting per week. NOTE: PHYS 110 and PHYS 120 are intended for both science and non-science majors. In PHYS 110 and PHYS
120, calculus concepts and techniques are introduced and taught as needed. No prior knowledge of calculus is necessary to undertake these courses. MNS; QL; Staff
PHYS 120 Heat, Waves, and Light (1)
Thermodynamics explores the connections between heat and other forms of energy, temperature, and entropy, with applications to engines, refrigerators, and phase transitions. Oscillatory behavior and
wave motion, with application to acoustic and optical phenomena. Geometric and wave optics, considering optical systems and the diverse phenomena associated with the wave nature of light. Techniques
from calculus are introduced and taught as needed. One laboratory meeting per week.MNS; QL; Staff
PHYS 130 Electricity and Magnetism (1)
This course utilizes the concept of "field" to explain the properties of static electric and magnetic forces. The behavior of dynamic electric and magnetic fields is studied and the connection
between the two is formulated in the form of Maxwell's equations, which unify the study of electricity, magnetism, and optics. The static and dynamic behaviors of fluids are also covered to introduce
concepts useful in understanding electrical circuits. Calculus is used. One laboratory meeting per week. MNS; Prereq : MATH 152; QL; Staff
PHYS 130A Electricity and Magnetism (Algebra-based) (1)
This course covers most of the topics in PHYS 130 but without calculus and in less depth. Additionally, the history and basic concepts of Quantum Physics are introduced, with an emphasis on how
Quantum Physics has changed our understanding of energy, light, and the atom. This course is intended for students not planning to pursue Physics, Chemistry, or other related fields. One laboratory
meeting per week. MNS; Credit cannot be earned for both PHYS 130A and PHYS 130; QL; Staff
PHYS 163 Physics of Music (1)
A survey of the physical principles involved in sound and musical instruments. How the properties of an instrument or room influence the perceived tone quality of sound or music. Analysis/synthesis
of the frequency components in musical sound. Coverage is primarily descriptive with the laboratory an important component. MNS; QL; Staff
PHYS 165 Physics of Sports (1)
In this course, physics principles will be used to analyze motion of objects and athletes in a variety of sports, including an analysis of proper technique. Approaches to this analysis will include
an introduction to Newtonian mechanics, fluid dynamics, the conservation of energy, momentum and angular momentum. Concepts will be developed through observation and laboratory experience. Specific
topics for analysis will be drawn from the interests of class participants.MNS; Prereq : satisfaction of the mathematics proficiency portion of the QL Key Competency requirement; QL; M.Shroyer;
PHYS 167 Astronomy (1)
How measurements (from naked-eye observations to the most modern techniques) and their analysis have led to our current understanding of the size, composition, history, and likely future of our
universe. Concepts and methodology developed through observations and laboratory exercises emphasizing simple measurements and the inferences to be drawn from them. Includes evening viewing sessions.
MNS; QL; Staff
PHYS 205 Modern Physics (1)
An introduction to the two major shifts in our view of physics (which have occurred since 1900), Einstein's Special Relativity and the wave-particle duality of nature. The course starts with a review
of key experiments which show that classical mechanics and electrodynamics do not provide a satisfactory explanation for the observed phenomena, and introduces the relativity and quantum theory which
provide such an explanation. Includes regular laboratory meetings. MNS; Prereq : PHYS 130 or PHYS 130A; and MATH 152; QL; Staff
PHYS 241 Introduction to Research (1)
Experiments and seminars emphasizing modern techniques and instrumentation in physical measurements. Student-selected examples in several areas of physics illustrate such techniques as noise
suppression, data handling and reduction, and instrumental interfacing. Introduction to literature search, error analysis, experimental design, and preparation of written and oral reports. MNS;
Prereq : any physics course numbered 200 or above, MATH 152, or permission of the instructor; O; QL; W; Staff
PHYS 242 Electronics (1)
An introduction to electronics surveying the three major areas: circuit analysis, analog and digital electronics. Topics include network theorems, AC circuit analysis, phasors, frequency response,
diodes, transistors, operational amplifiers, Boolean algebra, combinational and sequential logic, programmable logic devices, memory, analog-to-digital conversion and sensors. Constructing and
testing circuits in the laboratory is a major component of the course. Prereq : PHYS 130 or PHYS 130A; QL; Staff
PHYS 248 Teaching Assistant (1/2 or 1)
Prereq : Permission of instructor; May be graded S/U at instructor's discretion; Staff
PHYS 260 Engineering Mechanics: Statics (1)
Statics concerns the mechanics of non-moving structures. This problem-oriented course explores force and moment systems, distributed forces, trusses, cables and cable networks, friction and friction
machines, and the virtual work principle. The course is offered on an independent-study basis by arrangement with the instructor. Prereq : PHYS 312 or permission of the instructor; T.Moses;
PHYS 295 Special Topics (1/2 or 1)
Courses offered occasionally in special areas of Physics not covered in the usual curriculum.Staff
PHYS 300 Mathematical Physics (1)
An introduction to the methods of advanced mathematics applied to physical systems, for students in physics, mathematics, chemistry, or engineering. Topics include the calculus of variations, linear
transformations and eigenvalues, partial differential equations, orthogonal functions, and integral transforms. Physical applications include Hamilton's Principle, coupled oscillations, the wave
equation and its solutions, Fourier analysis. Prereq : MATH 152 and at least one other course in mathematics or physics numbered 200 or above; QL; Staff
PHYS 308 Optics (1)
Electromagnetic waves, refraction, geometric optics and optical instruments, polarization, interference and diffraction phenomena, special topics including lasers, holography, and nonlinear optics.
Prereq : PHYS 120 or permission of the instructor; QL; Staff
PHYS 310 Thermodynamics and Statistical Mechanics (1)
Elementary probability theory, thermodynamic relations, entropy, ideal gases, Gibbs distribution, partition function methods, quantum statistics of ideal gases, and systems of interacting particles,
with examples taken from lattice vibrations of a solid, van der Waals gases, ferromagnetism, and superconductivity. Prereq : PHYS 205; QL; Staff
PHYS 312 Classical Dynamics (1)
Simple harmonic motion (damped, driven, coupled), vector algebra and calculus, motion under a central force, motion of systems of particles, and Lagrangian mechanics. Prereq : PHYS 110 or permission
of the instructor; QL; Staff
PHYS 313 Classical Electromagnetism (1)
Electrostatics and electric current, magnetic fields, electromagnetic induction, and Maxwell's equations. Prereq : MATH 205 recommended; QL; Staff
PHYS 314 Quantum Physics (1)
Interpretation of atomic and particle physics by wave and quantum mechanics. Topics include solution of the Schrödinger Equation for one- and three-dimensional systems, Hilbert space, the hydrogen
atom, orbital and spin angular momentum, and perturbation theory. Prereq : PHYS 205 or permission of the instructor; QL; Staff
PHYS 316 Astrophysics (1)
A survey at an intermediate level of a variety of topics in astrophysics. Possible topics include: the classification of stars, the physics of their structure and life cycle; stellar pulsation; black
holes; the formation and dynamics of galaxies; cosmology. Prereq : PHYS 312 or permission of the instructor; QL; Staff
PHYS 340 Comprehensive Review of Physics (1/2)
An intensive, comprehensive review of physics, emphasizing the four major areas: Mechanics, Electricity & Magnetism, Quantum Mechanics, and Thermal-Statistical Physics. Coverage may include some
topics from Optics, Statistics, and laboratory practice. Prereq : junior standing or permission of the instructor; Staff
PHYS 341 Advanced Physics Laboratory (1/2)
Students will undertake experiments selected from atomic and quantum physics, optics and spectroscopy, condensed matter physics, and nuclear physics. Emphasis is on learning experimental techniques
and instrumentation used in different domains of physics. Course may be repeated once for credit. Prereq : PHYS 205 and 241, or permission of the instructor; Staff
PHYS 345 Seminar in Theoretical Physics: Analytical Mechanics (1/2)
Topics may include oscillations, non-linear oscillations and chaos, calculus of variations, Lagrangian and Hamiltonian mechanics, and rigid body dynamics. Prereq : PHYS 312; QL; Staff
PHYS 346 Seminar in Theoretical Physics: Electrodynamics (1/2)
Topics may include multipoles, Laplace's equation, electromagnetic waves, reflection, radiation, interference, diffraction, and relativistic electrodynamics. Prereq : PHYS 313; QL; Staff
PHYS 347 Seminar in Theoretical Physics: Quantum Mechanics (1/2)
Topics include Hilbert space, perturbation theory, density matrices, transition probabilities, propagators, and scattering. Prereq : PHYS 314; QL; Staff
PHYS 348 Teaching Assistant (1/2 or 1)
Prereq : Permission of instructor; May be graded S/U at instructor's discretion; Staff
PHYS 395 Special Topics (1/2 or 1)
Courses offered occasionally in special areas of Physics not covered in the usual curriculum.Staff
PHYS 400 Advanced Studies (1/2 or 1)
See College Honors Program. Staff | {"url":"http://www.knox.edu/offices/registrar/course-descriptions/physics.html","timestamp":"2014-04-18T23:15:57Z","content_type":null,"content_length":"37236","record_id":"<urn:uuid:6a9d4796-aad6-42f2-af2e-198c69bd27d3>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
Counting Problem!
August 20th 2013, 10:29 AM #1
Counting Problem!
How many different ways can you draw 9 cards out of a deck of 52 cards given that 4 are 1's.
My classmates and I have a debate on this one.
My answer is 1*1*1*1*[32C5] = 48C5. Is it correct?
Re: Counting Problem!
Nope, not correct. **EDIT: well actually it might be** But please clarify something - is the order of draw important or not? In other words is drawing cards 1,1,1,1,2,3,4,5,6 considered to be the
same or different than 6,5,4,3,2,1,1,1,1?
Last edited by ebaines; August 20th 2013 at 11:48 AM.
Re: Counting Problem!
No .. the order is not important.
Re: Counting Problem!
If order is not important then once you pull the four 1's (by the way, I assume you mean "aces," right?) you have 48 cards remaining. There are 48C5 possible combinations of the other 5 cards. So
you were correct, although I was confused by the "32C5" in your post - I guess that was a typo?
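As a numerical footnote to the thread (added, not from the original posts): with all four aces forced into the hand, the remaining 5 cards can be any combination of the other 48, giving 48C5 hands.

```python
from math import comb

hands = comb(48, 5)
print(hands)   # 1712304

# Sanity check: splitting all 9-card hands by how many aces they contain
# recovers the total number of 9-card hands from 52 cards.
assert sum(comb(4, a) * comb(48, 9 - a) for a in range(5)) == comb(52, 9)
```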
Re: Counting Problem!
August 20th 2013, 11:13 AM #2
August 20th 2013, 11:37 AM #3
August 20th 2013, 12:02 PM #4
August 20th 2013, 12:32 PM #5 | {"url":"http://mathhelpforum.com/statistics/221290-counting-problem.html","timestamp":"2014-04-18T17:02:37Z","content_type":null,"content_length":"39446","record_id":"<urn:uuid:b04e3301-db7b-42a8-b685-6f9e1a518ba7>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
SparkNotes: SAT Subject Test: Math Level 1: Relating Length, Surface Area, and Volume
Relating Length, Surface Area, and Volume
The Math IC tests not only whether you’ve memorized the formulas for the different geometric solids, but also whether you understand those formulas. The test gauges your understanding by asking you
to calculate the lengths, surface areas, and volumes of various solids. The Math IC will ask you about the relationship between these three properties. The Math IC includes two kinds of questions
covering these relationships.
Comparing Dimensions
The first way the Math IC will test your understanding of the relationship among the basic measurements of geometric solids is by giving you the length, surface area, or volume of different solids
and asking you to compare their dimensions. The math needed to answer comparing-dimensions questions isn’t that hard. But in order to do the math, you need to have a good grasp of the formulas for
each type of solid and be able to relate those formulas to one another algebraically. For example,
The surface area of a sphere is the same as the volume of a cylinder. What is the ratio of the radius of the sphere to the radius of the cylinder?
This question tells you that the surface area of a sphere and the volume of a cylinder are equal. A sphere's surface area is 4π(r_s)^2, where r_s is the radius of the sphere. A cylinder's volume is π(r_c)^2 h, where r_c is the radius of the cylinder and h is its height. Therefore, 4π(r_s)^2 = π(r_c)^2 h.
The question asks for the ratio between the radii of the sphere and the cylinder. This ratio is given by r_s/r_c. Now you can solve the equation 4π(r_s)^2 = π(r_c)^2 h for the ratio r_s/r_c: dividing both sides by 4π(r_c)^2 gives (r_s/r_c)^2 = h/4, so r_s/r_c = √h/2.
Changing Measurements
The second way the Math IC will test your understanding of the relationships among length, surface area, and volume is by changing one of these measurements by a given factor, and then asking how
this change will influence the other measurements.
When the lengths of a solid in the question are increased by a single constant factor, a simple rule can help you find the answer:
• If a solid’s length is multiplied by a given factor, then the solid’s surface area is multiplied by the square of that factor, and its volume is multiplied by the cube of that factor.
Remember that this rule holds true only if all of a solid’s dimensions increase in length by a given factor. So for a cube or a sphere, the rule holds true when just a side or the radius changes, but
for a rectangular solid, cylinder, or other solid, all of the length dimensions must change by the same factor. If the dimensions of the object do not increase by a constant factor—for instance, if
the height of a cylinder doubles but the radius of the base triples—you will have to go back to the equation for the dimension you are trying to determine and calculate by hand.
Example 1
If you double the length of the side of a square, by how much do you increase the area of that square?
If you understand the formula for the area of a square, this question is simple. The formula for the area of a square is A = s^2, where s is the length of a side. Replace s with 2s, and you see that
the area of a square quadruples when the length of its sides double: (2s)^2 = 4s^2.
Example 2
If a sphere’s radius is halved, by what factor does its volume decrease?
The radius of the sphere is multiplied by a factor of 1/2 (or divided by a factor of 2), and so its volume is multiplied by the cube of that factor: (1/2)^3 = 1/8. Therefore, the volume of the
sphere is multiplied by a factor of 1/8 (divided by 8), which is the same thing as decreasing by a factor of 8.
Example 3
A rectangular solid has dimensions x × y × z (these are its length, width, and height), and a volume of 64. What is the volume of a rectangular solid of dimensions (x/2) × (y/2) × z?
If this rectangular solid had dimensions that were all one-half as large as the dimensions of the solid whose volume is 64, then its volume would be (1/2)^3 × 64 = (1/8) × 64 = 8. But dimension z is
not multiplied by 1/2 like x and y. To answer a question like this one, you should use the volume formula for rectangular solids: Volume = l × w × h. It is given in the question that xyz = 64. So
(x/2) × (y/2) × z = (1/4)xyz = (1/4) × 64 = 16.
Differential equation system (Please HELP)

May 26th 2010, 02:43 PM #1
May 2010
In a few hours I have an exam in differential equations.
I really need help with these two differential equation systems (I am not sure how to solve them). Please help me!
3x + y' + 2y = 2*e^(2t) , x = x(t), y = y(t)
y1' = -2*y1 - 4*y2 + 1 + 4x
y2' = -y1 + y2 + (3/2)*x^2
Thank you in advance for any help!

June 2nd 2010, 09:30 AM #2
Senior Member
Mar 2010
Please insert this into the second equation.
6.8: Quadrilateral Classification
What if you were given a quadrilateral in the coordinate plane? How could you determine if that quadrilateral qualifies as one of the special quadrilaterals: parallelograms, squares, rectangles,
rhombuses, kites, or trapezoids? After completing this Concept, you'll be able to make such a determination.
Watch This
CK-12 Foundation: Chapter6QuadrilateralClassificationA
Ten Marks: Parallelograms in a Coordinate Plane
Ten Marks: Kites and Trapezoids
When working in the coordinate plane, you will sometimes want to know what type of shape a given shape is. You should easily be able to tell that it is a quadrilateral if it has four sides. But how
can you classify it beyond that?
First you should graph the shape if it has not already been graphed. Look at it and see if it looks like any special quadrilateral. Do the sides appear to be congruent? Do they meet at right angles?
This will give you a place to start.
Once you have a guess for what type of quadrilateral it is, your job is to prove your guess. To prove that a quadrilateral is a parallelogram, rectangle, rhombus, square, kite or trapezoid, you must
show that it meets the definition of that shape OR that it has properties that only that shape has.
If it turns out that your guess was wrong because the shape does not fulfill the necessary properties, you can guess again. If it appears to be no type of special quadrilateral then it is simply a
The examples below will help you to see what this process might look like.
Example A
Determine what type of parallelogram $TUNE$ is, given $T(0, 10)$, $U(4, 2)$, $N(-2, -1)$ and $E(-6, 7)$.
This looks like a rectangle. Let’s see if the diagonals are equal. If they are, then $TUNE$ is a rectangle.
$EU & = \sqrt{(-6 -4)^2 + (7-2)^2} && TN = \sqrt{(0 + 2)^2 +(10 + 1)^2}\\& = \sqrt{(-10)^2 + 5^2} && \quad \ \ = \sqrt{2^2 + 11^2}\\& = \sqrt{100 + 25} && \quad \ \ = \sqrt{4 + 121}\\& = \sqrt{125} &
& \quad \ \ = \sqrt{125}$
If the diagonals are also perpendicular, then $TUNE$ is a square.
$\text{Slope of}\ EU = \frac{7 - 2}{-6 - 4} = -\frac{5}{10} = -\frac{1}{2} \quad \text{Slope of}\ TN = \frac{10 - (-1)}{0-(-2)} = \frac{11}{2}$
The slopes of $EU$ and $TN$ are not opposite reciprocals, so the diagonals are not perpendicular. Therefore $TUNE$ is a rectangle, but not a square.
Example B
A quadrilateral is defined by the four lines $y=2x+1$, $y=-x+5$, $y=2x-4$ and $y=-x-5$. Is this quadrilateral a parallelogram?
To check if it is a parallelogram we have to check that it has two pairs of parallel sides. From the equations we can see that the slopes of the lines are $2$, $-1$, $2$ and $-1$. Since there are two pairs of lines with equal slopes, the quadrilateral has two pairs of parallel sides and is therefore a parallelogram.
Example C
Determine what type of quadrilateral $RSTV$ is, given $R(-5, 7)$, $S(2, 6)$, $T(5, -3)$ and $V(-4, 0)$.
There are two directions you could take here. First, you could determine if the diagonals bisect each other. If they do, then it is a parallelogram. Or, you could find the lengths of all the sides.
Let’s do this option.
$RS & = \sqrt{(-5-2)^2+(7-6)^2} && ST=\sqrt{(2-5)^2+(6-(-3))^2}\\& = \sqrt{(-7)^2+1^2} && \quad \ =\sqrt{(-3)^2+9^2}\\& = \sqrt{50}=5\sqrt{2} && \quad \ = \sqrt{90}=3\sqrt{10}$
$RV& =\sqrt{(-5-(-4))^2+(7-0)^2} && VT=\sqrt{(-4-5)^2+(0-(-3))^2}\\& = \sqrt{(-1)^2+7^2} && \quad \ =\sqrt{(-9)^2+3^2}\\& = \sqrt{50}=5\sqrt{2} && \quad \ =\sqrt{90}=3\sqrt{10}$
From this we see that there are two pairs of congruent adjacent sides: $RS = RV$ and $ST = VT$. Therefore, $RSTV$ is a kite.
Algebra Review: When asked to “simplify the radical,” pull all square numbers (1, 4, 9, 16, 25, ...) out of the radical. Above, $\sqrt{50}=\sqrt{25 \cdot 2}$; since $\sqrt{25}=5$, we have $\sqrt{50}=\sqrt{25 \cdot 2}=5\sqrt{2}$.
Example D
Is the quadrilateral $ABCD$ with vertices $A(-1, 5)$, $B(3, 3)$, $C(6, -4)$ and $D(2, -2)$ a parallelogram?
We have determined there are four different ways to show a quadrilateral is a parallelogram in the $x$-$y$ plane. Here we will show that one pair of opposite sides, $AB$ and $CD$, is both congruent and parallel. First find the lengths of $AB$ and $CD$:
$AB& =\sqrt{(-1-3)^2+(5-3)^2} && CD=\sqrt{(2-6)^2+(-2+4)^2}\\& = \sqrt{(-4)^2+2^2} && \quad \ \ =\sqrt{(-4)^2+2^2}\\& = \sqrt{16+4} && \quad \ \ =\sqrt{16+4}\\& = \sqrt{20} && \quad \ \ =\sqrt{20}$
So $AB = CD$. If we can also show that $AB \parallel CD$, then $ABCD$ is a parallelogram, because one pair of opposite sides is both congruent and parallel. Compare the slopes:
Slope of $AB = \frac{5-3}{-1-3}=\frac{2}{-4}=-\frac{1}{2}$ and slope of $CD = \frac{-2+4}{2-6}=\frac{2}{-4}=-\frac{1}{2}$.
Therefore $AB \parallel CD$, and since $AB$ and $CD$ are both congruent and parallel, $ABCD$ is a parallelogram.
A parallelogram is a quadrilateral with two pairs of parallel sides. A quadrilateral is a rectangle if and only if it has four right (congruent) angles. A quadrilateral is a rhombus if and only if it
has four congruent sides. A quadrilateral is a square if and only if it has four right angles and four congruent sides. A trapezoid is a quadrilateral with exactly one pair of parallel sides. An
isosceles trapezoid is a trapezoid where the non-parallel sides are congruent. A kite is a quadrilateral with two distinct sets of adjacent congruent sides. If a kite is concave, it is called a dart.
Guided Practice
1. A quadrilateral is defined by the four lines $y=2x+1$, $y=-2x+5$, $y=2x-4$ and $y=-2x-5$. Is this quadrilateral a rectangle?
2. Determine what type of quadrilateral $ABCD$ is: $A(-3, 3), \ B(1, 5), \ C(4, -1), \ D(1, -5)$.
3. Determine what type of quadrilateral $EFGH$ is: $E(5, -1), F(11, -3), G(5, -5), H(-1, -3)$.
1. To be a rectangle, a shape must have four right angles. This means that the sides must be perpendicular to each other. From the given equations we see that the slopes of the lines are $2$, $-2$, $2$ and $-2$. Since $2 \cdot (-2) = -4 \neq -1$, adjacent sides are not perpendicular, so the quadrilateral is not a rectangle.
2. First, graph $ABCD$. The quadrilateral looks like a trapezoid, so check whether $\overline{BC}$ and $\overline{AD}$ are parallel by finding their slopes.
Slope of $\overline{BC}=\frac{5-(-1)}{1-4}=\frac{6}{-3}=-2$
Slope of $\overline{AD}=\frac{3-(-5)}{-3-1}=\frac{8}{-4}=-2$
We now know $\overline{BC} \ || \ \overline{AD}$, so $ABCD$ is at least a trapezoid. To decide whether it is an isosceles trapezoid, compare the lengths of the non-parallel sides $AB$ and $CD$:
$AB & =\sqrt{(-3-1)^2+(3-5)^2} && CD = \sqrt{(4-1)^2+(-1-(-5))^2}\\& = \sqrt{(-4)^2+(-2)^2} && \quad \ \ = \sqrt{3^2+4^2}\\& = \sqrt{20}=2\sqrt{5} && \quad \ \ = \sqrt{25}=5$
$AB \neq CD$, so $ABCD$ is a trapezoid, but not an isosceles trapezoid.
3. We will not graph this example. Let’s find the length of all four sides.
$EF & = \sqrt{(5-11)^2+(-1-(-3))^2} && FG = \sqrt{(11-5)^2+(-3-(-5))^2}\\& = \sqrt{(-6)^2+2^2} && \quad \ = \sqrt{6^2+2^2}\\& = \sqrt{40}=2\sqrt{10} && \quad \ =\sqrt{40}=2\sqrt{10}$
$GH & = \sqrt{(5-(-1))^2+(-5-(-3))^2} && HE = \sqrt{(-1-5)^2+(-3-(-1))^2}\\& = \sqrt{6^2+(-2)^2} && \quad \ = \sqrt{(-6)^2+(-2)^2}\\& = \sqrt{40}=2\sqrt{10} && \quad \ =\sqrt{40}=2\sqrt{10}$
All four sides are equal. That means this quadrilateral is either a rhombus or a square. The difference between the two is that a square has four $90^\circ$ angles, which forces its diagonals to be congruent. So, compare the lengths of the diagonals $EG$ and $FH$:
$EG & = \sqrt{(5-5)^2+(-1-(-5))^2} && FH = \sqrt{(11-(-1))^2+(-3-(-3))^2}\\& = \sqrt{0^2+4^2} && \quad \ \ = \sqrt{12^2+0^2}\\& = \sqrt{16}=4 && \quad \ \ = \sqrt{144}=12$
The diagonals are not congruent, so $EFGH$ is a rhombus, not a square.
1. If a quadrilateral has exactly one pair of parallel sides, what type of quadrilateral is it?
2. If a quadrilateral has two pairs of parallel sides and one right angle, what type of quadrilateral is it?
3. If a quadrilateral has perpendicular diagonals, what type of quadrilateral is it?
4. If a quadrilateral has diagonals that are perpendicular and congruent, what type of quadrilateral is it?
5. If a quadrilateral has four congruent sides and one right angle, what type of quadrilateral is it?
Determine what type of quadrilateral $ABCD$ is for each set of vertices.
6. $A(-2, 4), B(-1, 2), C(-3, 1), D(-4, 3)$
7. $A(-2, 3), B(3, 4), C(2, -1), D(-3, -2)$
8. $A(1, -1), B(7, 1), C(8, -2), D(2, -4)$
9. $A(10, 4), B(8, -2), C(2, 2), D(4, 8)$
10. $A(0, 0), B(5, 0), C(0, 4), D(5, 4)$
11. $A(-1, 0), B(0, 1), C(1, 0), D(0, -1)$
12. $A(2, 0), B(3, 5), C(5, 0), D(6, 5)$
13. What type of quadrilateral is $SPCE$?
14. If $SR = 20$ and $RU = 12$, find $CE$.
15. Find $SC$ and $RC$.
Statistical Analysis: an Introduction using R/Chapter 2
Data is the lifeblood of statistical analysis. A recurring theme in this book is that most analysis consists of constructing sensible statistical models to explain the data that has been observed.
This requires a clear understanding of the data and where it came from. It is therefore important to know the different types of data that are likely to be encountered. Thus in this chapter we focus
on different types of data, including simple ways in which they can be examined, and how data can be organised into coherent datasets.
R topics in Chapter 2
The R examples used in this chapter are intended to introduce you to the nuts and bolts of R, so may seem dry or even overly technical compared to the rest of the book. However, the topics introduced
here are essential to understanding how to use R, so it is particularly important to grasp them. They assume that you are comfortable with the concepts of assignment (i.e. storing objects) and
functions, as detailed previously.
The simplest sort of data is just a collection of measurements, each measurement being a single "data point". In statistics, a collection of single measurements of the same sort is commonly known as
a variable, and these are often given a name^[1]. Variables usually have a reasonable amount of background context associated with them: what the measurements represent, why and how they were
collected, whether there are any known omissions or exceptional points, and so forth. Knowing or finding out this associated information is an essential part of any analysis, along with examination
of the variables (e.g. by plotting or other means).
Single variables in REdit
One of the most fundamental objects in R is the
, used to store multiple measurements of the same type (e.g. data variables). There are several different sorts of data that can be stored in a vector. Most common is the
numeric vector
, in which each element of the vector is simply a number. Other commonly used types of vector are
character vectors
(where each element is a piece of text) and
logical vectors
(where each element is either
). In this topic we will use some example vectors provided by the "datasets" package, containing data on States of the USA (see
R is an inherently vector-based program; in fact the numbers we have been using in previous calculations are just treated as vectors with a single element. This means that most basic functions in R
will behave sensibly when given a vector as a argument, as shown below.
state.area #a NUMERIC vector giving the area of US states, in square miles
state.name #a CHARACTER vector (note the quote marks) of state names
sq.km <- state.area*2.59 #Arithmetic works on numeric vectors, e.g. convert sq miles to sq km
sq.km #... the new vector has the calculation applied to each element in turn
sqrt(sq.km) #Many mathematical functions also apply to each element in turn
range(state.area) #But some functions return different length vectors (here, just the max & min).
length(state.area) #and some, like this useful one, just return a single value.
> state.area #a NUMERIC vector giving the area of US states, in square miles
[1] 51609 589757 113909 53104 158693 104247 5009 2057 58560 58876 6450 83557 56400
[14] 36291 56290 82264 40395 48523 33215 10577 8257 58216 84068 47716 69686 147138
[27] 77227 110540 9304 7836 121666 49576 52586 70665 41222 69919 96981 45333 1214
[40] 31055 77047 42244 267339 84916 9609 40815 68192 24181 56154 97914
> state.name #a CHARACTER vector (note the quote marks) of state names
[1] "Alabama" "Alaska" "Arizona" "Arkansas"
[5] "California" "Colorado" "Connecticut" "Delaware"
[9] "Florida" "Georgia" "Hawaii" "Idaho"
[13] "Illinois" "Indiana" "Iowa" "Kansas"
[17] "Kentucky" "Louisiana" "Maine" "Maryland"
[21] "Massachusetts" "Michigan" "Minnesota" "Mississippi"
[25] "Missouri" "Montana" "Nebraska" "Nevada"
[29] "New Hampshire" "New Jersey" "New Mexico" "New York"
[33] "North Carolina" "North Dakota" "Ohio" "Oklahoma"
[37] "Oregon" "Pennsylvania" "Rhode Island" "South Carolina"
[41] "South Dakota" "Tennessee" "Texas" "Utah"
[45] "Vermont" "Virginia" "Washington" "West Virginia"
[49] "Wisconsin" "Wyoming"
> sq.km <- state.area*2.59 #Arithmetic works on numeric vectors, e.g. convert sq miles to sq km
> sq.km #... the new vector has the calculation applied to each element in turn
[1] 133667.31 1527470.63 295024.31 137539.36 411014.87 269999.73 12973.31 5327.63
[9] 151670.40 152488.84 16705.50 216412.63 146076.00 93993.69 145791.10 213063.76
[17] 104623.05 125674.57 86026.85 27394.43 21385.63 150779.44 217736.12 123584.44
[25] 180486.74 381087.42 200017.93 286298.60 24097.36 20295.24 315114.94 128401.84
[33] 136197.74 183022.35 106764.98 181090.21 251180.79 117412.47 3144.26 80432.45
[41] 199551.73 109411.96 692408.01 219932.44 24887.31 105710.85 176617.28 62628.79
[49] 145438.86 253597.26
> sqrt(sq.km) #Many mathematical functions also apply to each element in turn
[1] 365.60540 1235.90883 543.16140 370.86299 641.10441 519.61498 113.90044 72.99062
[9] 389.44884 390.49819 129.24976 465.20171 382.19890 306.58390 381.82601 461.58830
[17] 323.45487 354.50609 293.30334 165.51263 146.23826 388.30328 466.62203 351.54579
[25] 424.83731 617.32278 447.23364 535.06878 155.23324 142.46136 561.35100 358.33202
[33] 369.04978 427.81111 326.74911 425.54695 501.17940 342.65503 56.07370 283.60615
[41] 446.71213 330.77479 832.11058 468.96955 157.75712 325.13205 420.25859 250.25745
[49] 381.36447 503.58441
> range(state.area) #But some functions return different length vectors (here, just the max & min).
[1] 1214 589757
> length(state.area) #and some, like this useful one, just return a single value.
[1] 50
Note that the first part of your output may look slightly different to that above. Depending on the width of your screen, the number of elements printed on each line of output may differ. This is the
reason for the numbers in square brackets, which are produced when vectors are printed to the screen. These bracketed numbers give the position of the first element on that line, which is a useful
visual aid. For instance, looking at the printout of state.name, and counting across from the second line, we can tell that the eighth state is Delaware.
You may occasionally need to create your own vectors from scratch (although most vectors are obtained from processing data in already-existing files). The most commonly used function for constructing
vectors is
, so named because it
oncatenates objects together. However, if you wish to create vectors consisting of regular sequences of numbers (e.g. 2,4,6,8,10,12, or 1,1,2,2,1,1,2,2) there are several alternative functions you
can use, including
, and the
c("one", "two", "three", "pi") #Make a character vector
c(1,2,3,pi) #Make a numeric vector
seq(1,3) #Create a sequence of numbers
1:3 #A shortcut for the same thing (but less flexible)
i <- 1:3 #You can store a vector
i
i <- c(i,pi) #To add more elements, you must assign again, e.g. using c()
i
i <- c(i, "text") #A vector cannot contain different data types, so ...
i #... R converts all elements to the same type
i+1 #The numbers are now strings of text: arithmetic is impossible
rep(1, 10) #The "rep" function repeats its first argument
rep(3:1,10) #The first argument can also be a vector
huge.vector <- 0:(10^7) #R can easily cope with very big vectors
#huge.vector #VERY BAD IDEA TO UNCOMMENT THIS, unless you want to print out 10 million numbers
rm(huge.vector) #"rm" removes objects. Deleting huge unused objects is sensible
> c("one", "two", "three", "pi") #Make a character vector
[1] "one" "two" "three" "pi"
> c(1,2,3,pi) #Make a numeric vector
[1] 1.000000 2.000000 3.000000 3.141593
> seq(1,3) #Create a sequence of numbers
[1] 1 2 3
> 1:3 #A shortcut for the same thing (but less flexible)
[1] 1 2 3
> i <- 1:3 #You can store a vector
> i
[1] 1 2 3
> i <- c(i,pi) #To add more elements, you must assign again, e.g. using c()
> i
[1] 1.000000 2.000000 3.000000 3.141593
> i <- c(i, "text") #A vector cannot contain different data types, so ...
> i #... R converts all elements to the same type
[1] "1" "2" "3" "3.14159265358979" "text"
> i+1 #The numbers are now strings of text: arithmetic is impossible
Error in i + 1 : non-numeric argument to binary operator
> rep(1, 10) #The "rep" function repeats its first argument
[1] 1 1 1 1 1 1 1 1 1 1
> rep(3:1,10) #The first argument can also be a vector
[1] 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1
> huge.vector <- 0:(10^7) #R can easily cope with very big vectors
> #huge.vector #VERY BAD IDEA TO UNCOMMENT THIS, unless you want to print out 10 million numbers
> rm(huge.vector) #"rm" removes objects. Deleting huge unused objects is sensible
Accessing elements of vectors
It is common to want to access certain elements of a vector: for example, we might want to use only the 10th element, or the first 4 elements, or select elements depending on their value. The way to
do this is to take the vector and append the indexing operator, [] (i.e. square brackets). If these square brackets contain
• A positive number or numbers, then this has the effect of picking those particular elements of the vector
• A negative number or numbers, then this has the effect of picking the whole vector except those elements
• A logical vector, then each element of the logical vector indicates whether to pick (if TRUE) or not (if FALSE) the equivalent element of the original vector^[3].
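The three kinds of index can be illustrated on a small toy vector (the vector x and its values below are invented purely for this example):

```r
# A toy numeric vector to index into
x <- c(10, 20, 30, 40, 50)

x[2]                                  # a positive number picks that element: 20
x[c(1, 3)]                            # several positive numbers pick those elements: 10 30
x[-1]                                 # a negative number drops that element: 20 30 40 50
x[c(TRUE, FALSE, TRUE, FALSE, TRUE)]  # a logical vector picks the TRUE positions: 10 30 50
```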
The use of logical vectors may seem a little complicated. However, they can be extremely useful, because they are the key behind using comparison operators. These can be used, for example, to
identify which US states are small, with an area less than (<) 10 000 square miles (as demonstrated below).
min(state.area) #This gives the area of the smallest US state...
which.min(state.area) #... this shows which element it is (the 39th as it happens)
state.name[39] #You can obtain individual elements by using square brackets
state.name[39] <- "THE SMALLEST STATE" #You can replace elements using [] too
state.name #The 39th name ("Rhode Island") should now have been changed
state.name[1:10] #This returns a new vector consisting of only the first 10 states
state.name[-(1:10)] #Using negative numbers gives everything but the first 10 states
state.name[c(1,2,2,1)] #You can also obtain the same element multiple times
###Logical vectors are a little more complicated to get your head round
state.area < 10000 #A LOGICAL vector, identifying which states are small
state.name[state.area < 10000] #So this can be used to select the names of the small states
> min(state.area) #This gives the area of the smallest US state...
[1] 1214
> which.min(state.area) #... this shows which element it is (the 39th as it happens)
[1] 39
> state.name[39] #You can obtain individual elements by using square brackets
[1] "Rhode Island"
> state.name[39] <- "THE SMALLEST STATE" #You can replace elements using [] too
> state.name #The 39th name ("Rhode Island") should now have been changed
[1] "Alabama" "Alaska" "Arizona" "Arkansas"
[5] "California" "Colorado" "Connecticut" "Delaware"
[9] "Florida" "Georgia" "Hawaii" "Idaho"
[13] "Illinois" "Indiana" "Iowa" "Kansas"
[17] "Kentucky" "Louisiana" "Maine" "Maryland"
[21] "Massachusetts" "Michigan" "Minnesota" "Mississippi"
[25] "Missouri" "Montana" "Nebraska" "Nevada"
[29] "New Hampshire" "New Jersey" "New Mexico" "New York"
[33] "North Carolina" "North Dakota" "Ohio" "Oklahoma"
[37] "Oregon" "Pennsylvania" "THE SMALLEST STATE" "South Carolina"
[41] "South Dakota" "Tennessee" "Texas" "Utah"
[45] "Vermont" "Virginia" "Washington" "West Virginia"
[49] "Wisconsin" "Wyoming"
> state.name[1:10] #This returns a new vector consisting of only the first 10 states
[1] "Alabama" "Alaska" "Arizona" "Arkansas" "California" "Colorado"
[7] "Connecticut" "Delaware" "Florida" "Georgia"
> state.name[-(1:10)] #Using negative numbers gives everything but the first 10 states
[1] "Hawaii" "Idaho" "Illinois" "Indiana"
[5] "Iowa" "Kansas" "Kentucky" "Louisiana"
[9] "Maine" "Maryland" "Massachusetts" "Michigan"
[13] "Minnesota" "Mississippi" "Missouri" "Montana"
[17] "Nebraska" "Nevada" "New Hampshire" "New Jersey"
[21] "New Mexico" "New York" "North Carolina" "North Dakota"
[25] "Ohio" "Oklahoma" "Oregon" "Pennsylvania"
[29] "THE SMALLEST STATE" "South Carolina" "South Dakota" "Tennessee"
[33] "Texas" "Utah" "Vermont" "Virginia"
[37] "Washington" "West Virginia" "Wisconsin" "Wyoming"
> state.name[c(1,2,2,1)] #You can also obtain the same element multiple times
[1] "Alabama" "Alaska" "Alaska" "Alabama"
> ###Logical vectors are a little more complicated to get your head round
> state.area < 10000 #A LOGICAL vector, identifying which states are small
[1] FALSE FALSE FALSE FALSE FALSE FALSE TRUE TRUE FALSE FALSE TRUE FALSE FALSE FALSE FALSE
[16] FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE TRUE
[31] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE TRUE
[46] FALSE FALSE FALSE FALSE FALSE
> state.name[state.area < 10000] #So this can be used to select the names of the small states
[1] "Connecticut" "Delaware" "Hawaii" "Massachusetts"
[5] "New Hampshire" "New Jersey" "THE SMALLEST STATE" "Vermont"
Although the [] operator can be used to access just a single element of a vector, it is particularly useful for accessing a number of elements at once. Another operator, the double square-bracket ([[ ]]), exists for specifically accessing a single element. While not particularly useful for vectors, it comes into its own for data frames.
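The difference is easiest to see on a named vector (the vector v and its names below are invented for illustration): the single bracket returns a length-one vector that keeps its name, whereas the double bracket returns the bare element with the name dropped.

```r
v <- c(a = 1, b = 2, c = 3)  # a small named numeric vector

v[2]    # single bracket: a named vector of length 1 (the name "b" is kept)
v[[2]]  # double bracket: just the element itself, 2, with the name dropped
```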
Logical operations
In the section on accessing elements of vectors, we saw how to use a simple logical expression involving the less than sign (<) to produce a logical vector, which could then be used to select elements less than a certain value. This type of logical operation is a very useful thing to be able to do. As well as <, there are a handful of other comparison operators. Here is the full set (see R's help page ?Comparison for more details):
• < (less than) and <= (less than or equal to)
• > (greater than) and >= (greater than or equal to)
• == (equal to^[4]) and != (not equal to)
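Each comparison is applied element-by-element and returns a logical vector. A toy example (the vector x is invented for illustration):

```r
x <- c(2, 5, 5, 9)

x < 5   # TRUE FALSE FALSE FALSE
x <= 5  # TRUE TRUE TRUE FALSE
x > 5   # FALSE FALSE FALSE TRUE
x == 5  # FALSE TRUE TRUE FALSE (note the double equals sign)
x != 5  # TRUE FALSE FALSE TRUE
```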
Even more flexibility can be gained by combining logical vectors using and, or, and not. For example, we might want to identify which US states have an area less than 10 000 or greater than 100 000
square miles, or to identify which have an area greater than 100 000 square miles and a short name. The code below shows how these combinations can be expressed, using the following R symbols:
• & ("and")
• | ("or")
• ! ("not")
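These also work element-by-element, so more complicated conditions can be built up. Again using an invented toy vector:

```r
x <- c(3, 7, 12, 18)

x > 5 & x < 15  # TRUE where both conditions hold: FALSE TRUE TRUE FALSE
x < 5 | x > 15  # TRUE where either condition holds: TRUE FALSE FALSE TRUE
!(x > 5)        # TRUE where the condition fails: TRUE FALSE FALSE FALSE
```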
When using logical vectors, the following functions are particularly useful, as illustrated below
• which() identifies which elements of a logical vector are TRUE
• sum() can be used to give the number of elements of a logical vector which are TRUE. This is because sum() forces its input to be converted to numbers, and if TRUE and FALSE are converted to
numbers, they take the values 1 and 0 respectively.
• ifelse() returns different values depending on whether each element of a logical vector is TRUE or FALSE. Specifically, a command such as ifelse(aLogicalVector, vectorT, vectorF) takes
aLogicalVector and returns, for each element that is TRUE, the corresponding element from vectorT, and for each element that is FALSE, the corresponding element from vectorF. An extra elaboration
is that if vectorT or vectorF are shorter than aLogicalVector they are extended by duplication to the correct length.
### In these examples, we'll reuse the American states data, especially the state names
### To remind yourself of them, you might want to look at the vector "state.name"

nchar(state.name) # nchar() returns the number of characters in strings of text ...
nchar(state.name) <= 6 #so this indicates which states have names of 6 letters or fewer
ShortName <- nchar(state.name) <= 6 #store this logical vector for future use
sum(ShortName) #With a logical vector, sum() tells us how many are TRUE (11 here)
which(ShortName) #These are the positions of the 11 elements which have short names
state.name[ShortName] #Use the index operator [] on the original vector to get the names
state.abb[ShortName] #Or even on other vectors (e.g. the 2 letter state abbreviations)

isSmall <- state.area < 10000 #Store a logical vector indicating states <10000 sq. miles
isHuge <- state.area > 100000 #And another for states >100000 square miles in area
sum(isSmall) #there are 8 "small" states
sum(isHuge) #coincidentally, there are also 8 "huge" states

state.name[isSmall | isHuge] # | means OR. So these are states which are small OR huge
state.name[isHuge & ShortName] # & means AND. So these are huge AND with a short name
state.name[isHuge & !ShortName]# ! means NOT. So these are huge and with a longer name

### Examples of ifelse() ###

ifelse(ShortName, state.name, state.abb) #mix short names with abbreviations for long ones
# (think of this as "*if* ShortName is TRUE then use state.name *else* use state.abb)

### Many functions in R increase input vectors to the correct size by duplication ###
ifelse(ShortName, state.name, "tooBIG") #A silly example: the 3rd argument is duplicated
size <- ifelse(isSmall, "small", "large") #A more useful example, for both 2nd & 3rd args
size #might be useful as an indicator variable?
ifelse(size=="large", ifelse(isHuge, "huge", "medium"), "small") #A more complex example
> ### In these examples, we'll reuse the American states data, especially the state names
> ### To remind yourself of them, you might want to look at the vector "state.name"
> nchar(state.name) # nchar() returns the number of characters in strings of text ...
[1] 7 6 7 8 10 8 11 8 7 7 6 5 8 7 4 6 8 9 5 8 13 8 9 11 8 7 8 6 13
[30] 10 10 8 14 12 4 8 6 12 12 14 12 9 5 4 7 8 10 13 9 7
> nchar(state.name) <= 6 #so this indicates which states have names of 6 letters or fewer
[1] FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE TRUE FALSE FALSE
[15] TRUE TRUE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
[29] FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE TRUE FALSE FALSE FALSE FALSE FALSE
[43] TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE
> ShortName <- nchar(state.name) <= 6 #store this logical vector for future use
> sum(ShortName) #With a logical vector, sum() tells us how many are TRUE (11 here)
[1] 11
> which(ShortName) #These are the positions of the 11 elements which have short names
[1] 2 11 12 15 16 19 28 35 37 43 44
> state.name[ShortName] #Use the index operator [] on the original vector to get the names
[1] "Alaska" "Hawaii" "Idaho" "Iowa" "Kansas" "Maine" "Nevada" "Ohio" "Oregon"
[10] "Texas" "Utah"
> state.abb[ShortName] #Or even on other vectors (e.g. the 2 letter state abbreviations)
[1] "AK" "HI" "ID" "IA" "KS" "ME" "NV" "OH" "OR" "TX" "UT"
> isSmall <- state.area < 10000 #Store a logical vector indicating states <10000 sq. miles
> isHuge <- state.area > 100000 #And another for states >100000 square miles in area
> sum(isSmall) #there are 8 "small" states
[1] 8
> sum(isHuge) #coincidentally, there are also 8 "huge" states
[1] 8
> state.name[isSmall | isHuge] # | means OR. So these are states which are small OR huge
[1] "Alaska" "Arizona" "California" "Colorado" "Connecticut"
[6] "Delaware" "Hawaii" "Massachusetts" "Montana" "Nevada"
[11] "New Hampshire" "New Jersey" "New Mexico" "Rhode Island" "Texas"
[16] "Vermont"
> state.name[isHuge & ShortName] # & means AND. So these are huge AND with a short name
[1] "Alaska" "Nevada" "Texas"
> state.name[isHuge & !ShortName]# ! means NOT. So these are huge and with a longer name
[1] "Arizona" "California" "Colorado" "Montana" "New Mexico"
> ### Examples of ifelse() ###
> ifelse(ShortName, state.name, state.abb) #mix short names with abbreviations for long ones
[1] "AL" "Alaska" "AZ" "AR" "CA" "CO" "CT" "DE" "FL"
[10] "GA" "Hawaii" "Idaho" "IL" "IN" "Iowa" "Kansas" "KY" "LA"
[19] "Maine" "MD" "MA" "MI" "MN" "MS" "MO" "MT" "NE"
[28] "Nevada" "NH" "NJ" "NM" "NY" "NC" "ND" "Ohio" "OK"
[37] "Oregon" "PA" "RI" "SC" "SD" "TN" "Texas" "Utah" "VT"
[46] "VA" "WA" "WV" "WI" "WY"
> # (think of this as "*if* ShortName is TRUE then use state.name *else* use state.abb)
> ### Many functions in R increase input vectors to the correct size by duplication ###
> ifelse(ShortName, state.name, "tooBIG") #A silly example: the 3rd argument is duplicated
[1] "tooBIG" "Alaska" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG"
[10] "tooBIG" "Hawaii" "Idaho" "tooBIG" "tooBIG" "Iowa" "Kansas" "tooBIG" "tooBIG"
[19] "Maine" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG"
[28] "Nevada" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "Ohio" "tooBIG"
[37] "Oregon" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG" "Texas" "Utah" "tooBIG"
[46] "tooBIG" "tooBIG" "tooBIG" "tooBIG" "tooBIG"
> size <- ifelse(isSmall, "small", "large") #A more useful example, for both 2nd & 3rd args
> size #might be useful as an indicator variable?
[1] "large" "large" "large" "large" "large" "large" "small" "small" "large" "large"
[11] "small" "large" "large" "large" "large" "large" "large" "large" "large" "large"
[21] "small" "large" "large" "large" "large" "large" "large" "large" "small" "small"
[31] "large" "large" "large" "large" "large" "large" "large" "large" "small" "large"
[41] "large" "large" "large" "large" "small" "large" "large" "large" "large" "large"
> ifelse(size=="large", ifelse(isHuge, "huge", "medium"), "small") #A more complex example
[1] "medium" "huge" "huge" "medium" "huge" "huge" "small" "small" "medium"
[10] "medium" "small" "medium" "medium" "medium" "medium" "medium" "medium" "medium"
[19] "medium" "medium" "small" "medium" "medium" "medium" "medium" "huge" "medium"
[28] "huge" "small" "small" "huge" "medium" "medium" "medium" "medium" "medium"
[37] "medium" "medium" "small" "medium" "medium" "medium" "huge" "medium" "small"
[46] "medium" "medium" "medium" "medium" "medium"
If you have done any computer programming, you may be more used to dealing with logic in the context of "if" statements. While R also has an if() statement, it is less useful when dealing with vectors. For example, the following R expression
if(aVariable == 0) print("zero") else print("not zero")
expects aVariable to be a single number: it outputs "zero" if this number is 0, or "not zero" if it is a number other than zero. If aVariable is a vector of 2 values or more, only the first element counts: everything else is ignored. There are also logical operators which ignore everything but the first element of a vector: these are && for AND and || for OR.
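To make the distinction concrete, here is a minimal sketch (the vector v is invented for illustration):

```r
v <- c(0, 5, 0)
v == 0                    #elementwise comparison: TRUE FALSE TRUE
if(v[1] == 0) print("zero") else print("not zero")  #if() tests a single value
(v > 0) & (v < 10)        #& works elementwise: FALSE TRUE FALSE
(v[2] > 0) && (v[2] < 10) #&& combines single values, as typically used in if(): TRUE
```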
Missing data
When collecting data, it is often the case that certain data points are unknown. This happens for a variety of reasons. For example, when analysing experimental data, we might be recording a number
of variables for each experiment (e.g. time of day, etc.), yet have forgotten (or been unable) to record one of them in one instance. Or when collecting social data on US states, it might be that certain states do not record certain statistics of interest. Another example is the ship passenger data from the sinking
of the Titanic, where careful research has identified the ticket class of all 2207 people on board, but not been able to ascertain the age of 10 or so of the victims (see
We could just omit missing data, but in many cases, we have information for some variables, but not for others. For example, we might not want to completely omit a US state from an analysis, just
because it is missing one particular datum of interest. For this reason, R provides a special value, NA, meaning "not available". Any vector, numeric, character, or logical, can have elements which
are NA. These can be identified by the function "is.na".
1. some.missing <- c(1,NA)
2. is.na(some.missing)
> some.missing <- c(1,NA)
> is.na(some.missing)
[1] FALSE  TRUE
Note that some analyses are hard to do if there are missing data. You can use "complete.cases" or "na.omit" to construct datasets with the missing values omitted.
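As a brief sketch of these functions in action (the heights and weights here are made up):

```r
heights <- c(1.75, NA, 1.62, 1.80)
is.na(heights)             #FALSE TRUE FALSE FALSE
mean(heights)              #NA: most summaries propagate missing values
mean(heights, na.rm=TRUE)  #many functions accept na.rm to drop NAs first
na.omit(heights)           #the vector with the missing element removed
d <- data.frame(height=heights, weight=c(70, 65, NA, 80))
complete.cases(d)          #TRUE only for rows with no missing values
```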
Measurement values
One important feature of any variable is the values it is allowed to have. For example, a variable such as gender can only take a limited number of values ('Male' and 'Female' in this instance),
whereas a variable such as humanHeight can take any numerical value between about 0 and 3 metres. This is the sort of obvious background information that cannot necessarily be inferred from the data,
but which can be vital for analysis. Only a limited amount of this information is usually fed directly into statistical analysis software. As always, it's very important to take account of such
background information. This can be usually done using that commodity - unavailable to a computer - known as common sense. For example, a computer could be used to perform an analysis of human height
without realising that one person has been recorded as (say) 175, rather than 1.75, metres tall. A computer can blindly perform analysis on this variable without noticing the error, even though it is
glaringly obvious to a human^[8]. That's one of the primary reasons that it is important to plot data before analysis.
Categorical versus quantitative variables
Nevertheless, a few bits of information about a variable can (indeed, often must) be given to analysis software. Nearly all statistical software packages require you, at a minimum, to distinguish
between categorical variables (e.g. gender) in which each data point takes one of a fixed number of pre-defined "levels", and quantitative variables (e.g. humanHeight) in which each data point is a
number on a well-defined scale. Further examples are given in Table 2.1. This distinction is important even for such simple analyses as taking an average: a procedure which is meaningful for a
quantitative variable, but rarely for a categorical variable (what is the "average" sex of a 'male' and a 'female'?).
Table 2.1 Examples of types of variables. Many of these variables are used later in this book.
Categorical (also known as "qualitative" variables or "factors"):
• The variable gender, where each data point is the gender of a human (i.e. the levels are 'Male' or 'Female')
• The variable class, where each data point is the class of a passenger on a ship (the levels being '1st', '2nd', or '3rd')
Quantitative (also known as "numeric" variables or "covariates"):
• The variable weightChange, where each point is the weight change of an anorexic patient over a fixed period.
• The variable landArea, where each data point is a positive number giving the area of a piece of land.
• The variable shrimp, where each data point is the determination of the percentage of shrimp in a preparation of shrimp cocktail (!)
• The variable deathsPerYear, where each data point is a count of the number of individuals who have died from a particular cause in a particular year.
• The variable cases, where each data point is a count of the number of people in a particular group who have a specific disease. This may be meaningless unless we also have some measure of the group size. This can be done via another variable (often labelled controls) indicating the number of people in each group who have not developed the disease (grouped case/control studies of this sort are common in areas such as medicine).
• The variable temperatureF, where each data point is an average monthly temperature in degrees Fahrenheit.
• The variable direction, where each data point is a compass direction in degrees.
It is not always immediately obvious from the plain data whether a variable is categorical or quantitative: often this judgement must be made by careful consideration of the context of the data. For
example, a variable containing numbers 1, 2, and 3 might seem to be a numerical quantity, but it could just as easily be a categorical variable describing (say) a medical treatment using either drug
1, drug 2, or drug 3. More rarely, a seemingly categorical variable such as colour (levels 'blue', 'green', 'yellow', 'red') might be better represented as a numerical quantity such as the wavelength
of light emitted in an experiment. Again, it's your job to make this sort of judgement, on the basis of what you are trying to do.
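A small sketch of this judgement in R (the treatment codes are invented): the same numbers behave quite differently once declared categorical.

```r
treatment <- c(1, 2, 3, 1, 2)  #numeric codes for three drugs
mean(treatment)                #1.8, but a "mean drug" is meaningless
drug <- factor(treatment)      #declare the variable categorical instead
levels(drug)                   #"1" "2" "3"
table(drug)                    #counts per level: a meaningful summary
```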
Borderline categorical/quantitative variables
Despite the importance of the categorical/quantitative distinction (and its prominence in many textbooks), reality is not always so clear-cut. It can sometimes be reasonable to treat categorical
variables as quantitative, or vice versa. Perhaps the commonest case is when the levels of a categorical variable seem to have a natural order, such as the class variable in Table 2.1, or the Likert
scale often used in questionnaires.
In rare and specific circumstances, and depending on the nature of the question being asked, there may be rough numerical values that can be allocated to each level. For example, maybe a survey
question is accompanied by a visual scale on which the Likert categories are marked, from 'absolutely agree' to 'absolutely disagree'. In this case it may be justifiable to convert the categorical
variable straight into a quantitative one.
More commonly, the order of levels is known, but exact values cannot be generally agreed upon. Such categorical variables can be described as ordinal or ranked, as opposed to ones such as gender or professedReligion which are purely nominal. Hence we can recognise two major types of categorical variable: ordered ("ordinal") and unordered ("nominal"), as illustrated by the two examples in Table 2.1.
Classification of quantitative variables
Although the categorical/quantitative division is the most important one, we can further subdivide each type (as we have already seen when discussing categorical variables). The most commonly taught
classification is due to Stevens (1946). As well as dividing categorical variables into ordinal and nominal types, he classified quantitative variables into two types, interval or ratio, depending on
the nature of the scale that was used. To this classification can be added circular variables. Hence classifying quantitative variables on the basis of the measurement scale leads to three
subdivisions (as illustrated by the subdivisions in Table 2.1):
• Ratio data is the most commonly encountered. Examples include distances, lengths of time, numbers of items, etc. These variables are measured on a scale with a natural zero point. Because we can say that (e.g.) one distance is twice another, ratios are meaningful on such a scale; hence the name.
• Interval data is measured on a scale where there is no natural zero point. The most common examples are temperature (in degrees Centigrade or Fahrenheit) and calendar date. Since the zero point on the scale is essentially arbitrary, ratios are not meaningful, but intervals are; the name comes from this fact. E.g. ***
• Circular data is measured on a scale which "wraps around", such as Direction, TimeOfDay, Longitude etc. ***
The Stevens classification is not the only way to categorise quantitative variables. Another sensible division recognises the difference between continuous and discrete measurements. Specifically,
quantitative variables can represent either
• Continuous data, in which it makes sense to talk about intermediate values (e.g. 1.2 hours, 12.5%, etc.). This includes cases where data have been rounded ***.
• "Discrete data", where intermediate values are nonsensical (e.g. doesn't make much sense to talk about 1.2 deaths, or 2.6 cancer cases in a group of 10 people). Often these are counts of things:
this is sometime known as meristic data.
In practice, discrete data are often treated as continuous, especially when the units into which they are divided are relatively small. For example, the population size of different countries is
theoretically discrete (you can't have half a person), but the values are so huge that it may be reasonable to treat such data as continuous. However, for small values, such as the number of people
in a household, the data are rather "granular", and the discrete nature of values becomes very apparent. One common result of this is the presence of multiple repeated values (e.g. there will be a
lot of 2 person households in most data sets).
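A quick sketch of this granularity (the household sizes here are made up):

```r
household <- c(2, 1, 2, 4, 2, 3, 2, 1, 2, 5)  #sizes of ten hypothetical households
table(household)  #the repeated values are obvious: five households of size 2
```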
A third way of classifying quantitative variables depends on whether the scale has upper or lower bounds, or even both.
• bounded at one end (e.g. landArea cannot be below 0),
• bounded at both ends (e.g. percentages cannot be less than 0 or greater than 100). Also see censored data ***
• unbounded (e.g. weightChange).
The most important special case is circular data, which often requires very different analytical tools; it is often best made linear in some way (e.g. by taking the difference from a fixed direction).
Interval data, which is rather rare, cannot use ratios (division).
Bounds are very common at the lower end; it is unusual to have only an upper bound, and bounds at both ends often indicate a percentage. Bounded variables are often treated by transformation (e.g. taking logarithms).
Count data can produce multiple identical values, which can affect plotting etc. If the data are true independent counts, this also indicates an appropriate error function.
The distinctions between the different types of variables are summarised in Figure ***. Note that it is also common to
Independence of data points
Does the actual value cause correlations in "surrounding" values (e.g. time series), or do both reflect some common association (e.g. blocks/heterogeneity)?
Time series Spatial data Blocks
Incorporating information
Time series, other sources of non-independence
Categorical variables in R are stored as a special vector object known as a factor. This is not the same as a character vector filled with a set of names (don't get the two mixed up). In particular, R has to be told that each element can only be one of a number of known levels. If you try to place a data point with a different, unknown level into the factor, R will complain. When you print a factor to the screen, R will also list the possible levels that factor can take (this may include ones that aren't present).
The factor() function creates a factor and defines the available levels. By default the levels are taken from the ones in the vector***. Actually, you don't often need to use factor(), because when
reading data in from a file, R assumes by default that text should be converted to factors (see Statistical Analysis: an Introduction using R/R/Data frames). You may need to use as.factor().
Internally, R stores the levels as numbers from 1 upwards, but it is not always obvious which number corresponds to which level, and it should not normally be necessary to know.
Ordinal variables, that is factors in which the levels have a natural order, are known to R as ordered factors. They can be created in the normal way a factor is created, but in addition specifying ordered=TRUE.
state.region #An example of a factor: note that the levels are printed out
state.name #this is *NOT* a factor
state.name[1] <- "Any text" #you can replace text in a character vector
state.region[1] <- "Any text" #but you can't in a factor
state.region[1] <- "South" #this is OK
state.abb #this is not a factor, just a character vector
character.vector <- c("Female", "Female", "Male", "Male", "Male", "Female", "Female", "Male", "Male", "Male", "Male", "Male", "Female", "Female" , "Male", "Female", "Female", "Male", "Male", "Male", "Male", "Female", "Female", "Female", "Female", "Male", "Male", "Male", "Female" , "Male", "Female", "Male", "Male", "Male", "Male", "Male", "Female", "Male", "Male", "Male", "Male", "Female", "Female", "Female") #a bit tedious to do all that typing
#might be easier to use codes, e.g. 1 for female and 2 for male
Coded <- factor(c(1, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 1, 2, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2, 1, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 1, 1))
Gender <- factor(Coded, labels=c("Female", "Male")) #we can then convert this to named levels
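A sketch of creating an ordered factor by additionally specifying ordered=TRUE (the Likert-style responses here are invented):

```r
responses <- c("agree", "disagree", "neutral", "agree", "strongly agree")
likert <- factor(responses, ordered=TRUE,
                 levels=c("strongly disagree", "disagree", "neutral",
                          "agree", "strongly agree"))
likert             #printed with Levels: strongly disagree < ... < strongly agree
likert > "neutral" #comparisons now respect the ordering: TRUE FALSE FALSE TRUE TRUE
```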
Matrices and Arrays
Much statistical theory uses matrix algebra. While this book does not require a detailed understanding of matrices, it is useful to know a little about how R handles them.
Essentially, a matrix (plural: matrices) is the two dimensional equivalent of a vector. In other words, it is a rectangular grid of numbers, arranged in rows and columns. In R, a matrix object can be
created by the matrix() function, which takes, as a first argument, a vector of numbers with which the matrix is filled, and as the second and third arguments, the number of rows and the number of
columns respectively.
R can also use array objects, which are like matrices, but can have more than 2 dimensions. These are particularly useful for tables: a type of array containing counts of data classified according to
various criteria. Examples of these "contingency tables" are the HairEyeColor and Titanic tables shown below.
As with vectors, the indexing operator [] can be used to access individual elements or sets of elements in a matrix or array. This is done by separating the numbers inside the brackets by commas. For
example, for matrices, you need to specify the row index then a comma, then the column index. If the row index is blank, it is assumed that you want all the rows, and similarly for the columns.
1. m <- matrix(1:12, 3, 4) #Create a 3x4 matrix filled with numbers 1 to 12
2. m #Display it!
3. m*2 #Arithmetic, just like with vectors
4. m[2,3] #Pick out a single element (2nd row, 3rd column)
5. m[1:2, 2:4] #Or a range (rows 1 and 2, columns 2, 3, and 4.)
6. m[,1] #If the row index is missing, assume all rows
7. m[1,] #Same for columns
8. m[,2] <- 99 #You can assign values to one or more elements
9. m #See!
10. ###Some real data, stored as "arrays"
11. HairEyeColor #A 3D array
12. HairEyeColor[,,1] #Select only the males to make it a 2D matrix
13. Titanic #A 4D array
14. Titanic[1:3,"Male","Adult",] #A matrix of only the adult male passengers
> m <- matrix(1:12, 3, 4) #Create a 3x4 matrix filled with numbers 1 to 12
> m #Display it!
[,1] [,2] [,3] [,4]
[1,] 1 4 7 10
[2,] 2 5 8 11
[3,] 3 6 9 12
> m*2 #Arithmetic, just like with vectors
[,1] [,2] [,3] [,4]
[1,] 2 8 14 20
[2,] 4 10 16 22
[3,] 6 12 18 24
> m[2,3] #Pick out a single element (2nd row, 3rd column)
[1] 8
> m[1:2, 2:4] #Or a range (rows 1 and 2, columns 2, 3, and 4.)
[,1] [,2] [,3]
[1,] 4 7 10
[2,] 5 8 11
> m[,1] #If the row index is missing, assume all rows
[1] 1 2 3
> m[1,] #Same for columns
[1] 1 4 7 10
> m[,2] <- 99 #You can assign values to one or more elements
> m #See!
[,1] [,2] [,3] [,4]
[1,] 1 99 7 10
[2,] 2 99 8 11
[3,] 3 99 9 12
> ###Some real data, stored as "arrays"
> HairEyeColor #A 3D array
, , Sex = Male
Hair Brown Blue Hazel Green
Black 32 11 10 3
Brown 53 50 25 15
Red 10 10 7 7
Blond 3 30 5 8
, , Sex = Female
Hair Brown Blue Hazel Green
Black 36 9 5 2
Brown 66 34 29 14
Red 16 7 7 7
Blond 4 64 5 8
> HairEyeColor[,,1] #Select only the males to make it a 2D matrix
Hair Brown Blue Hazel Green
Black 32 11 10 3
Brown 53 50 25 15
Red 10 10 7 7
Blond 3 30 5 8
> Titanic #A 4D array
, , Age = Child, Survived = No
Class Male Female
1st 0 0
2nd 0 0
3rd 35 17
Crew 0 0
, , Age = Adult, Survived = No
Class Male Female
1st 118 4
2nd 154 13
3rd 387 89
Crew 670 3
, , Age = Child, Survived = Yes
Class Male Female
1st 5 1
2nd 11 13
3rd 13 14
Crew 0 0
, , Age = Adult, Survived = Yes
Class Male Female
1st 57 140
2nd 14 80
3rd 75 76
Crew 192 20
> Titanic[1:3,"Male","Adult",] #A matrix of only the adult male passengers
Class No Yes
1st 118 57
2nd 154 14
3rd 387 75
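Contingency tables like these can also be collapsed over the dimensions we are not interested in; a brief sketch:

```r
margin.table(Titanic, c(1, 4))  #counts by Class and Survived, summed over Sex and Age
apply(Titanic, c(2, 4), sum)    #the same idea via apply(): Sex by Survived
```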
R is very particular about what can be contained in a vector. All the elements need to be of the same type, and moreover must be of one of the allowed types: numbers, logical values, or strings of text.
If you want a collection of elements which are of different types, or not of one of the allowed vector types, you need to use a list.
1. l1 <- list(a=1, b=1:3)
2. l2 <- c(sqrt, log) #functions can't be stored in a vector, so this produces a list
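A brief sketch of accessing list elements (re-creating the two lists above so the example is self-contained):

```r
l1 <- list(a=1, b=1:3)
l1$a        #elements can be accessed by name: 1
l1[["b"]]   #or with [[ ]], by name or position: 1 2 3
l2 <- c(sqrt, log)  #c() applied to functions also produces a list
l2[[1]](16) #extract the first function and call it: 4
```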
Visualising a single variable
Before carrying out a formal analysis, you should always perform an Initial Data Analysis, part of which is to inspect the variables that are to be analysed. If there are only a few data points, the
numbers can be scanned by eye, but normally it is easier to inspect data by plotting.
Scatter plots, such as those in Chapter 1, are perhaps the most familiar sort of plot, and are useful for showing patterns of association between two variables. These are discussed later in this
chapter, but in this section we first examine the various ways of visualising a single variable.
Plots of a single variable, or univariate plots are particularly used to explore the distribution of a variable; that is its shape and position. Apart from initial inspection of data, one very common
use of these plots is to look at the residuals (see Figure 1.2): the unexplained part of the data that remains after fitting a statistical model. Assumptions about the distribution of these residuals
are often checked by plotting them.
The plots which follow illustrate a few of the more common types of univariate plot. The classic text is Tufte (cite: the visual display of quantitative information).
Categorical variables
For categorical variables, the choice of plots is quite simple. The most basic plots simply involve counting up the data points at each level.
• Figure 2.1: Categorical data plots
Figure 2.1(a) displays these counts as a bar chart; another possibility is to use points as in Figure 2.1(b). In the case of gender, the order of the levels is not important: either 'male' or
'female' could come first. In the case of class, the natural order of the levels is used in the plot. In the extreme case, where intermediate levels might be meaningful, or where you wish to
emphasise the pattern between levels, it may be reasonable to connect points by lines. For illustrative purposes, this has been done in Figure 2.1(b), although the reader may question whether it is
appropriate in this instance.
plot(1:length(Gender), Gender, yaxt="n"); axis(2, 1:2, levels(Gender), las=1)
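For the simple count-based plots of Figure 2.1(a), something like the following sketch should suffice (the Gender data here are invented for illustration):

```r
Gender <- factor(c("Female", "Male", "Male", "Female", "Male"))
barplot(table(Gender), ylab="Count")  #a bar chart of the counts at each level
plot(table(Gender))                   #an alternative: a spike at each level
```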
In some cases we may be interested in the actual sequence of data points. This is particularly so for time series data, but may also be relevant elsewhere. For instance, in the case of gender, the data was recorded in the order that each child was born. If we think that the preceding birth influences the following birth (unlikely in this case, but just within the realm of possibility if pheromones are involved), then we might want to draw a symbol-by-symbol plot. If we are looking for associations with time, however, then a bivariate plot may be more appropriate ***; if there are particular features of the data that we are interested in (e.g. repeat rate), then other possibilities exist (doi:10.1016/j.stamet.2007.05.001). See the chapter on time series.
Quantitative variables
Table 2.2: Land area (km^2)
of the 50 states of the USA
Alabama 133666
Alaska 1527463
Arizona 295022
Arkansas 137538
California 411012
Colorado 269998
Connecticut 12973
Delaware 5327
Florida 151669
Georgia 152488
Hawaii 16705
Idaho 216411
Illinois 146075
Indiana 93993
Iowa 145790
Kansas 213062
Kentucky 104622
Louisiana 125673
Maine 86026
Maryland 27394
Massachusetts 21385
Michigan 150778
Minnesota 217735
Mississippi 123583
Missouri 180485
Montana 381085
Nebraska 200017
Nevada 286297
New Hampshire 24097
New Jersey 20295
New Mexico 315113
New York 128401
North Carolina 136197
North Dakota 183021
Ohio 106764
Oklahoma 181089
Oregon 251179
Pennsylvania 117411
Rhode Island 3144
South Carolina 80432
South Dakota 199550
Tennessee 109411
Texas 692404
Utah 219931
Vermont 24887
Virginia 105710
Washington 176616
West Virginia 62628
Wisconsin 145438
Wyoming 253596
Table 2.3: A classic data set of the number of accidental deaths by horse-kick in 14 cavalry corps of the Prussian army from 1875 to 1894.
A quantitative variable can be plotted in many more ways than a categorical variable. Some of the most common single-variable plots are discussed below, using the land area of the 50 US states as our
example of a continuous variable, and a famous data set of the number of deaths by horse kick as our example of a discrete variable. These data are tabulated in Tables 2.2 and 2.3
Some sorts of data consist of many data points with identical values. This is particularly true for count data where there are low numbers of counts (e.g. number of offspring).
There are 3 things we might want to look for in these sorts of plots:
• points that seem extreme in some way (these are known as outliers). Outliers often reveal mistakes in data collection, and even if they don't, they can have a disproportionate effect on further
analysis. If it turns out that they aren't due to an obvious mistake, one option is to remove them from the analysis, but this causes problems of its own.
• shape & position of the distribution (e.g. normal, bimodal, etc.)
• similarity to known distributions (QQ)
We'll keep the focus mostly on the variable "landArea" ***
Figure 2.3: Basic plots
The simplest way to represent quantitative data is to plot the points on a line, as in Figure 2.3(a). This is often called a 'dot plot', although this name is also sometimes used to describe a number of other types of plot (e.g. Figure 2.7)^[12]. To avoid confusion, it may be best to call it a one-dimensional scatterplot. As well as simplicity, there are two advantages to a 1D scatterplot:
1. All the information present in the data is retained.
2. Outliers are easily identified. Indeed, it is often useful to be able to identify which data points are outliers. Some software packages allow you to identify points interactively (e.g. by
clicking points on the plot). Otherwise, points can be labelled, as has been done for some outlying points in Figure 2.3a^[13].
One dimensional scatterplots do not work so well for large datasets. Figure 2.3(a) consists of only 50 points. Even so, it is difficult to get an overall impression of the data, to (as the saying
goes) "see the wood for the trees". This is partly because some points lie almost on top of each other, but also because of the sheer number of closely placed points. It is often the case that
features of the data are best explored by summarising it in some way.
Figure 2.3(b) shows a step on the way to a better plot. To alleviate the problem of points obscuring each other, they have been displaced, or jittered sideways by a small, random amount. More
importantly, the data have been summarised by dividing into quartiles (and coloured accordingly, for ease of explanation). The quarter of states with the largest area have been coloured red. The
smallest quarter of states have been coloured green.
More generally, we can talk about the quantiles of our data^[14]. The red line represents the 75% quantile: 75% of the points lie below it. The green line represents the 25% quantile: 25% of the
points lie below it. The distance between these lines is known as the Interquartile Range (IQR), and is a measure of the spread of the data. The thick black line has a special name: the median. It
marks the middle of the data, the 50% quantile: 50% of the points lie above, and 50% lie below. A major advantage of quantiles over other summary measures is that they are relatively insensitive to
outliers, or changes in scale ****.
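These summaries are easily obtained in R, e.g. for the state areas:

```r
quantile(state.area)       #the 0%, 25%, 50%, 75% and 100% quantiles
median(state.area)         #the 50% quantile
IQR(state.area)            #the interquartile range: 75% minus 25% quantile
quantile(state.area, 0.9)  #any other quantile can be requested directly
```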
Figure 2.3(c) is a coloured version of a widely used statistical summary plot: the boxplot. Here it has been coloured to show the correspondence to Figure 2.3(b). The box marks the quartiles of the
data, with the median marked within the box. If the median is not positioned centrally within the box, this is often an indicator that the data are skewed in some way. The lines on either side of the
box are known as "whiskers", and summarise the data which lies outside of the upper and lower quartiles. In this case, the whiskers have simply been extended to the maximum and minimum observed values.
Figure 2.3(d) is a more sophisticated boxplot of the same data. Here, notches have been drawn on the box: these are useful for comparing the medians in different boxplots. The whiskers have been
shortened so that they do not include points considered as outliers. There are various ways of defining these outliers automatically. This figure is based on a convention that considers outlying
points as those more than one and a half times the IQR from either side of the box. However it is often more informative to identify and inspect interesting points (including outliers) by visual inspection. For example, in Figure 2.3a it is clear that Alaska and (to a lesser extent) Texas are unusually large states, but that California (identified as an outlier by this automatic procedure) is not so set-apart from the rest.
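As a sketch, the one-and-a-half-times-IQR convention can be applied by hand, or via boxplot.stats():

```r
q <- quantile(state.area, c(0.25, 0.75))
fence <- 1.5 * IQR(state.area)         #one and a half times the interquartile range
state.name[state.area > q[2] + fence]  #states beyond the upper whisker
state.name[state.area < q[1] - fence]  #states beyond the lower whisker
boxplot.stats(state.area)$out          #the values boxplot() itself would flag
```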
Figure 2.4: Basic plots for discrete data
One problem with plotting on a single line is that, if points are repeated, there ***. This is particularly problematic for discrete data. NB, there is no particular reason (or established
convention) for these plots to be vertical. Figure 2.2 shows . The stacked plot (Figure 2.4d) is similar to a histogram (Figure 2.5).
This gives another way of picturing the median & other quantiles: as dividing the area into sections ***
We can space out the points along the other axis. For example, if the order of points in the dataset is meaningful, we can just plot each point in turn. This is true for von Bortkiewicz's horse-kick data.
The data by year are plotted in Figure 2.6.
One thing we can always do is to sort the data points by their value, plotting the smallest first, etc. This is seen in Figure 2.3b. If all the data points were equally spaced (and excluded each other****), we would see a straight line. The plot for the logged variables shows that this transformation has evened out the spacing somewhat. This is called a quantile plot. When the axes are swapped, the result is called the empirical cumulative distribution function. The unswapped form is useful for understanding QQ plots, and also for understanding quantiles, the median, etc.
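A rough sketch of these two views of the same data:

```r
plot(sort(state.area))       #a quantile plot: sorted values against rank
plot(ecdf(state.area))       #axes swapped: the empirical cumulative distribution
plot(sort(log(state.area)))  #logging evens out the spacing of the points
```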
We could put a scale break in, but a better option is usually to transform the variable.
Figure 2.8: The effect of log transformation
Sometimes, plotting on a different scale (e.g. a logarithmic scale) can be more informative. We can visualise this either as a plot with a non-linear (e.g. logarithmic) axis, or as a conventional
plot of a transformed variable (e.g. a plot of log(my.variable) on a standard, linear axis). Figure 2.1(b) illustrates this point: the left axis marks the state areas, the right axis marks the
logarithm of the state areas. This sort of rescaling can highlight quite different features of a variable. In this case, it seems clear that there are a batch of nine states which seem distinctly
smaller than most, and while Alaska still seems extraordinarily large, Texas does not seem so unusual in that respect. This is also reflected by the automatic labelling of outliers in the
log-transformed variable.
It is particularly common for smaller numbers to have greater resolution. As discussed in later chapters ***, logarithmic scales are particularly useful for multiplicative data ***.
Figure 2.8: The effect of a square-root transformation.
There are other common transformations, for example, the square-root transformation (often used for count data). This may be more appropriate for state areas, if the limiting factor for state size is
(e.g.) the distance across the state, or factors associated with it (e.g. the length of time to cross from one side to another). Figure 2.1c shows a sqrt rescaling of the data. You can see that in
some sense this is less extreme than the log transformation...
Univariate plots
Producing rough plots in R is extremely easy, although it can be time consuming tweaking them to get a certain look. The defaults are usually sensible.
1. stripchart(state.area, xlab="Area (sq. miles)") # see method="stack" & method="jitter" for others
2. boxplot(sqrt(state.area))
3. hist(sqrt(state.area))
4. hist(sqrt(state.area), 25)
5. plot(density(sqrt(state.area)))
6. plot(UKDriverDeaths)
8. qqnorm(sqrt(state.area))
9. plot(ecdf(sqrt(state.area)))
Multiple variables in a table. Notation. most packages do this.
Statistical Analysis: an Introduction using R/R/Data frames
Statistical Analysis: an Introduction using R/R/Reading in data
Bivariate plotting
Quantitative versus quantitative
Scatter plots problems with overplotting? Sunflowerplots etc.
Quantitative versus categorical
Vioplots (&boxplots)
Categorical versus categorical
Statistical Analysis: an Introduction using R/R/Bivariate plots
1. ↑ The convention (which is followed here) is to write variable names in italics
2. ↑ These are special words in R, and cannot be used as names for objects. The objects T and F are temporary shortcuts for TRUE and FALSE, but if you use them, watch out: since T and F are just
normal object names you can change their meaning by overwriting them.
3. ↑ If the logical vector is shorter than the original vector, then it is sequentially repeated until it is of the right length
4. ↑ Note that, when using continuous (fractional) numbers, rounding error may mean that results of calculations are not exactly equal to each other, even if they seem as if they should be. For this
reason, you should be careful when using == with continuous numbers. R provides the function all.equal to help in this case
5. ↑ But unlike ifelse, it can't cope with NA values
6. ↑ For this reason, using == in if statements may not be a good idea, see the Note in ?"==" for details.
7. ↑ These are particularly used in more advanced computer programming in R, see ?"&&" for details
8. ↑ Similar examples are given in Chatfield ****
9. ↑ There are actually 3 types of allowed numbers: "normal" numbers, complex numbers, and simple integers. This book deals almost exclusively with the first of these.
10. ↑ This is not quite true, but unless you are a computer specialist, you are unlikely to use the final type: a vectors of elements storing "raw" computer bits, see ?raw
11. ↑ This dataset was collected by the Russian economist von Bortkiewicz, in 1898, to illustrate the pattern seen when events occur independently of each other (this is known as a Poisson
distribution). The table here gives the total number of deaths summed over all 14 corps. For the full dataset, broken down into corps, see Statistical Analysis: an Introduction using R/Datasets
12. ↑ Authors such as Wild & Seber call it a dot plot, but R uses the term "dotplot" to refer to a Cleveland (1985) dot-plot as shown in Figure 2.1(b). Other authors (***) specifically use it to
refer to a sorted ("quantile") plot as used in Figure *** (cite)
13. ↑ Labels often obscure the plot, so for plots intended only for viewing on a computer, it is possible to print labels in a size so small that they can only be seen when zooming in.
14. ↑ There are many different methods for calculating the precise value of a quantile when it lies between two points. See Hyndman and Fan (1996)
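To make this footnote concrete, here is a short illustration (in Python/NumPy, added here; the note itself does not prescribe any implementation). NumPy exposes several of the Hyndman–Fan conventions through the `method` argument of `numpy.quantile` (named `interpolation` before NumPy 1.22), and the conventions genuinely disagree whenever the quantile falls between two data points:

```python
import numpy as np

# Lower quartile of four points under several interpolation conventions
# (cf. Hyndman & Fan 1996).  Requires NumPy >= 1.22 for the `method` name.
data = np.array([1.0, 2.0, 3.0, 4.0])

results = {m: float(np.quantile(data, 0.25, method=m))
           for m in ("linear", "lower", "higher", "nearest", "midpoint")}
print(results)
```

For these four points the conventions give 1.75, 1.0, 2.0, 2.0 and 1.5 respectively, which is why a textbook and a software package can legitimately report different quartiles for the same data.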
Last modified on 19 July 2010, at 14:28 | {"url":"http://en.m.wikibooks.org/wiki/Statistical_Analysis:_an_Introduction_using_R/Chapter_2","timestamp":"2014-04-17T12:40:28Z","content_type":null,"content_length":"151870","record_id":"<urn:uuid:07b414a2-7f8b-4491-863e-2eb736fa5c3b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
The fastest path through a network with random time-dependent travel times
Results 1 - 10 of 40
- In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence , 1995
"... Standard algorithms for finding the shortest path in a graph require that the cost of a path be additive in edge costs, and typically assume that costs are deterministic. We consider the problem
of uncertain edge costs, with potential probabilistic dependencies among the costs. Although these depend ..."
Cited by 30 (3 self)
Standard algorithms for finding the shortest path in a graph require that the cost of a path be additive in edge costs, and typically assume that costs are deterministic. We consider the problem of
uncertain edge costs, with potential probabilistic dependencies among the costs. Although these dependencies violate the standard dynamic-programming decomposition, we identify a weaker stochastic consistency condition that justifies a generalized dynamic-programming approach based on stochastic dominance. We present a revised path-planning algorithm and prove that it produces optimal paths
under time-dependent uncertain costs. We illustrate the algorithm by applying it to a model of stochastic bus networks, and present sample performance results comparing it to some alternatives. For
the case where all or some of the uncertainty is resolved during path traversal, we extend the algorithm to produce optimal policies. This report is based on a paper presented at the Eleventh
Conference on Unc...
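The stochastic-dominance comparison underlying this abstract can be illustrated in a few lines of Python (added here as an illustration; the distributions below are made up, and the paper's actual algorithm is not reproduced). A discrete cost distribution A first-order dominates B if A's cumulative distribution is everywhere at least B's, in which case A is never worse for any increasing disutility, and in particular has lower expected cost:

```python
# First-order stochastic dominance for discrete cost distributions:
# A dominates B (A is at least as good) if P(A <= t) >= P(B <= t) for all t.
def cdf(dist, t):
    # dist: list of (value, probability) pairs
    return sum(p for v, p in dist if v <= t)

A = [(1, 0.5), (3, 0.5)]          # cost 1 or 3, equally likely
B = [(2, 0.5), (4, 0.5)]          # cost 2 or 4, equally likely

support = sorted({v for v, _ in A + B})
dominates = all(cdf(A, t) >= cdf(B, t) for t in support)

mean = lambda dist: sum(v * p for v, p in dist)
assert dominates
assert mean(A) <= mean(B)         # dominance implies lower expected cost
print(dominates, mean(A), mean(B))
```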
- Networks
"... In congested transportation and data networks, travel (or transmission) times are time-varying quantities that are at best known a priori with uncertainty. In such stochastic, time-varying (or
STV) networks, one can choose to use the a priori least-expected time (LET) path or one can make improved r ..."
Cited by 28 (0 self)
In congested transportation and data networks, travel (or transmission) times are time-varying quantities that are at best known a priori with uncertainty. In such stochastic, time-varying (or STV)
networks, one can choose to use the a priori least-expected time (LET) path or one can make improved routing decisions en route as traversal times on traveled arcs are experienced and arrival times
at intermediate locations are revealed. In this context, for a given origin–destination pair at a specific departure time, a single path may not provide an adequate solution, because the optimal path
depends on intermediate information concerning experienced traversal times on traveled arcs. Thus, a set of strategies, referred to as hyperpaths, are generated to provide directions to the
destination node conditioned upon arrival times at intermediate locations. In this paper, an efficient label-setting-based algorithm is presented for determining the adaptive LET hyperpaths in STV
networks. Such a procedure is useful in making critical routing decisions in Intelligent Transportation Systems (ITS) and data communication networks. A side-by-side comparison of this procedure with
a label-correcting-based algorithm for solving the same problem is made. Results of extensive computational tests to assess and compare the performance of both algorithms, as well as to investigate
the characteristics of the resulting hyperpaths, are presented. An illustrative example of both procedures is provided. © 2001 John Wiley & Sons, Inc.
- IEEE/ACM Transactions on Networking , 1993
"... We consider the problem of traveling with least expected delay in networks whose link delays change probabilistically according to Markov chains. This is a typical routing problem in dynamic computer communication networks. We formulate several optimization problems, posed on infinite and fi ..."
Cited by 17 (2 self)
We consider the problem of traveling with least expected delay in networks whose link delays change probabilistically according to Markov chains. This is a typical routing problem in dynamic computer communication networks. We formulate several optimization problems, posed on infinite and finite horizons, and consider them with and without using memory in the decision making process. We prove that all these problems are, in general, intractable. However, for networks with nodal stochastic delays, a simple polynomial optimal solution is presented. This is typical of high-speed networks, in which the dominant delays are incurred by the nodes. For more general networks, a tractable ε-optimal solution is presented.
"... The K shortest paths problem has been extensively studied for many years. Efficient methods have been devised, and many practical applications are known. Shortest hyperpath models have been
proposed for several problems in different areas, for example in relation with routing in dynamic networks. Ho ..."
Cited by 17 (4 self)
The K shortest paths problem has been extensively studied for many years. Efficient methods have been devised, and many practical applications are known. Shortest hyperpath models have been proposed
for several problems in different areas, for example in relation with routing in dynamic networks. However, the K shortest hyperpaths problem has not yet been investigated. In this paper we present
procedures for finding the K shortest hyperpaths in a directed hypergraph. This is done by extending existing algorithms for K shortest loopless paths. Computational experiments on the proposed
procedures are performed, and applications in transportation, planning and combinatorial optimization are discussed.
- In Proc. of International Conference on Automated Planning and Scheduling , 2006
"... We present new complexity results and efficient algorithms for optimal route planning in the presence of uncertainty. We employ a decision theoretic framework for defining the optimal route: for
a given source S and destination T in the graph, we seek an ST-path of lowest expected cost where the edg ..."
Cited by 13 (6 self)
We present new complexity results and efficient algorithms for optimal route planning in the presence of uncertainty. We employ a decision theoretic framework for defining the optimal route: for a
given source S and destination T in the graph, we seek an ST-path of lowest expected cost where the edge travel times are random variables and the cost is a nonlinear function of total travel time. Although this is a natural model for route-planning on real-world road networks, results are sparse due to the analytic difficulty of finding closed form expressions for the expected cost (Fan, Kalaba and Moore), as well as the computational/combinatorial difficulty of efficiently finding an optimal path which minimizes the expected cost. We identify a family of appropriate cost models and
travel time distributions that are closed under convolution and physically valid. We obtain hardness results for routing problems with a given start time and cost functions with a global minimum, in
a variety of deterministic and stochastic settings. In general the global cost is not separable into edge costs, precluding classic shortest-path approaches. However, using partial minimization
techniques, we exhibit an efficient solution via dynamic programming with low polynomial complexity.
- EUROPEAN JOURNAL OF OPERATIONAL RESEARCH , 1998
"... We consider routing problems in dynamic networks where arc travel times are both random and time dependent. The problem of finding the best route to a fixed destination is formulated in terms of
shortest hyperpaths on a suitable time-expanded directed hypergraph. The latter problem can be solved in ..."
Cited by 12 (5 self)
We consider routing problems in dynamic networks where arc travel times are both random and time dependent. The problem of finding the best route to a fixed destination is formulated in terms of
shortest hyperpaths on a suitable time-expanded directed hypergraph. The latter problem can be solved in linear time, with respect to the size of the hypergraph, for several definitions of hyperpath
length. Different criteria for ranking routes can be modeled by suitable definitions of hyperpath length. We also show that the problem becomes intractable if a constraint on the route structure is
- IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS , 2002
"... This paper examines the value of real-time traffic information to optimal vehicle routing in a nonstationary stochastic network. We present a systematic approach to aid in the implementation of
transportation systems integrated with real time information technology. We develop decisionmaking procedu ..."
Cited by 11 (1 self)
This paper examines the value of real-time traffic information to optimal vehicle routing in a nonstationary stochastic network. We present a systematic approach to aid in the implementation of
transportation systems integrated with real time information technology. We develop decisionmaking procedures for determining the optimal driver attendance time, optimal departure times, and optimal
routing policies under stochastically changing traffic flows based on a Markov decision process formulation. With a numerical study carried out on an urban road network in Southeast Michigan, we
demonstrate significant advantages when using this information in terms of total costs savings and vehicle usage reduction while satisfying or improving service levels for just-in-time delivery.
- IEEE Transactions on Mobile Computing , 2005
"... Wireless networks combined with location technology create new problems and call for new decision aids. As a precursor to the development of these decision aids, a concept of communication
distance is developed and applied to six situations. This concept allows travel time and bandwidth to be combin ..."
Cited by 8 (3 self)
Wireless networks combined with location technology create new problems and call for new decision aids. As a precursor to the development of these decision aids, a concept of communication distance
is developed and applied to six situations. This concept allows travel time and bandwidth to be combined in a single measure so that many problems can be mapped onto a weighted graph and solved
through shortest path algorithms. The paper looks at the problem of intercepting an out-of-communication team member and describes ways of using planning to reduce communication distance in
anticipation of a break in connection. The concept is also applied to ad hoc radio networks. A way of performing route planning using a bandwidth map is developed and analyzed. The general
implications of the work to transportation planning are discussed.
- Networks , 2003
"... The Shortest Path with Recourse Problem involves finding the shortest expected-length paths in a directed network each of whose arcs have stochastic traversal lengths (or delays) that become
known only upon arrival at the tail of that arc. The traveler starts at a given source node, and makes routin ..."
Cited by 8 (0 self)
The Shortest Path with Recourse Problem involves finding the shortest expected-length paths in a directed network each of whose arcs have stochastic traversal lengths (or delays) that become known
only upon arrival at the tail of that arc. The traveler starts at a given source node, and makes routing decisions at each node in such a way that the expected distance to a given sink node is
minimized. We develop an extension of Dijkstra’s algorithm to solve the version of the problem where arclengths are nonnegative and reset after each arc traversal. All known no-reset versions of the
problem are NP-hard. We make a partial extension to the case where negative arclengths are present.
, 1999
"... We consider stochastic networks in which link travel times are dependent, discrete random variables. We present methods for computing bounds on path travel times using stochastic dominance relationships among link travel times, and discuss techniques for controlling tightness of the bounds. We a ..."
Cited by 7 (5 self)
We consider stochastic networks in which link travel times are dependent, discrete random variables. We present methods for computing bounds on path travel times using stochastic dominance relationships among link travel times, and discuss techniques for controlling tightness of the bounds. We apply these methods to shortest-path problems, show that the proposed algorithm can provide bounds on the recommended path, and elaborate on extensions of the algorithm for demonstrating the anytime property.
Trace functions, II: Examples
Continuing after my last post, this one will be a list of examples of trace functions modulo some prime number $p$. For each of the examples, I will give a bound for its conductor, which I recall is
the main numerical invariant that allows us to measure the complexity of the trace function $K(n)$ (formally, the conductor is attached to the object $\mathcal{F}$ that gives rise to $K$, but we can
define the conductor of a trace function to be the minimal conductor of such a $\mathcal{F}$.) These objects $\mathcal{F}$ will be called sheaves, since this is the language used in the paper(s) of
Fouvry, Michel and myself, but one doesn’t need to know anything about sheaves to understand the examples.
I will start with a list of concrete functions which are trace functions, and then explain some of the basic operations one can perform on known trace functions to obtain new ones. All these examples
will be (I hope) very natural, but it is usually a deep theorem that the functions come from sheaves.
Throughout, $p$ is a fixed prime number. Generically, $\psi$ denotes a non-trivial additive character modulo $p$, for instance
$\psi(x)=e^{2i\pi x/p},$
(which may also be viewed casually as an $\ell$-adic character), and $\chi$ denotes a multiplicative character modulo $p$ (non-trivial, unless specified otherwise.)
(1) Characters and mixed characters
Let $f$ and $g$ be non-zero rational functions in $\mathbf{F}_p(T)$. Let
$K(x)=\psi(f(x))\chi(g(x))$
for $x$ which is not a pole of $f$, or a zero or pole of $g$, and $K(x)=0$ in that case. Then $K$ is a trace weight. The (or an) associated sheaf is of rank $1$, and its conductor is bounded by the
sum of degrees of numerators and denominators of $f$ and $g$. However, the size of the conductor arises for different reasons for $f$ and $g$: for the “additive” component $f$, singularities are
poles of $f$, and the contribution of each pole $x_0$ comes from the Swan conductor, which is bounded by the order of the pole at $x_0$; for the “multiplicative” component $g$, the singularities are
zeros and poles of $g$, and each only contributes $1$ to the conductor: the Swan conductors for $K_g=\chi(g(x))$ are all zero.
For analytic applications, the main point is that, by fixing $f$ and $g$ over $\mathbf{Q}$, one obtains for each $p$ large enough (so that the reduction modulo $p$ makes sense), and each choice of
characters $\psi$ and $\chi$, a trace weight associated to $f$ and $g$ which has conductor uniformly bounded (depending on $f$ and $g$ only). Thus any estimates valid for all primes with implied
constants depending only on the conductor of the trace functions involved will become an interesting estimate concerning $f$ and $g$. This applies to the main theorem of my paper with Fouvry and
Michel concerning orthogonality of Fourier coefficients of modular forms and trace functions…
These examples are the most classical, and are very useful. Even the simple case $g=1$ and $f(X)=X^{-1}$ is full of surprises.
(2) Fiber-counting functions
Another very useful example comes from a fixed non-constant rational function $f\in \mathbf{F}_p(T)$, which is viewed as defining a morphism
$f\,:\, \mathbf{P}^1\rightarrow \mathbf{P}^1.$
Consider then
$K(x)=|\{y\in \mathbf{P}^1\,\mid\, f(y)=x\}|.$
This is a trace weight, associated to the direct image sheaf
$f_*\bar{\mathbf{Q}}_{\ell},$
which in representation theoretic terms is an induced representation from a finite-index subgroup, so that it remains relatively simple.
Here the rank $r$ of the sheaf is the degree $\deg(f)$ of $f$ as a morphism (i.e., the generic number of pre-images of a point $x$); the singularities are the finitely many $x$ in $\mathbf{P}^1$ such
that the equation
$f(y)=x$
has fewer than $r$ solutions (in $\mathbf{P}^1(\bar{\mathbf{F}}_p)$) and, at least if $p>\deg(f)$, the Swan conductors vanish everywhere, so that the conductor is bounded in terms of the degrees of
the numerator and denominator of $f$ only. In particular, if $f$ is defined over $\mathbf{Q}$, varying $p$ (large enough) will provide a family of trace functions modulo primes with uniformly bounded
conductor, similar to the characters of the previous example with fixed rational functions as arguments.
The main reason this function is useful is that, for any other (arbitrary) function $\varphi$ on $\mathbf{P}^1(\mathbf{F}_p)$, we have tautologically
$\sum_{y\in \mathbf{P}^1(\mathbf{F}_p)}{\varphi(f(y))}=\sum_{x\in \mathbf{P}^1(\mathbf{F}_p)}{K(x)\varphi(x)}$
(in other words, it is maybe better to interpret $K$ as the image measure of the uniform measure on the finite set $\mathbf{P}^1(\mathbf{F}_p)$ under $f$, and this formula is the classical
“integration” formula for an image measure…)
One also often takes the function
$K(x)-1,$
where $1$ is the average of $K$ over $\mathbf{F}_p$. This is also a trace function (the sheaf corresponding to $K$ contains a trivial quotient, and this is the trace function of the kernel of the map
to this trivial quotient). We now have
$\sum_{x\in \mathbf{P}^1(\mathbf{F}_p)}{(K(x)-1)\varphi(x)}=\sum_{y\in \mathbf{P}^1(\mathbf{F}_p)}{\varphi(f(y))}-\sum_{x\in \mathbf{P}^1(\mathbf{F}_p)}{\varphi(x)}.$
(3) Number of points on families of algebraic varieties
More generally, we can count points on one-parameter families of algebraic varieties of dimension $d\geq 1$. For instance, families of elliptic curves or of more general curves are quite common. To
be concrete, one may have a polynomial $f\in \mathbf{F}_p[T,X,Y]$, where $T$ is seen as the parameter, and consider the curves
$C_t\,:\, f(t,X,Y)=0.$
Usually, it is not so much the number of points as the correction term that is most interesting. For instance, if the curves are generically geometrically irreducible, and have a single point at
infinity, the size of $C_t(\mathbf{F}_p)$ is (for all but finitely many $t$) of the form
$p-a(C_t),$
where $a(C_t)$ satisfies the Weil bound
$|a(C_t)|\leq 2g(C_t)\sqrt{p},$
in terms of the genus of $C_t$. In fact, once one ensures that the family of curves is such that the genus of the curves is the same $g\geq 0$ (for all but finitely many $t$), the function
$K(t)=a(C_t)$
is a trace function on the corresponding dense open set of $\mathbf{A}^1$, for some sheaf which has rank $2g$. For the other values of $t$, the trace function of the corresponding middle-extension
sheaf might differ from the value $a(C_t)$ defined as above using the number of points, but since the number of those singularities is bounded by the conductor, one can usually (analytically at
least) not worry too much about this. Similarly, in many cases the sheaf is tamely ramified everywhere (i.e., all Swan conductors vanish), and so the conductor is well-controlled.
In contrast with the first two examples, the construction of a sheaf with this trace function is not elementary: it is an example of the so-called “higher direct image sheaves” (with compact
support). Since, for every "good" $t$, the Riemann Hypothesis for curves shows that
$a(C_t)=\sqrt{p}(\theta_{1,t}+\cdots+\theta_{2g,t}),$
where the $\theta_{i,t}$ are complex numbers of modulus $1$, we can interpret the existence of this sheaf as saying that the algebraic variation of the “eigenvalues” $\theta_{i,t}$ is itself
controlled by an algebraic object. This is one of the main insights that algebraic geometry (and étale cohomology in particular) brings to analytic number theory.
The family of elliptic curves
in my bijective challenge is of this type.
(4) Families of Kloosterman sums
One of the great examples, for analytic number theory, is given by families of Kloosterman sums: for an integer $m\geq 1$, and a non-zero $a\in\mathbf{F}_p$, we let
$Kl_m(a)=\frac{(-1)^{m-1}}{p^{(m-1)/2}}\sum_{x_1\cdots x_m=a}e\Bigl(\frac{x_1+\cdots +x_m}{p}\Bigr).$
The Weil bound for $m=2$, and the even deeper work of Deligne for larger $m$, prove that
$|Kl_m(a)|\leq m$
for all $a$ invertible modulo $p$. Further work, relying once more on the powerful formalism of étale sheaves and higher direct images in particular, shows that the function
$a\mapsto Kl_m(a)$
is (the restriction to invertible $a$ of) a trace function for an irreducible sheaf, with conductor bounded in terms of $m$ only.
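As a numerical sanity check of the Weil bound in the case m = 2 (a Python sketch added here, not part of the original post): the normalized sums Kl₂(a) are real numbers of absolute value at most 2.

```python
import cmath, math

# Kl_2(a) = -(1/sqrt(p)) * sum_{xy = a} e((x + y)/p), with e(t) = exp(2*pi*i*t)
p = 13
e = lambda t: cmath.exp(2j * cmath.pi * t / p)

def kl2(a):
    # Parameterize xy = a by y = a * x^{-1} as x runs over invertible residues
    s = sum(e(x + a * pow(x, -1, p)) for x in range(1, p))
    return -s / math.sqrt(p)

values = [kl2(a) for a in range(1, p)]
assert all(abs(v.imag) < 1e-9 for v in values)    # the sums are real
assert all(abs(v.real) <= 2.0 for v in values)    # Weil bound |Kl_2(a)| <= 2
print(round(max(abs(v.real) for v in values), 4))
```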
(5) The Fourier transform
If we have a function $K(x)$ modulo $p$, we define its Fourier transform by
$\hat{K}(t)=\frac{1}{\sqrt{p}}\sum_{x\in \mathbf{F}_p}{K(x)e\Bigl(\frac{xt}{p}\Bigr)}$
for $t\in\mathbf{F}_p$ (the normalization here is convenient, as I will explain). It is now a very deep fact that, if $K$ comes from a sheaf, then so does $-\hat{K}$ (the minus sign is
natural, but this has to do with rather deep algebraic geometry…) More precisely, one has to be careful because of the fact that the Fourier transform of an additive character (as a function) is a
multiple of a delta function. The latter does fit nicely in the framework of étale sheaves, but not as a middle-extension sheaf or Galois representation (because it is zero on a dense open set, so it
would have to be zero to be a middle-extension sheaf or to come from a Galois representation). There is a geometric solution to this issue, but it involves speaking of perverse sheaves and related
machinery, which we have barely started to understand: the Fourier transform works perfectly well at the level of perverse sheaves, and one can use their trace functions just as well as those of
Galois representations. Since, in our current applications, we can always deal separately with additive characters (or delta functions), we have avoided having to deal with perverse sheaves (up to now).
The existence of the $\ell$-adic Fourier transform of sheaves was first proved by Deligne, but the theory of the sheaf-theoretic Fourier transform was largely built by Laumon (with further
contributions, in particular, from Brylinski and Katz). To illustrate how powerful it is, consider
$K(x)=e\Bigl(\frac{x^{-1}}{p}\Bigr)\quad\text{for } x\neq 0,$
a relatively simple case of Example (1). We then have
$-\hat{K}(t)=Kl_2(t)\quad\text{for } t\neq 0,$
so that the existence of the Fourier transform at the level of sheaves implies the existence of the Kloosterman sheaf parameterizing classical Kloosterman sums as in the previous example.
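The relationship between the simple Example (1) function with f(x) = x⁻¹ and the Kloosterman sums of Example (4) can be checked numerically. The following Python sketch (added as an illustration) computes the normalized Fourier transform of K(x) = e(x⁻¹/p), setting K(0) = 0, and compares it with Kl₂:

```python
import cmath, math

p = 11
e = lambda t: cmath.exp(2j * cmath.pi * t / p)

# K(x) = e(x^{-1}/p) for x != 0, and K(0) = 0
K = [0.0] + [e(pow(x, -1, p)) for x in range(1, p)]

def K_hat(t):
    # Normalized Fourier transform (1/sqrt(p)) * sum_x K(x) e(xt/p)
    return sum(K[x] * e(x * t) for x in range(p)) / math.sqrt(p)

def kl2(a):
    return -sum(e(x + a * pow(x, -1, p)) for x in range(1, p)) / math.sqrt(p)

for t in range(1, p):
    # Substituting x -> x^{-1} shows the two exponential sums coincide
    assert abs(-K_hat(t) - kl2(t)) < 1e-9
print("checked -K_hat(t) == Kl_2(t) for all t != 0, p =", p)
```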
Other examples that arise from our previous examples are many families of exponential sums, for instance
$K(t)=\frac{1}{\sqrt{p}}\sum_{x\in \mathbf{F}_p}{\chi(g(x))e\Bigl(\frac{f(x)+tx}{p}\Bigr)}$
(arising from Example (1); one must assume either that $f(x)$ is not a polynomial of degree $\leq 1$ or that $\chi$ is non-trivial to have a well-defined sheaf), or
$K(t)=\frac{1}{\sqrt{p}}\sum_{x}{e\Bigl(\frac{tf(x)}{p}\Bigr)}$
for $t\neq 0$, with $K(0)$ equal to the number of poles of $f$ (the sum over $x$ is over values where the rational function $f$ is defined), that arises from Example (2) (applied with the function $\varphi(x)=e(tx/p)$).
This operation of Fourier transform has one last crucial feature for applications to the analysis of trace functions: the conductor of $\hat{K}$ is bounded in terms of that of $K$ only. This is
something we prove in our paper using Laumon’s analysis of the singularities of the Fourier transform, and in fact we show that if the conductor of $K$ is at most $M\geq 1$, then the conductor of $\hat{K}$ is at most $10M^2$. Hence the examples above, if the rational functions $f$ (and/or $g$) are fixed in $\mathbf{Q}(T)$ and then reduced modulo various primes, always have conductor bounded
uniformly for all $p$.
(6) Change of variable
Given a non-constant rational function $f\in\mathbf{F}_p(T)$ seen as a morphism
$\mathbf{P}^1\rightarrow \mathbf{P}^1,$
and a trace function $K(x)$, one can form the function
$(f^*K)(x)=K(f(x)).$
This is again, essentially, a trace function: as in Example (3), one may have to tweak the values of $f^*K$ at some singularities (because pull-back of middle-extension sheaves do not always remain
so), but this is fairly easily controlled. Moreover, one can also control the conductor of $f^*K$ in terms of that of $K$, taking into account the degree of $f$. An especially simple case of great
importance is when $f$ is an homography
$f(x)=\frac{ax+b}{cx+d},\quad\quad\quad ad-bc\neq 0,$
(an automorphism of $\mathbf{P}^1$) in which case no tweaking is necessary to defined $f^*K$, and the conductor is the same as that of $K$ (which certainly seems natural!)
We can now compose these various operations. One construction is the following (a finite-field Bessel transform): start with $K$, apply the Fourier transform, change the variable $t$ to $t^{-1}$,
apply again the Fourier transform. If we call $\check{K}$ the resulting function, the examples above show that if $K$ is a trace function with conductor $\leq M$, then $\check{K}$ will also be one,
and its conductor will be bounded solely in terms of $M$ (in fact, it will be $\leq 100M^4$ by the bound discussed in Example (5)).
In the next post in this series, I will discuss the Riemann Hypothesis for trace functions and its applications. But before that, I will probably discuss the more recent works of Fouvry, Michel and myself,
since we now have three further papers in our series — two small, and one big.
Post a Comment | {"url":"http://blogs.ethz.ch/kowalski/2012/11/14/trace-functions-ii-examples/","timestamp":"2014-04-17T15:26:53Z","content_type":null,"content_length":"51702","record_id":"<urn:uuid:3ff646ca-1021-44ed-a90a-1e8fc5de6843>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
Out of Bounds
Is there a limit on how viscous fluids can be? Various string-theory-based calculations predict that the ratio of a fluid’s shear viscosity to its entropy density is bounded from below by $ħ/4π$ (or $ħ/(4πk_B)$, depending on the units). So far, only highly idealized models of fluids approach this limit, but physicists are actively debating if the quark-gluon plasma, generated in relativistic
heavy-ion collisions, might be an experimentally accessible example of a “perfect liquid” (See 26 October 2009 Trends).
Now, in a paper published in Physical Review Letters, Anton Rebhan and Dominik Steineder, both at the Vienna University of Technology in Austria, show that the theoretical bound on viscosity may well
be violated within the idealized setup of string theory itself. Rebhan and Steineder consider the string-theory-based description of an idealized plasma with an intrinsic spatial anisotropy, and show
that the bound is indeed violated, and that the amount of violation is directly related to the spatial anisotropy of the plasma. Given that the real quark-gluon plasma is produced in a highly
anisotropic situation — immediately after the nuclear collisions, the resulting plasma expands predominantly along the beam axis — the implications of this theoretical finding on the physics of
relativistic heavy-ion collisions is keenly awaited. The obvious challenge is to understand if any of the implications of Rebhan and Steineder’s calculations, which are based on a highly idealized
model, can be extrapolated to a “real life” experimental scenario. – Abhishek Agarwal | {"url":"http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.108.021601","timestamp":"2014-04-19T12:26:16Z","content_type":null,"content_length":"13508","record_id":"<urn:uuid:9882f5a2-79ab-4ce8-86b0-897fad17b147>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: The Contragredient
Joint with D. Vogan
Spherical Unitary Dual for Complex Classical Groups
Joint with D. Barbasch
The Contragredient
Problem: Compute the involution of the space of
L-homomorphisms corresponding to the contragredient,
i.e. given φ : W_F → ^L G, what is φ̌?
(Assume φ known . . . )
(Well defined, the same for all φ?)
Nowhere to be found ("much needed gap in the literature"),
even for F = R
Character: χ_π̌(g) = χ_π(g⁻¹)
Lemma: There is an automorphism C of ^L G satisfying: C(g)
is G-conjugate to g⁻¹ for g semisimple
(The Chevalley automorphism, extended to ^L G)
Lemma: there is an automorphism of W_ℝ taking each
g to an element W_ℝ-conjugate to g⁻¹ | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/965/2908842.html","timestamp":"2014-04-18T14:21:46Z","content_type":null,"content_length":"7710","record_id":"<urn:uuid:3ef3ac15-8b10-4200-a0cb-61ddfa333569>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
How To Prove It – Exercise 0.5
Solutions to Exercises in the Introduction of How To Prove It by Daniel J Velleman.
Problem (5): Use the table in Figure 1 and the discussion on Page 5 to find two more perfect numbers.
Euclid proved that if $2^n-1$ is a prime, then $2^{n-1}(2^n-1)$ is a perfect number.
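Euclid's construction can be checked by brute force for small $n$ (a throwaway sketch; the helper names are mine):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def proper_divisor_sum(n):
    """Sum of the divisors of n that are strictly less than n."""
    return sum(d for d in range(1, n) if n % d == 0)

for n in range(2, 8):
    if is_prime(2**n - 1):               # 2^n - 1 is a Mersenne prime
        p = 2**(n - 1) * (2**n - 1)      # Euclid's perfect number
        assert proper_divisor_sum(p) == p  # confirm it really is perfect
        print(n, p)  # n = 2, 3, 5, 7 give 6, 28, 496, 8128
```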
From the given table, we will take two values of $n$ for which $2^n-1$ is a prime number: 5 and 7.
When $n = 5$:
$2^{n-1}(2^n-1) = 2^4(2^5-1) = 496$, which is our first perfect number.
Similarly, when $n = 7$, we get the next perfect number: 8128. | {"url":"http://diovo.com/2012/10/how-to-prove-it-exercise-0-5/","timestamp":"2014-04-21T13:10:10Z","content_type":null,"content_length":"14699","record_id":"<urn:uuid:cc522773-f232-49ef-9bcb-c780b35e24fe>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
The Theory of Sets of Points: Second Edition
             
AMS Chelsea Publishing

From the Preface to the First Edition (1906): "There are no definitely accepted landmarks in the didactic treatment of Georg Cantor's magnificent theory, which is the subject of the present volume. A few of the most modern books on the Theory of Functions devote some pages to the establishment of certain results belonging to our subject, and required for the special purposes in hand ... But we may fairly claim that the present work is the first attempt at a systematic exposition of the subject as a whole."

In this second edition, notes have been added by I. Grattan-Guinness drawn from extensive annotations in the author's own copy. A further appendix has been added.

1972; 326 pp; hardcover
ISBN-13: 978-0-8284-0259-0
List Price: US$41
Member Price: US$36.90
Order Code: CHEL/259

Readership: Graduate students and research mathematicians.

• Rational and irrational numbers
• Representation of numbers on the straight line
• The descriptive theory of linear sets of points
• Potency, and the generalised idea of a cardinal number
• Content
• Order
• Cantor's numbers
• Preliminary notions of plane sets
• Regions and sets of regions
• Curves
• Potency of plane sets
• Plane content and area
• Length and linear content
• Appendices
• Bibliography
• Index of proper names
• General index | {"url":"http://cust-serv@ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-259","timestamp":"2014-04-16T10:16:28Z","content_type":null,"content_length":"15081","record_id":"<urn:uuid:560a4e9d-f13b-4e71-aaf6-1a2b7c71cae8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
AW: st: RE: wald tests with mfx
AW: st: RE: wald tests with mfx
From "Martin Weiss" <martin.weiss1@gmx.de>
To <statalist@hsphsun2.harvard.edu>
Subject AW: st: RE: wald tests with mfx
Date Mon, 13 Jul 2009 22:58:50 +0200
You are more than welcome! I did not endorse your research strategy as I do
not think that what you are attempting is necessary, but on a technical
level, you can get your hands on the desired returned result in this way:
Stata provides a full array of postestimation tools for every estimation
command, and this list is usually exhaustive. If you want the -vce- of the
original estimation, you can get it like this:
sysuse auto, clear
generate wgt=weight/1000
tobit mpg wgt gear_ratio, ll(17)
estat vce
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Rich Steinberg
Sent: Monday, 13 July 2009 22:51
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: wald tests with mfx
Thanks for responding, and so quickly. Good answer, but I learned from
following your advice that this reproduces only the standard errors, not
the covariances I would need for a Wald test. I don't see an e( ) that
saves the vce. And I can't use the vce from the original tobit because
the sample is changing.
Martin Weiss wrote:
> <>
> " For the latter, I know from the Help file that mfx saves what I need as
> e(Xmfx_se_dydx), but I can't figure out how to see that."
> ***
> sysuse auto, clear
> generate wgt = weight/100
> tobit mpg wgt len tu head, /*
> */ ll(17) ul(24)
> mfx compute, /*
> */ predict(e(17,24))
> mat l e(Xmfx_se_dydx)
> mat A= e(Xmfx_se_dydx)
> matrix list A
> di A[1,3]
> ***
> HTH
> Martin
> -----Original Message-----
> From: owner-statalist@hsphsun2.harvard.edu
> [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Rich Steinberg
> Sent: Montag, 13. Juli 2009 22:10
> To: statalist@hsphsun2.harvard.edu
> Subject: st: wald tests with mfx
> As a relatively unsophisticated user, I have tried for a few hours and
> failed to solve the following problem. After running tobit, I want to
> test for the equality of marginal effects for the unconditional
> observable dependent variable. No problem with mfx or dtobit. But I
> don't want to evaluate these marginal effects test at the mean, median
> or zero of the full sample prior to testing. Instead, I want to use the
> estimates from the full sample, but evaluate the marginal effect at
> means of various subsamples. So I have, for example:
> tobit totgiv $income $control, ll vce(cluster fid68)
> mfx if welfare01>0, pred(ystar(0,.))
> Now, I want to test for the equality of two elements of $income in this
> subsample. But everything I try works on the tobit coefficients, not
> the mfx output. So how do I retrieve this for a "test" command
> (ideally) or even display the vce from the mfx to do the test by hand?
> For the latter, I know from the Help file that mfx saves what I need as
> e(Xmfx_se_dydx), but I can't figure out how to see that. This should be
> easy, but stumped me.
> Thanks everyone.
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-07/msg00529.html","timestamp":"2014-04-20T23:43:56Z","content_type":null,"content_length":"10612","record_id":"<urn:uuid:18b7cfec-cd61-4fdf-9564-5d2debf03fb5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: GEOMETRIC PRESENTATIONS FOR THE PURE BRAID GROUP
Abstract. We give several new positive finite presentations for the pure braid group that are easy to remember and simple in form. All of our presentations involve a metric on the punctured disc so that the punctures are arranged "convexly", which is why we describe them as geometric presentations. Motivated by a presentation for the full braid group that we call the "rotation presentation", we introduce presentations for the pure braid group that we call the "twist presentation" and the "swing presentation". From the point of view of mapping class groups, the swing presentation can be interpreted as stating that the pure braid group is generated by a finite number of Dehn twists and that the only relations needed are the disjointness relation and the lantern relation.
The braid group has had a standard presentation on a minimal generating set ever since it was first defined by Emil Artin in the 1920s [3]. In 1998, Birman, Ko, and Lee [5] gave a more symmetrical presentation for the braid group on a larger generating set that has become fashionable of late (see, for example, [4], [7], [8], or [11]). Our goal is to apply a similar idea to the pure braid group. The standard finite presentation for the pure braid group (also due to Artin [2]) is slightly com- | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/035/1544925.html","timestamp":"2014-04-19T09:33:11Z","content_type":null,"content_length":"8442","record_id":"<urn:uuid:59198cba-7de1-40d0-9c5d-55922f270fed>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Exploring Quadratic Graphs - Problem 3
Graphing an equation like this is a little tricky, because there are a whole lot of things being done to x. You need to be really careful with the order of operations when you’re doing your substitution. It told me to make a table, so I’m going to choose some x values. Personally, I tend to make a lot of mistakes with negatives, I don’t know about you. So I’m going to start with x equals 0 and I’ll go from there.
If my x value was 0, my y number would be 0 take away 3, that’s -3, squared to get +9, plus 1 is 10; that’s the first point in my table. Then if I try 1: 1 take away 3 is -2, squared is 4, plus 1 is 5. 2 take away 3 is -1, -1 times itself is +1, plus 1 again is 2. I’m just moving along with my x numbers.
If I plug in x equals 3, I’ll have 0² plus 1. If I plug in 4: 4 take away 3 is 1, squared is 1, plus 1 is 2. I’m happy, because I started to find that symmetry. This 2 showed up again, so I know, without having to do any more Math, how to complete my table. 5 is going to be here, 10 is going to be here, matched up with my next consecutive x numbers 4, 5, 6.
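That table, and the symmetry short-cut, can be checked in a few lines of Python (a quick sketch; the function y = (x - 3)² + 1 is read off from the Vertex (3,1) and the y-intercept 10):

```python
# Build the table for y = (x - 3)**2 + 1 and confirm the symmetry
# about the vertex x = 3 that lets you skip half the arithmetic.
def y(x):
    return (x - 3) ** 2 + 1

table = {x: y(x) for x in range(0, 7)}
print(table)  # {0: 10, 1: 5, 2: 2, 3: 1, 4: 2, 5: 5, 6: 10}

# Points d steps left and right of the vertex have the same height.
for d in range(1, 4):
    assert table[3 - d] == table[3 + d]
```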
These problems, even though they involve a lot of Math here in the equation, can be filled in pretty quickly when you use short-cuts of symmetry. The last thing I know before I start making my graph is that I’m going to have the Vertex point (3,1). So let’s get these dots on the graph.
I’m going to start with (0,10). So 0 is my side-to-side number and 10 is my up-and-down number: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10; that’s my y-intercept. My next point was (1,5): 1, 2, 3, 4, 5. Then (2,2), and then I have my Vertex (3,1). That Vertex is really important, because that’s where I’m going to introduce my Axis of Symmetry and start putting dots on there without having to count any more.
This vertical line that goes right through my Vertex is not on the parabola, but it helps me graph these other dots. Like this guy is one away from the Axis of Symmetry, same height. The same thing here: equally as high but 2 away, 3 away; boom, those are the points from my table. I know them, and I didn’t even have to count; then I can draw my parabola that connects them. I missed that point, but you guys get the idea.
The reason this is useful is that symmetry can make these graphs go a lot more quickly. Your teacher will also be impressed if you can describe how you created this table, not by working it all out one by one, but by using patterns and symmetry. The patterns not only show up in the table, but they also show up in the graph. It will make your homework go a lot more quickly.
table quadratic shift | {"url":"https://www.brightstorm.com/math/algebra/quadratic-equations-and-functions/exploring-quadratic-graphs-problem-3/","timestamp":"2014-04-23T09:29:55Z","content_type":null,"content_length":"64869","record_id":"<urn:uuid:32eec5db-4d34-4fde-be02-d219f94b4e8c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US20020118874 - Apparatus and method for taking dimensions of 3D object
[0001] The present invention generally relates to an apparatus and method for taking the dimensions of a 3D rectangular moving object; and, more particularly, to an apparatus for taking the
dimensions of the 3D rectangular moving object in which a 3D object is sensed, an image of the 3D object is captured and features of the object are then extracted to take the dimensions of the 3D
object, using an image processing technology.
[0002] Traditional methods of taking the dimensions include a manual method using a tape measure, etc. However, as this method is intended for an object that is not moving, it is difficult to apply it to an object in a moving conveyor environment.
[0003] In U.S. Pat. No. 5,991,041, Mark R. Woodworth describes a method of taking the dimensions using a light curtain for taking the height of an object and two laser range finders for taking the
right and left sides of the object. In the method, as the object of a rectangular shape is conveyed, values taken by respective sensors are reconstructed to take the length, width and height of the
object. This method is advantageous in taking the dimensions of a moving object such as an object on the conveyor. However, there is a problem in that it is difficult to take the dimensions of a still object.
[0004] In U.S. Pat. No. 5,661,561 issued to Albert Wurz, John E. Romaine and David L. Martin, a scanned, triangulated CCD (charge coupled device) camera/laser diode combination is used to capture the height profile of an object as it passes through the system. This system, which carries a dual DSP (digital signal processing) processor board, then calculates the length, width, height, volume and position of the object (or package) based on this data. This method belongs to a transitional stage in which laser-based dimensioning technology moves to camera-based dimensioning technology. However, this system, being united with the laser technology, has the disadvantage of being difficult to embody in hardware.
[0005] U.S. Pat. No. 5,719,678 issued to Reynolds et al. discloses a method for automatically determining the volume of an object. This volume measurement system includes a height sensor and a width
sensor positioned in generally orthogonal relationship. Therein, CCD sensors are employed as the height sensor and the width sensor. Of course, the mentioned height sensor can adopt a laser sensor to
measure the height of the object.
[0006] U.S. Pat. No. 5,854,679 is concerned with a technology using only cameras, which employs plane images obtained from the top of the conveyor and lateral images obtained from the side of the
conveyor belt. As a result, these systems employ a parallel processing system in which individual cameras are each connected to independent systems in order to take the dimensions at rapid speed and
high accuracy. However, this has the disadvantage that both the scale of the system and the cost of embodying it increase.
[0007] Therefore, it is a purpose of the present invention to provide an apparatus and method for taking dimensions of a 3D object in which the dimensions of a still object as well as a moving object
on a conveyor can be taken.
[0008] In accordance with an aspect of the present invention, there is provided an apparatus for taking dimensions of a 3D object, comprising: an image input device for obtaining an object image
having the 3D object; an image processing device for detecting all edges within a region of interest of the 3D object based on the object image obtained in said image input device; a feature
extracting device for extracting line segments of the 3D object and features of the object from the line segments based on the edges detected in said image processing device; and a dimensioning
device for generating 3D models using the features of the 3D object and for taking the dimensions of the 3D object from the 3D models.
[0009] In accordance with another aspect of the present invention, there is provided a method of taking dimensions of a 3D object, the method comprising the steps of: a) obtaining an object image
having the 3D object; b) detecting all edges within a region of interest of the 3D object; c) extracting line segments from the edges of the 3D object and then extracting features of the 3D object
from the line segments; and d) generating 3D models based on the features of the 3D object and taking the dimensions of the 3D object from the 3D models.
[0010] In accordance with further another aspect of the present invention, there is provided a computer-readable recording medium storing instructions for executing a method of taking dimensions of a
3D object, the method comprising the steps of: a) obtaining an object image having the 3D object; b) detecting all edges within a region of interest of the 3D object; c) extracting line segments from
the edges of the 3D object and then extracting features of the 3D object from the line segments; and d) generating 3D models based on the features of the 3D object and taking the dimensions of the 3D
object from the 3D models.
[0011] Other objects and aspects of the invention will become apparent from the following description of the embodiments with reference to the accompanying drawings, in which:
[0012]FIG. 1 illustrates a system for taking the dimensions of a 3D moving object applied to the present invention;
[0013]FIG. 2 is a block diagram of a dimensioning apparatus for taking the dimensions of 3D moving object based on a single CCD camera according to the present invention;
[0014]FIG. 3 is a flow chart illustrating a method of extracting a region of interest (ROI) in a region of interest extraction unit and in an object sensing unit;
[0015]FIG. 4 is a flowchart illustrating a method of detecting an edge in an edge detecting unit of the image processing device;
[0016]FIG. 5 is a flowchart illustrating a method of extracting line segments in a line segments extraction unit and a method of extracting features in a feature extraction unit;
[0017]FIG. 6 is a diagram of an example of the captured 3D object;
[0018]FIG. 7 is a flow chart illustrating a process of taking the dimensions in a dimensioning device; and
[0019]FIG. 8 shows geometrically the relationship in which points of 3D object are mapped on two-dimensional images via a ray of a camera.
[0020] Hereinafter, the present invention will be described in detail with reference to accompanying drawings, in which the same reference numerals are used to identify the same element.
[0021] Referring to FIG. 1, a system for taking the dimensions of 3D moving object includes a conveyor belt 2 for moving the 3D rectangular object 1, a camera 3 installed over the conveyor belt 2 for
taking an image of the 3D rectangular object 1, a device 4 for supporting the camera 3 and a dimensioning apparatus 5 which is coupled to the camera 3 and includes an input/output device, e.g., a
monitor 6 and a keyboard 7.
[0022]FIG. 2 illustrates a dimensioning apparatus for taking the dimensions of a 3D moving object based on a single CCD camera according to the present invention,
[0023] Referring to FIG. 2, the dimensioning apparatus according to the present invention includes an image input device 110 for capturing an image of a desired 3D object, an object sensing device
120 for sensing the 3D object through the image inputted via the image input device 110 to perform an image preprocessing, an image processing device 130 for extracting a region of interest (ROI) and
detecting the edges, a feature extracting device 140 for extracting line segments and features within the region of interest (ROI), a dimensioning device 150 for calculating the dimensions of the object based on the result of the image processing device and generating a 3D model of the object, and a storage device 160 for storing the result of the dimensioning device. The generated 3D model of the object is then displayed on the monitor 6.
[0024] The image input device 110 includes the camera 3 and a frame grabber 111. Also, the image input device may further include at least an assistant camera. The camera 3 may include XC-7500
progressive CCD camera having a resolution of 758×582 and capable of producing 256 gray levels, manufactured by Sony Co., Ltd. (Japan). The image is converted into digital data by a frame grabber 111, e.g., a MATROX METEOR II type. At this time, parameters of the image may be extracted using the MATROX MIL32 Library under a Windows 98 environment.
[0025] The object sensing device 120 compares an object image obtained by the image input device 110 with a background image. The object sensing device 120 includes an object sensing unit 121 and an
image preprocessing unit 123 for performing a preprocessing operation for the image of the sensed object.
[0026] The image processing device 130 includes a region of interest (ROI) extraction unit 131 for extracting 3D object regions, and an edge detection unit 133 for extracting all the edges within the
located region of interest (ROI).
[0027] The feature extracting device 140 includes a line segment extraction unit 141 for extracting line segments from the result of detecting the edges and a feature extraction unit 143 for
extracting features (or vertexes) of the object from the outmost intersection of the extracted line segments.
[0028] The dimensioning device 150 includes a dimensioning unit 151 for obtaining a world coordinate on the two-dimensional plane and the height of the object from the features of the 3D object
obtained from the image to calculate the dimensions of the object, and a 3D model generating unit 153 for modeling the 3D shape of the object from the obtained world coordinate.
[0029] A method of taking the dimensions of the 3D object in the system for taking the dimensions of 3D moving object will be now explained.
[0030] The image input device 110 performs an image capture for the 3D rectangular object 1. The 3D object 1 is conveyed by means of a conveyor (not shown). At this time, the image input device 110 continuously captures images and then transmits the obtained images to the object sensing device 120 and the image processing device 130.
[0031] The object sensing device 120 continuously receives images from the image input device 110 and then determines whether there exists an object. If the object sensing unit 121 determines that there is an object, the image preprocessing unit 123 performs noise reduction on the image of the object. If there is no object, the image preprocessing unit 123 does not operate but transmits a control signal to the image input device 110 to repeatedly perform the image capture process.
[0032] The image processing device 130 compares the object image from the image obtained by the image input device 110 with the background image to extract a region of a 3D object and to detect all
the edges within the located region of interest (ROI).
[0033] At this time, locating the object region is performed by a method of comparing the previously stored background image and an image including an object.
[0034] The edge detection unit 133 in the image processing device 130 performs an edge detection process based on statistical characteristics of the image. The edge detection method using the statistical characteristics can perform edge detection that is insensitive to variations of external illumination. In order to rapidly extract the edges, candidate edge pixels are estimated, and the size and direction of the edge are determined for the estimated edge pixels.
[0035] The feature extracting device 140 extracts line segments of the 3D object and then extracts features of the object from the line segments.
[0036]FIG. 3 is a flow chart illustrating a method of extracting a region of interest (ROI) in the region of interest extraction unit 131 and of sensing an object in the object sensing unit 121.
[0037] Referring now to FIG. 3, first, a difference image between the image including the object obtained in the image input device 110 and the background image is obtained at steps S301, S303 and S
305. Then, a projection histogram is generated for each of a horizontal axis and a vertical axis of the obtained difference image at step S307. Next, a maximum area section for each of the horizontal
axis and the vertical axis is obtained from the generated projection histogram at step S309. Finally, a region of interest (ROI), being an intersection region, is obtained from the maximum area
section of each of the horizontal axis and the vertical axis at step S311. After the region of interest (ROI) is obtained, in order to determine whether there is any object, the average and variance
values within the region of interest (ROI) are calculated at step S313. Finally, as the results of the determination, if there is an object, i.e., the mean value is larger than a first threshold and
the variance value is larger than a second threshold, the located region of interest (ROI) is used as an input to the image processing device 130. If not, the object sensing unit 121 continuously
extracts the region of interest (ROI).
[0038]FIG. 4 is a flow chart illustrating a method of detecting an edge in the edge detection unit 133 of the image processing device 130.
[0039] Referring to FIG. 4, the method of detecting an edge roughly includes a step of extracting statistical characteristics of an image for determining the threshold value, a step of determining
candidate edge pixels and edge detection pixels and a step of connecting the detected edge pixels to remove edge pixels having a short length.
[0040] In more detail, if an image of N×N size is first inputted at step S401, the image is sampled by a specific number of pixels at step S403. Then, an average value and a variance value of the
sampled pixels are calculated at step S405, and these values are then taken as the statistical features of the current image. A threshold value Th1 is determined
based on statistical characteristics of the image at step S407.
[0041] Meanwhile, if the statistical characteristics of the image is determined, candidate edge pixels for all the pixels of the inputted image are determined. For this, the maximum value and the
minimum value among the values between eight pixels neighboring to the current pixel x are detected at step S409. Then, the difference between the maximum value and the minimum value is compared with
the threshold value (Th1) at step S411. The threshold value (Th1) is set based on the statistical characteristics of the image, as mentioned above.
[0042] As a result of the determination in step S411, if the difference between the maximum value and the minimum value is greater than the threshold value (Th1), it is determined that the corresponding pixel is a candidate edge pixel and the process proceeds to step S413. Meanwhile, if the difference between the maximum value and the minimum value is smaller than the threshold value (Th1), the corresponding pixel is determined to be a non-edge pixel and is stored in the non-edge pixel database.
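Steps S409-S411 can be sketched as follows (illustrative only, not the patent's code; border pixels are simply skipped, and the threshold is passed in rather than derived from the sampled statistics):

```python
import numpy as np

def candidate_edge_mask(gray, th1):
    """Mark pixel x as a candidate edge pixel when max - min over its
    eight neighbours exceeds the threshold Th1 (steps S409-S411)."""
    g = gray.astype(float)
    h, w = g.shape
    mask = np.zeros((h, w), dtype=bool)
    for yy in range(1, h - 1):
        for xx in range(1, w - 1):
            window = g[yy - 1:yy + 2, xx - 1:xx + 2].flatten()
            neighbours = np.delete(window, 4)  # drop the centre pixel
            if neighbours.max() - neighbours.min() > th1:
                mask[yy, xx] = True
    return mask
```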
[0043] If the corresponding pixel is a candidate edge pixel, the size and direction of the edge is determined using a sobel operator [Reference: ‘Machine Vision’ by Ramesh Jain] at step S413. In the
step S413, the direction of the edge is represented using a gray level similarity code (GLSC).
[0044] After the direction of the edge is represented, edges having a different direction from neighboring edges among these determined edges are removed at step S415. This process is called an edge
non-maximal suppression process. At this time, an edge lookup table is used. Finally, the remaining candidate edge pixels are determined at step S417. Then, if the connected length is greater than the threshold value Th2 at step S419, an edge pixel is finally determined and is then stored in the edge pixel database. On the contrary, if the connected length is smaller than the threshold value Th2, the pixel is determined to be a non-edge pixel, which is then stored in the non-edge pixel database. The pixels determined as edge pixels by this method represent the edge portions of an object or of the background.
[0045] After the edges of the 3D object are detected, each edge has a thickness of one pixel. Line segment vectors are extracted in the line segment extraction unit 141, and features for taking the dimensions are then extracted from the line segments in the feature extraction unit 143.
[0046]FIG. 5 is a flow chart illustrating a process of extracting line segments in the line segment extraction unit 141 and a process of extracting features in the feature extraction unit 143.
[0047] Referring to FIG. 5, if a set of edge pixels of the 3D object obtained in the image processing device 130 is inputted at step S501, the set of edge pixels is divided into a number of straight-line vectors. At this time, the set of connected edge pixels is divided into straight-line vectors using a polygon approximation at step S503. Line segments are then fitted to the divided straight-line vectors using singular value decomposition (SVD) at step S507. The polygon approximation and the SVD are described in ‘Machine Vision’ by Ramesh Jain, Rangachar Kasturi and Brian G. Schunck, pp. 194-199, 1995; as they are not the subject matter of the present invention, their detailed description is omitted. After the above procedures have been performed for the whole list of edges at step S509, the extracted straight-line vectors are recombined with separate neighboring straight lines at step S511.
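One standard way to realize the SVD line fitting mentioned above is a total least squares fit: the line passes through the centroid of the points along the first right singular vector (a generic sketch, not the patent's own code):

```python
import numpy as np

def fit_line_svd(points):
    """Total-least-squares line fit: centre the points, take the SVD,
    and use the first right singular vector as the line direction."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]  # a point on the line and its unit direction

# Points lying exactly on y = 2x + 1 recover slope 2.
c, d = fit_line_svd([(0, 1), (1, 3), (2, 5), (3, 7)])
print(c, d[1] / d[0])  # centroid (1.5, 4.0), slope 2.0
```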
[0048] Once the line segments constituting the 3D object have been extracted, the feature extraction unit 143 performs a feature extraction process. After the outermost line segments of the object are found from the extracted line segments at step S513, the outermost vertexes between the outermost line segments are detected at step S515. Thus, the outermost vertexes are determined to be candidate features at step S517. Through these feature extraction processes, damage and blurring effects due to distortion of the shape of the 3D object image can be compensated for.
[0049] Next, the dimensioning device 150 takes the dimensions of a corresponding object from the feature extracting device 140. A process of taking the dimensions in a dimensioning device will be
described with reference to FIGS. 6 and 7.
[0050]FIG. 6 is a diagram of an example of the captured 3D object on a 2D image.
[0051] Referring to FIG. 6, reference numerals 601 to 606 denote the outermost vertexes of the captured 3D object, respectively. The point 601 is the point whose x coordinate on the image is smallest, and the point 604 is the point whose x coordinate on the image is greatest.
[0052]FIG. 7 is a flow chart illustrating a process of taking the dimensions in a dimensioning device.
[0053] First, among the outermost vertexes 601 to 606 of the object obtained in the feature extracting device, the point 601 having the smallest x coordinate value is selected at step S701. Then, the inclinations between neighboring vertexes are compared at step S703 to select a path including both the point 601 and the greater inclination. That is, if the inclination between the points 601 and 602 is larger than the inclination between the points 601 and 606 in the 3D object, the path made by 601, 602, 603 and 604 is selected at step S705. On the contrary, if the inclination between the points 601 and 602 is smaller than the inclination between the points 601 and 606, the other path, made by 601, 606, 605 and 604, is selected. Next, assume that the points on the bottom plane corresponding to the points 601, 602, 603 and 604 are w1, w2, w3 and w4. If the path made by 601, 602, 603 and 604 is selected, the point 603 coincides with w3 and the point 604 coincides with w4. The world coordinates of the two points 603 and 604 may be obtained using a calibration matrix. For example, Tsai's method may be used for the calibration. Tsai's method is described in more detail in an article by R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf camera and lenses”, IEEE Trans. Robotics and Automation, 3(4), August 1987. Through this calibration process, a one-to-one mapping is established between world coordinates on the plane on which the object is located and image coordinates. Also, the x and y coordinates of w2 are the same as those of w3. Therefore, the world coordinates of w2 can be obtained by calculating the height between w2 and w3. After the coordinates of w2 are obtained, the orthogonal point w1 between the point 601 and the bottom plane is obtained. Finally, the length of the object is determined by w1 and w3. The width of the 3D object can be obtained by obtaining the length between w3 and w4.
[0054] FIG. 8 shows the basic model for the projection of points in the scene, with 3D object 801, onto the image plane. In FIG. 8, the point f is the position of the camera and the point O is the
origin of the world coordinate system. As the two points q and s in the world coordinate system (WCS) lie on the same ray 2, they are projected onto the same point p on the image plane 802. Given
the real-world coordinates on the S-plane 803 on which the 3D object is placed, the height H of the camera, and the origin of the world coordinate system, we can determine the height h of the object
between the point q on the ray 2 and the point q′ on the S-plane 803, by the following method.
[0055] Referring to FIG. 8, three points O, f and s make a triangle, and another three points q, q′ and s make another triangle. The ratio of the corresponding sides of two triangles must be the
same, because these two triangles are similar. The height of the object can therefore be calculated by the following equation (1).
[0056] where H is a height from the point O to the position of the camera f, D is a length from the point O to the point s, and d is a length from the point q′ to the point s.
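The similar-triangles argument in [0055] can be sketched directly in code. The equation bodies were lost from this copy of the text, so the ratio h / H = d / D used below is an assumption reconstructed from the triangle description and the variable definitions in [0056]; the function name is ours, not the patent's.

```python
def object_height(H, D, d):
    """Height h of point q above the S-plane.

    Triangles (O, f, s) and (q, q', s) are similar, so h / H = d / D,
    where H is the camera height above O, D the distance from O to s,
    and d the distance from q' to s (all as defined in [0056]).
    """
    if not (0 <= d < D):
        raise ValueError("q' must lie between O and s, so 0 <= d < D")
    return H * d / D

# Example: camera 3.0 m above O, ray meets the S-plane 4.0 m away at s;
# a point whose foot q' is 1.0 m from s lies 0.75 m above the plane.
h = object_height(H=3.0, D=4.0, d=1.0)
```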
[0057] Also, equation (1) can be transformed into the following equation (2).
[0058] Unlike the height, the width and the length of the object can be calculated directly by using calibrated points on the S-plane. In particular, when the camera can directly view the sides that
carry the width and the length of the object, the above methods, including the two equations, are effective. However, we can also suppose the case in which the camera cannot directly view the side
that carries the length of the object. In this case, other methods or equations are needed and should be derived. As in equations (1) and (2), points on the S-plane are used. Referring to FIG. 8,
the first triangle made by the three points O, s and t is similar to the second triangle made by the three points O, q′ and r′. Using this trigonometric relationship, the angle theta at the vertex O
of the triangle tOs can be calculated by the following equation (3).
[0059] Also, with this theta, the length between two points q′ and r′ is determined by the following equation (4).
{overscore (q′r′)}={square root over (A^2+(D−d)^2−2A(D−d) cos θ)} (4)
[0060] As mentioned above, in the present invention a single CCD camera is used both to sense the 3D object and to take the dimensions of the object, and no additional sensors are necessary for
sensing the object. Therefore, the present invention can be applied to sense both moving objects and still objects. The present invention can reduce not only the cost of system installation but also
the size of the system.
[0061] Although the preferred embodiments of the invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and
substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims. | {"url":"http://www.google.co.uk/patents/US20020118874","timestamp":"2014-04-16T13:06:18Z","content_type":null,"content_length":"99786","record_id":"<urn:uuid:18a3625f-0cf6-43bd-b23e-9b8f79b9b80b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00015-ip-10-147-4-33.ec2.internal.warc.gz"} |
A matter of trust?
Consider the following expression:
sCARA4 := -ln(-(mu/sigma^2)^(mu^2/(mu-sigma^2))*(sigma^2/mu)^(mu^2/(mu-sigma^2))+(sigma^2/mu)^(mu^2/(mu-sigma^2))*((exp(phi)*sigma^2+mu-sigma^2)*exp(-phi)/sigma^2)^(mu^2/(mu-sigma^2))+1)/phi;
Now try to find out whether the first derivative to mu is positive for all positive mu, phi and sigma, except for some rare exceptions (e.g. sigma^2=mu).
Since 'is' is not a satisfying option, I used 'Explore' to check that, for mu=1.0 .. 100, sigma=1.0 .. 100, phi=1.0 .. 10.
Now, e.g., Explore outputs for mu=29.38, phi=1.0, sigma=1.0 the value -2.499130625 and thus a negative value, which was not expected. Maple also reports a negative result for some other parameter combinations.
Anyway, when you examine the expression sCARA4 in more detail, you can simplify it by hand (because Maple is somehow not able to recognize that the first term in the logarithm equals -1 and therefore
it cancels with the +1 at the end):
If you then apply diff(%,mu) to that simplified expression, the output is completely different!
First of all, it is always positive (as expected, except for the mentioned exceptions).
Second, the value for mu=29.38, phi=1.0, sigma=1.0 is 0.9991907709.
Well now I ask myself whether I neglected anything, or whether this is finally a matter of trust in Maple's ability to differentiate and/or Explore correctly?
Thanks for clarification. | {"url":"http://www.mapleprimes.com/questions/94834-A-Matter-Of-Trust","timestamp":"2014-04-17T15:51:08Z","content_type":null,"content_length":"51571","record_id":"<urn:uuid:f9dac071-82e8-4c76-85bf-45061bdc86db>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
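One cheap way to decide which of two conflicting symbolic derivatives to trust is a finite-difference cross-check at the suspicious point. This is a hedged sketch of that technique using a small stand-in function rather than the full sCARA4 expression; the same recipe applies to the real expression evaluated at mu=29.38, phi=1.0, sigma=1.0.

```python
import math

def numeric_diff(f, x, h=1e-6):
    # Central difference: (f(x + h) - f(x - h)) / (2h) approximates f'(x)
    # with an error of order h**2.
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Toy stand-in for an expression like sCARA4: f(mu) = -ln(mu) / phi,
# whose derivative is -1 / (mu * phi).
phi = 1.0
f = lambda mu: -math.log(mu) / phi
f_prime = lambda mu: -1.0 / (mu * phi)

mu0 = 29.38
approx = numeric_diff(f, mu0)   # finite-difference estimate
exact = f_prime(mu0)            # symbolic derivative, evaluated
```

If the finite-difference estimate agrees with one of the two symbolic results and not the other, that is strong evidence about which differentiation went wrong.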
Need help on maximum/minimum values
November 7th 2009, 04:49 PM #1
Nov 2009
Need help on maximum/minimum values
I am having some trouble on one of my homework problems. I can't figure out what the equation should be. Any help you can give will be wonderful!!
A racer can cycle around a circular loop at the rate of 2 revolutions per hour. Another cyclist can cycle the same loop at the rate of 5 revolutions per hour. If they start at the same time (t=
0), at what first time are they farthest apart?
I am having some trouble on one of my homework problems. I can't figure out what the equation should be. Any help you can give will be wonderful!!
A racer can cycle around a circular loop at the rate of 2 revolutions per hour. Another cyclist can cycle the same loop at the rate of 5 revolutions per hour. If they start at the same time (t=
0), at what first time are they farthest apart?
see the following "very similar" problem ...
I am having some trouble on one of my homework problems. I can't figure out what the equation should be. Any help you can give will be wonderful!!
A racer can cycle around a circular loop at the rate of 2 revolutions per hour. Another cyclist can cycle the same loop at the rate of 5 revolutions per hour. If they start at the same time (t=
0), at what first time are they farthest apart?
As both cyclists are getting apart from each other at a velocity of 3 rev./h and they'll be the farthest apart when they'll be at the extreme points of a diameter of the circular loop, you only
have to calculate when the faster cyclist will complete one half of a loop with respect to the slower one...
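The reasoning above can be checked numerically: the angular gap grows at 5 - 2 = 3 revolutions per hour, and the straight-line distance between the riders peaks when that gap reaches half a revolution, at t = 1/6 hour (10 minutes). A small sketch, assuming a unit-radius loop (the time of the maximum does not depend on the radius):

```python
import math

def separation(t, r1=2.0, r2=5.0, R=1.0):
    """Straight-line distance between two riders on a circle of radius R
    after t hours, riding at r1 and r2 revolutions per hour."""
    a1 = 2.0 * math.pi * r1 * t
    a2 = 2.0 * math.pi * r2 * t
    return math.hypot(R * math.cos(a1) - R * math.cos(a2),
                      R * math.sin(a1) - R * math.sin(a2))

# Scan the first ~20 minutes; the first maximum should land at t = 1/6 h,
# where the riders are diametrically opposite (distance 2R).
times = [k / 10000.0 for k in range(1, 3334)]
best = max(times, key=separation)
```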
How To Prove It – Exercise 0.5
Solutions to Exercises in the Introduction of How To Prove It by Daniel J Velleman.
Problem (5): Use the table in Figure 1 and the discussion on Page 5 to find two more perfect numbers.
Euclid proved that if $2^n-1$ is a prime, then $2^{n-1}(2^n-1)$ is a perfect number.
From the given table, we will take two numbers such that $2^n-1$ is a prime number: 5 and 7.
When $n = 5$:
$2^{n-1}(2^n-1) = 2^4(2^5-1)$ = 496, which is our first perfect number.
Similarly when n = 7, we get the next perfect number as 8128. | {"url":"http://diovo.com/2012/10/how-to-prove-it-exercise-0-5/","timestamp":"2014-04-21T13:10:10Z","content_type":null,"content_length":"14699","record_id":"<urn:uuid:cc522773-f232-49ef-9bcb-c780b35e24fe>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
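Euclid's recipe is easy to automate. A short sketch that scans n up to a limit, keeps the n for which $2^n-1$ is prime, and emits the corresponding perfect numbers (the helper names are ours):

```python
def is_prime(n):
    # Trial division; fine for the small Mersenne candidates used here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def euclid_perfect_numbers(limit):
    """Perfect numbers 2^(n-1) * (2^n - 1) for each n <= limit
    such that 2^n - 1 is prime (a Mersenne prime)."""
    return [2 ** (n - 1) * (2 ** n - 1)
            for n in range(2, limit + 1) if is_prime(2 ** n - 1)]
```

Here `euclid_perfect_numbers(7)` returns [6, 28, 496, 8128], reproducing the two perfect numbers found above.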
College Algebra: An Introduction to Inverses Video | MindBites
College Algebra: An Introduction to Inverses
About this Lesson
• Type: Video Tutorial
• Length: 6:39
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 72 MB
• Posted: 06/26/2009
This lesson is part of the following series:
College Algebra: Full Course (258 lessons, $198.00)
College Algebra: Systems of Equations (33 lessons, $44.55)
College Algebra: Inverses and Matrices (5 lessons, $7.92)
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, College Algebra. This course and others are available from Thinkwell, Inc. The full course can be
found athttp://www.thinkwell.com/student/product/collegealgebra. The full course covers equations and inequalities, relations and functions, polynomial and rational functions, exponential and
logarithmic functions, systems of equations, conic sections and a variety of other AP algebra, advanced algebra and Algebra II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College.
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart
of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals,
including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the
theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
So now when you think about just regular numbers, okay, when you think about multiplication, the opposite of multiplication is division. And how does that work exactly? Well if you have a number like
5, it turns out that there's a special number that, if I multiply it by 5, I still get 5. And that special number is called the multiplicative identity, a.k.a. 1. So in fact, 1 has the property that,
if you multiply it by any number, whether on the right or the left, you still get that number. And as a result, you can ask for the multiplicative inverse, the inverse of 5, which would be 1/5. And why
is the inverse a 1/5? Because if I take 5 and I multiply it by 1/5, what I see is the identity, 1. So these two numbers are multiplicative inverses, because their product gives me the multiplicative
identity, 1. The identity has the property that 5 times the thing gives me 5, and the thing times the 5 gives me 5.
Well now I want to start taking a look at the analogues of these ideas with square matrices. So the first idea is what does the identity matrix look like? That would be the matrix, which has the
property that, if I multiply it by another matrix, I get back the other matrix. The identity doesn't change the value of the matrix. And then, the follow up question is how do you find the inverses?
How do you have the analogue of 1/5 in this context?
Okay, well let me first of all tell you what the identity matrix looks like. So the analogue of 1, so the identity matrix is a matrix that just has ones along the diagonal, and zeros everywhere else.
Zeros everywhere, except along the main diagonal there, in which case you have ones. So for example, a two by two identity matrix would look like this, ones along the diagonal, zeros everywhere else.
Is that really an identity matrix? Well let's see. Let's multiply it by a two by two matrix and see what happens. So let's take 3, 5, 5, 8, and do matrix multiplication, and see if we actually end up
with this again. If this is supposed to act like 1, then 1 times anything should give me the anything again. Let's see what we get. Well remember how matrix multiplication works? I sort of do this
kind of thing. So I take 3 x 1 + 0 x 5. Well that's 3. To find out what goes here, I take this row with this column, and I see 1 x 5 + 0 x 8. So I see 5. To get the second row, first column, I go to
the second row, first column. So I do this activity, which is 0 + 5, which is 5. And finally I do 0 + 8, which is 8. And look, this is the same as that. So in fact, this does act like an identity
when you multiply.
Okay, cool, so there's the identity matrix. It's the matrix that just has ones along this diagonal and 0 everywhere else. What about inverses? Well let me show you what an inverse would look like. In
fact let me just start anew with an example. Let's multiply these people out. Let's take 3, 5, 5, 8, and let's multiply it by -8, 5, 5, -3. This is a completely different matrix you'll notice. But
let's just do the matrix multiplication and see what we get. Here I see 3 x -8, which is -24. And then I add 5 x 5, which is 25. So I have -24 + 25. That's just 1. What would I have here? Well I do
this thing. I see 3 x 5, and then a 5 x -3, so that's 15 + -15. That's 0. What do I have here? Well I do this and I see 5 x -8, which is -40 plus 8 x 5, which is 40. So -40 + 40 = 0. And here I have
25 - 24, which is 1. So look, I get the identity matrix. That means that this matrix must be the inverse matrix of this one. And why, because their product gives me the identity; just like with
numbers, 1/5 is the inverse of 5, because 1/5 x 5 equals the identity, 1. In this case the identity matrix looks like this. And so in order to actually have this be the inverse, we must have the product of
these two things actually give me this.
Well the question now is how do you actually find the inverse? It's not just the reciprocal of every single element there. So how do you actually find the inverse of a matrix? And what's sort of the
analogue of like 0? You know you can't find the multiplicative inverse of 0, right, because 1 over 0 is undefined. Well it turns out the analogue of that here is whether the matrix is singular or
not. Now remember a matrix is singular if its determinant is 0. And it turns out that the only matrices that have inverses are those that are nonsingular. So the only matrices that have inverses are
those for which the determinant is not 0. So before, like you know you can't divide by 0, the same thing here. You can't have an inverse of a matrix whose determinant is 0. But notice the determinant
of this, that's easy to see, it's 3 x 8, which is 24. And then I subtract off 25, so the determinant of this is -1, which is not 0, so it should have an inverse. And in fact, I just happen to know
what it is. It equals this. So you can tell if a matrix is invertible or not, has an inverse, by just looking at its determinant. If its determinant is 0, it cannot be inverted. You cannot find its
multiplicative inverse. And if the determinant is not 0, then you can find its inverse. The question now is how. How did I know this is the right matrix that's the inverse of this one? Coming up
next, I'll show you the secret for the two by two case, and then later I'll show you the secret for the three by three case and even higher. I'll see you there.
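The two worked multiplications in the transcript are easy to replay in code. This sketch hard-codes the 2-by-2 row-by-column product rule used in the lecture and checks both claims: that I·A returns A, and that A times the proposed inverse returns I (plain nested lists here; a linear-algebra library would do the same job).

```python
def matmul2(A, B):
    # 2x2 matrix product, row-by-column exactly as in the lecture.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

I = [[1, 0], [0, 1]]
A = [[3, 5], [5, 8]]
A_inv = [[-8, 5], [5, -3]]

identity_check = matmul2(I, A)     # I * A should give A back
inverse_check = matmul2(A, A_inv)  # A * A_inv should give the identity
```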
22 December 1997 Vol. 2, No. 51
THE MATH FORUM INTERNET NEWS
SimCalc | Non-Euclidean Geometry | Kwanzaa Math / Star of David
SIMULATIONS FOR CALCULUS LEARNING - SIMCALC
A knowledge of the mathematics of change is vitally
important to living and working in a rapidly evolving
democratic society. Problems involving rates, accumulation,
approximations, and limits appear in everyday situations
involving money, motion, planning, nutrition - virtually
any situation where varying quantities appear.
SimCalc aims to democratize access to the mathematics of
change for all students, providing software that combines
advanced simulation technology with innovative curricular
solutions. The project begins in the early grades with
powerful ideas that extend beyond both classical calculus
and traditional calculus reform.
SimCalc MathWorlds v1.1b4 for the Macintosh was released
in November, 1997 and is available for download. You can
also obtain a current version from the Math Forum's FTP
NON-EUCLIDEAN GEOMETRY: SELECTED RESOURCES
NONEUCLID - JOEL CASTELLANOS
A software simulation that offers straightedge and compass
constructions in hyperbolic geometry for use in high school
and undergraduate education. This site offers an introduction,
over 25 pages of illustrated hypertext, exercises, and a
discussion of why it's important for students to study
hyperbolic geometry. Basic concepts include:
- Non-Euclidean Geometry
- The Shape of Space
- The Pseudosphere
- Parallel Lines
- Postulates and Proofs
- Area
- X-Y Coordinate System
NonEuclid software for Windows may be downloaded from the site.
Mac users might enjoy KALEIDOTILE, a tiling program based on
the Geometry Center's display at the St. Paul Science Museum.
You can use KaleidoTile to create and manipulate tessellations
of the sphere, Euclidean plane, and hyperbolic plane.
A base sketch and downloadable scripts for interactive
investigation of hyperbolic geometry using the Poincaré disk
model. For example, one can easily discover that the
construction of the incircle of a triangle that works in
the Euclidean plane also works in the hyperbolic plane.
An exercise that helps students see that since angular excess
corresponds to negative curvature, the hyperbolic plane is
a negatively curved space.
"Non-Euclidean Geometry," an essay covering the history of
this subject from Euclid's Elements through Riemann's
spherical geometry, can be found via the MACTUTOR HISTORY
OF MATHEMATICS ARCHIVE:
Last but not least, see David Eppstein's GEOMETRY JUNKYARD for
more links to sites about hyperbolic geometry on the Web:
KWANZAA MATH: THE MKEKA - DEBORAH LEWIS & CATHERINE WESTER
Students apply their knowledge of math to create a mkeka,
the traditional woven mat that is one of the seven symbols
of Kwanzaa. They may then count all the rectangles and find
squares in the mkeka.
STAR OF DAVID - JUDY BROWN
Students find the total number of triangles in the Star of
David pictured at the top of the page, and then count the
quadrilaterals and hexagons in the star.
Other Star of David activities might be created using:
Coffee can geometry
Find the angles
Find the hidden shapes
Continuing Judy Brown's 12 Days of Christmas activity, see
"The Twelve Days of Christmas and Pascal's Triangle":
CHECK OUT OUR WEB SITE:
The Math Forum http://mathforum.org/
Ask Dr. Math http://mathforum.org/dr.math/
Problem of the Week http://mathforum.org/geopow/
Internet Resources http://mathforum.org/~steve/
Join the Math Forum http://mathforum.org/join.forum.html
Send comments to the Math Forum Internet Newsletter editors
_o \o_ __| \ / |__ o _ o/ \o/
__|- __/ \__/o \o | o/ o/__/ /\ /| |
\ \ / \ / \ /o\ / \ / \ / | / \ / \ | {"url":"http://mathforum.org/electronic.newsletter/mf.intnews2.51.html","timestamp":"2014-04-19T01:53:53Z","content_type":null,"content_length":"9709","record_id":"<urn:uuid:684de902-c283-4d4c-ada0-258c571adac3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
books about string theory
This page lists literature on string theory.
(See also at string theory FAQ.)
Mathematically inclined monographs about string theory
There is to date no textbook on string theory genuinely digestible by the standard pure mathematician. Even those that claim to be are not, as experience shows. But here are some books that make a
strong effort to go beyond the vagueness of the “mainstream” books, which are listed further below.
• Pierre Deligne, Pavel Etingof, Dan Freed, L. Jeffrey, David Kazhdan, John Morgan, D.R. Morrison and Edward Witten (eds). Quantum Fields and Strings, A course for mathematicians, 2 vols. Amer.
Math. Soc. Providence 1999. (web version)
This is a long collection of (in parts) long lectures by many top string theorists and also by some genuine top mathematicians. Correspondingly it covers a lot of ground, while still being
introductory. Especially towards the beginning there is a strong effort towards trying to formalize or at least systematize much of the standard lore. But one can see that eventually the task of
doing that throughout had been overwhelming. Nevertheless, this is probably the best source that there is out there. If you only ever touch a single book on string theory, touch this one.
• Leonardo Castellani, Riccardo D'Auria, Pietro Fre, Supergravity and Superstrings - A Geometric Perspective
This focuses on the discussion of supergravity-aspects of string theory from the point of view of the D'Auria-Fre formulation of supergravity. Therefore, while far, far from being written in the
style of a mathematical treatise, this book stands out as making a consistent proposal for what the central ingredients of a mathematical formalization might be: as explained at the above link,
secretly this book is all about describing supergravity in terms of infinity-connections with values in super L-infinity algebras such as the supergravity Lie 3-algebra.
• Hisham Sati, Urs Schreiber, Mathematical Foundations of Quantum Field and Perturbative String Theory, Proceedings of Symposia in Pure Mathematics, AMS (2011)
This volume tries to give an impression of the rather recent massive progress that has happened in the mathematical understanding of fundamental ingredients of perturbative string theory,
revolving around the proof of the cobordism hypothesis and related topics of higher category theory and physics. This is not an introductory textbook, even though some contributions do contain
introductory material. Rather, this is meant to be read by people who already understand the basic idea of string theory and would like to see what the mathematical picture behind it all is going
to be.
• Igor V. Dolgachev, Introduction to string theory
• Paul Aspinwall, Tom Bridgeland, Alastair Craw, Michael Douglas, Mark Gross, Dirichlet branes and mirror symmetry, Amer. Math. Soc. Clay Math. Institute 2009.
Mainstream physics monographs
More elementary
More advanced
• Michael Green, John Schwarz, Edward Witten, Superstring theory, 3 vols. Cambridge Monographs on Mathematical Physics
• Joseph Polchinski, String theory, 2 vols.
• Joseph Polchinski, Joe’s Little Book of String, class notes, UCSB Phys 230A, String Theory, Winter 2010, pdf
• Alexander Polyakov, Gauge fields and strings,
• Brian Hatfield, Quantum field theory of point particles and strings, Frontiers in Physics, 752 pages, Westview Press 1998
• Clifford Johnson, D-branes
• Richard Szabo, An introduction to string theory and D-brane dynamics
• S. V. Ketov, Introduction to the Quantum Theory of Strings and Superstrings (in Russian, djvu)
Physics lecture notes
Popular level books and string propaganda
• Brian Greene, The elegant universe: superstrings, hidden dimensions, and the quest for the ultimate theory
• Michio Kaku, various volumes
• video and slides of Witten’s KITP overview Future of String Theory
Big mathematically inclined surveys
• Hisham Sati, Geometric and topological structures related to M-branes, comprehensive survey
• Anton Kapustin, D. O. Orlov, Lectures on mirror symmetry, derived categories, and $D$-branes, Russian Mathematical Surveys, 2004, 59:5, 907–940 (Russian version: pdf; arXiv version available)
Other lists of bibliography
Revised on April 18, 2014 07:44:27 by
Urs Schreiber | {"url":"http://ncatlab.org/nlab/show/books+about+string+theory","timestamp":"2014-04-20T15:55:35Z","content_type":null,"content_length":"39104","record_id":"<urn:uuid:76693a36-ea39-4df6-9adf-e22a98ab941b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unfair coin
September 20th 2012, 05:25 PM
Unfair coin
I am trying to determine a probability. One side of my coin (heads) is "two and one-half times" heavier than the other side (which is tails).
Can anyone tell me what a probability would be for that? I am needing to do a simulation where I have to flip the coin 100 times and determine how many times it lands on heads .. which would be
the heavier side.
Any suggestions?
September 20th 2012, 05:27 PM
Re: Unfair coin
Once I know the "numbers" for both sides, how will I determine how many times it would land on heads if I was to flip the coin 100 times? Is there any equation?
September 20th 2012, 10:00 PM
Re: Unfair coin
Hello, jthomp18!
I am trying to determine a probability.
One side of my coin (heads) is "two and one-half times" heavier than the other side (which is tails).
Can anyone tell me what a probability would be for that?
I am needing to do a simulation where I have to flip the coin 100 times
and determine how many times it lands on heads .. which would be the heavier side.
Any suggestions?
I had to baby-talk my way through this one.
Suppose that Heads is twice as heavy as Tails.
I would assume that Heads would be twice as likely to be on the bottom.
. . That is, the coin would show Tails.
So we have: . $P(T) \,=\,2\!\cdot\!P(H)$
We also know that: . $P(H) + P(T) \:=\:1$
Solve the system and we get: . $P(H) = \tfrac{1}{3},\;P(T) = \tfrac{2}{3}$
For your problem, Tails is $\tfrac{5}{2}$ times as likely as Heads.
Solve the system: . $\begin{array}{ccc} P(T) \:=\:\frac{5}{2}\!\cdot\!P(H) \\ P(H) + P(T) \:=\:1 \end{array}$
. . and we get: . $P(H) \,=\,\tfrac{2}{7},\;P(T) \,=\,\tfrac{5}{7}$
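With P(H) = 2/7 in hand, the expected number of heads in 100 flips is 100 · 2/7 ≈ 28.6, and the OP's 100-flip simulation is a few lines of code. A sketch (the seeded generator just makes the run repeatable; any single run of 100 flips will scatter around the expected value):

```python
import random

def simulate_flips(p_heads, flips, seed=0):
    # Count heads over `flips` Bernoulli(p_heads) trials.
    rng = random.Random(seed)
    return sum(1 for _ in range(flips) if rng.random() < p_heads)

p = 2.0 / 7.0
heads_in_100 = simulate_flips(p, 100)               # one run of the experiment
long_run = simulate_flips(p, 100000) / 100000.0     # frequency over many flips
```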
September 24th 2012, 11:45 AM
Re: Unfair coin
I don't believe Soroban's answer.
If a coin with a heavier side is flipped, it will rotate around an off-center center of gravity, but it will still rotate with a constant angular speed, and it will have the same probability of
touching the ground on either side.
Also, if this coin stays in the air long enough for air friction to stop its rotation, it will stop with the heavier side down. This is its lower potential state, and it will therefore touch the
ground with its heavier side first. This will happen with almost 100% probability.
Now, what gives this coin more chance of stopping with the heavier side down is the fact that it will bounce, and in this bouncing process the coin has more chance of finishing in its lower potential
position. (If the coin touches the ground with the heavier side up, it will have a tendency to rotate when bouncing; in the other case, it will have a tendency not to rotate.)
However, I am far from believing that this process will be a linear function of the distribution of the weight inside the coin (especially because a coin is thin).
If you have this unfair coin, try to flip it many times and let us know the result.
If you ave this unfair coin, try to flip it many time and let us know the result. | {"url":"http://mathhelpforum.com/statistics/203798-unfair-coin-print.html","timestamp":"2014-04-21T04:35:14Z","content_type":null,"content_length":"8389","record_id":"<urn:uuid:5c949a64-b209-4eb9-bdcc-82faf6e69de2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
A single die is rolled 8 times. What is the probability that a six is rolled exactly once, if it is known that at least one six is rolled?
1 out of 8 dice is already know to be a six, so you have 7 unknown dice, each with a 5/6 chance of not rolling a six. can you figure it out with that information?
Note that your sample space will be such that there is always a throw in which you have got a 6. So now, keeping that in mind, simply calculate the probability: first find the probability that you
get a 6 on exactly one throw and no 6 on the others; this is the numerator. Our denominator will be (1 - probability of not getting a 6 in any of the throws). You will get the answer.
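One standard way to carry out the conditional computation asked for here is P(exactly one six | at least one six) = C(8,1)·(1/6)·(5/6)^7 / (1 - (5/6)^8). A hedged sketch of that computation (the function name is ours):

```python
from math import comb

def p_exactly_one_six_given_at_least_one(rolls=8):
    # Binomial count of sixes in `rolls` fair-die throws, conditioned
    # on the event "at least one six".
    p_exactly_one = comb(rolls, 1) * (1 / 6) * (5 / 6) ** (rolls - 1)
    p_at_least_one = 1 - (5 / 6) ** rolls
    return p_exactly_one / p_at_least_one

answer = p_exactly_one_six_given_at_least_one()  # roughly 0.485
```

Sanity check: with a single roll, "exactly one six given at least one six" is certain, and the function returns 1.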
what is the multiplicative inverse?
are you asking for the definition?
multiplicative inverse (mathematics) one of a pair of numbers whose product is 1: the reciprocal of 2/3 is 3/2; the multiplicative inverse of 7 is 1/7.
ok wait wht is the multiplicative inverse of 5/7?
the inverse of a number means that if you multiply the number by its inverse you will get 1. so therefore, to work out the inverse you do 1 divided by the number.
ohhhhhh ok thank u
Do you understand now?
yea i do
wait so would the inverse of -2 3/4 be 4/11?
wait im confused
wait wait wait never mind
You are missing the minus sign.
oh so its -4/11?
Correct. What is the multiplicative inverse of -1?
1/-1 right?
What does that equal?
Correct, excellent. how about the multiplicative inverse of 0?
wouldnt tht just b 0?
Are you sure? Think about it....
wait 1?
Please don't guess. Try to come up with a reasonable answer.
well im not trying 2 guess
I know, I know.... sorry but this is one of THE most difficult questions to understand for most students, I apologize.
wait it is 0
A multiplicative inverse is also called a "reciprocal" as is DEFINED as so, reciprocals: Two numbers whose product is 1.
0/1 would b 1/0
ohhhh so the answer is 1?
ugh then wht is it?
The answer is "there is no answer"
omg thts wht my calculator said
Let me show you the DEFINING equation for reciprocals again: $$a*b=1$$ When asked to find the reciprocal of a number replace one of the letters in this equation with that number and solve for the other letter, for example: what is the reciprocal of 2? So $$2*b=1$$ What number multiplied by 2 gives 1?
YAY!!!! i finally understand something
What is the reciprocal of 0.5?
Yes!!! simplify that please. What number multiplied by 0.5 equals 1?
kk give me another one
What is the reciprocal of 0?
wait we just did this one
Follow the steps I gave you above.
0 * b = 1
EXCELLENT!!! Now you are really learning :-D
lol so then it would b 1/0?
$$\frac{1}{0}$$ is not a number, because division by 0 is undefined.
well yea but if we were using different numbers would it look almost like tht?
Give me an example please.
like in the problem im doing now hold on
\(4\frac{1}{8}\) would be \(\frac{8}{33}\)
Let me try and explain why you can never divide by 0. Dividing by 0 would mean multiplying by the reciprocal of 0, but 0 has no reciprocal because 0 times ANY number is 0, NOT 1.
OMG tht makes soooo much sense now
So what is the answer to "what is the multiplicative inverse of 0?"
there is no answer u cant put any number there
yay omg thank u so much
np :)
wait can u help me with another problem?
the smallest owl found in the united states is the elf owl, which weighs 1 1/2 ounces. one of the largest owls is the Eurasian eagle owl which weighs nearly 10 pounds or 156 ounces. the Eurasian eagle owl is how many times as heavy as the elf owl?
thts the problem^^^
Any ideas?
well idk how 2 set the problem up
Decide what unknown number is asked for and what facts are known.
do i have 2 convert 1 1/2 into an improper fraction?
ok so thts 3/2
then do i put 156 over 1?
so would it b \(\frac{156}{1} - \frac{3}{2}\)
oh so its 156 divided by 3/2?
so its 312/3
so the answers 104 ounces?
Not ounces, but how many times heavier.
ohhhhh kk thank u
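To summarize the whole exchange in runnable form (a sketch added here, not part of the thread), Python's `fractions` module makes the reciprocal rule and the owl division explicit; the `reciprocal` helper is my own name for it:

```python
from fractions import Fraction

def reciprocal(x):
    # multiplicative inverse: the number b with x * b == 1
    if x == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse: 0 * b is never 1")
    return 1 / Fraction(x)

# the examples from the thread
assert reciprocal(Fraction(5, 7)) == Fraction(7, 5)
assert reciprocal(Fraction(-11, 4)) == Fraction(-4, 11)   # -2 3/4 = -11/4
assert reciprocal(-1) == -1

# the owl problem: 156 / (1 1/2) = 156 / (3/2) = 156 * 2/3
print(Fraction(156) / Fraction(3, 2))   # 104
```

So the eagle owl is 104 times as heavy as the elf owl, which matches the answer reached in the dialogue.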
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5114167de4b09e16c5c75398","timestamp":"2014-04-19T19:51:38Z","content_type":null,"content_length":"370125","record_id":"<urn:uuid:48681208-2444-40c2-9987-497284197e0b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
Essington Math Tutor
Find an Essington Math Tutor
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because
this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including calculus, trigonometry, SAT math, ACT Math
...I am currently a graduate mathematics student at Villanova University. I believe in helping students to understand and enjoy math as I do, I will not do the work for the student but will help
them understand the process behind it. As a recent graduate of college I understand how students think ...
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have taught organic chemistry in high school. I worked in a chemical plant (oxidation of C2H4 to get C2H4Cl2) as well as C2H2 + h2 to get C2H4 + C2H6 as a Process Engineer where the
application of organic chemistry is the key. While dealing with alkenes and alkynes primarily, I handled other ...
17 Subjects: including calculus, chemistry, elementary (k-6th), physics
...Whether it's math, science, English, Spanish, or even an elective like psychology that has you down, I am here to coach your test taking skills, get your grades up, and teach you the strategies
you need to get into a top college or just get through the school year.With several years of experience...
66 Subjects: including algebra 1, ACT Math, calculus, chemistry
...Topics include, but are not limited to: (1) operations with real numbers, (2) linear equations and inequalities, (3) relations and functions, (4) polynomials, (5) algebraic fractions, and (6)
nonlinear equations. When I taught Algebra 2, I would take my students across the street to McDonald's a...
12 Subjects: including precalculus, algebra 1, algebra 2, geometry | {"url":"http://www.purplemath.com/Essington_Math_tutors.php","timestamp":"2014-04-19T17:15:02Z","content_type":null,"content_length":"23825","record_id":"<urn:uuid:53b4a586-2e74-45ac-85de-25aeb4e0ad60>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
SAMUEL KARLIN AND JAMES MCGREGOR
1. Introduction. It was shown in [14] that if P(t) = (P_{ij}(t)) is the
transition probability matrix of a birth and death process, then the determinants

(1)    p(t; i_1, ..., i_n / j_1, ..., j_n) = det ( P_{i_α j_β}(t) ),    α, β = 1, ..., n,

where i_1 < i_2 < ... < i_n and j_1 < j_2 < ... < j_n, are strictly positive when
t > 0. In this paper it is shown that these determinants have an interesting
probabilistic significance.
(A) Suppose that n labelled particles start out in states i_1, ..., i_n
and execute the process simultaneously and independently. Then the
determinant (1) is equal to the probability that at time t the particles
will be found in states j_1, ..., j_n respectively without any two of them
ever having been coincident (simultaneously in the same state) in
the intervening time.
From this statement it follows that the determinant is non-negative, and
as will be seen strict positivity can be deduced from natural hypotheses,
for example if P_{i_α j_α}(t) > 0 for α = 1, ..., n and every t > 0.
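None of the following code appears in the paper; it is an illustrative sanity check, in Python, of the determinant identity in a discrete-time analogue: a simple symmetric walk on the integers, the nearest-neighbour counterpart of a birth and death process. For walkers taking strict ±1 steps from starting states of equal parity, their difference changes by even amounts, so a change of order forces a coincidence, and the determinant of t-step transition probabilities equals the exact probability of reaching the target states with no coincidence:

```python
import itertools
from fractions import Fraction
from math import comb

def walk_prob(i, j, t):
    # t-step transition probability of the simple symmetric walk on Z
    if (t + j - i) % 2 or abs(j - i) > t:
        return Fraction(0)
    return Fraction(comb(t, (t + j - i) // 2), 2 ** t)

def km_determinant(i1, i2, j1, j2, t):
    # the 2x2 case of the determinant (1)
    return (walk_prob(i1, j1, t) * walk_prob(i2, j2, t)
            - walk_prob(i1, j2, t) * walk_prob(i2, j1, t))

def no_coincidence_prob(i1, i2, j1, j2, t):
    # enumerate all pairs of +-1 step sequences; count pairs that reach
    # (j1, j2) without the two walkers ever occupying the same site
    good = 0
    for s1 in itertools.product((-1, 1), repeat=t):
        for s2 in itertools.product((-1, 1), repeat=t):
            x1, x2, ok = i1, i2, i1 != i2
            for a, b in zip(s1, s2):
                x1 += a; x2 += b
                if x1 == x2:
                    ok = False
                    break
            if ok and (x1, x2) == (j1, j2):
                good += 1
    return Fraction(good, 4 ** t)

# starting states 0 and 2 have equal parity, so order change needs a coincidence
assert km_determinant(0, 2, 0, 2, 4) == no_coincidence_prob(0, 2, 0, 2, 4)
```

This is the discrete-time (Lindström–Gessel–Viennot) flavour of the identity, not the continuous-time processes treated in the paper, but the mechanism, cancellation of path pairs at the first coincidence, is the same.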
The truth of the above statement rests chiefly on the facts that the
process is one-dimensional—its state space is linearly ordered, and that
the path functions of the process are everywhere "continuous". Of
course the path functions are discontinuous in the ordinary sense but the
discontinuities are only of magnitude one. Thus when a transition occurs
the diffusing particle moves from a given state only into one of the two
neighboring states, and even if the particle goes off to infinity in a finite
time it either remains there or else it returns in a continuous way and
does not suddenly reappear in one of the finite states. These two prop-
erties of one-dimensionality and "continuity" have the effect that
when several particles execute the process simultaneously and indepen-
dently, a change in the order of the particles cannot occur unless a
coincidence first takes place. (The states are all stable so that with prob-
ability one a transition involves only one of the particles.)
It is also important for our results that the processes involved have
the strong Markoff property of Hunt [10], [11], (see also [19]). However
it is a consequence of theorems of Chung [3] that any continuous time
Received December 18, 1958. This work was supported in part by an Office of Naval
Research Contract at Stanford University.
parameter Markoff chain whose states are all stable has the strong Mar-
koff property.
There exist processes of birth-and-death type whose path functions
may have discontinuities at infinity. Such processes have been described
in some detail by Feller. Although the above result (A) does not apply
to these processes they fall within a more general class of processes
which we discuss next.
We consider a stationary Markoff process whose state space is a set
of integers and whose states are all stable. Let (P_{ij}(t)) be the transition
probability matrix. Then

(B) Suppose that n labelled particles start in states i_1, ..., i_n
and execute the process simultaneously and independently. For each
permutation σ of 1, ..., n let A_σ denote the event that at time t the
particles are in states j_{σ(1)}, ..., j_{σ(n)} respectively, without any two
of them ever having been coincident in the intervening time. Then

(2)    p(t; i_1, ..., i_n / j_1, ..., j_n) = Σ_σ (sign σ) Pr{A_σ}

where the sum runs over all permutations σ of 1, ..., n and
sign σ = 1 or -1 according as σ is an even or an odd permutation.
The first stated result is seen to be a special case of this one. For
if the path functions are "continuous" and i_1 < ... < i_n, j_1 < ... < j_n,
then Pr{A_σ} is zero except when σ is the identity permutation. There
is one other case in which the general formula permits an interesting
simplification, namely when the process is a local cyclic process. By
this we mean that the states may be viewed as N + 1 points 0, 1, ..., N
on a circle and transitions occur only between neighboring states, 1 and
N being neighbors of zero and N - 1 and 0 neighbors of N. We take
0 ≤ i_1 < ... < i_n ≤ N and 0 ≤ j_1 < ... < j_n ≤ N and then Pr{A_σ} is zero
unless σ is a cyclic permutation. Since the cyclic permutations of an odd
number of objects are all even permutations we have in this situation

(3)    p(t; i_1, ..., i_n / j_1, ..., j_n) = Σ_{cyclic σ} Pr{A_σ},    n odd.

This determinant is therefore non-negative.
Analogous results hold for one dimensional diffusion processes. Let
P(t, x, E) be the transition probability function of a stationary process
whose state space is an interval on the extended real line. It will be
assumed that the process has the strong Markoff property and that its
path functions are continuous everywhere. Given two Borel sets E, F
the inequality E < F will denote that x < y for every x ∈ E, y ∈ F.
We take n states x_1 < x_2 < ... < x_n and n Borel sets E_1 < E_2 < ... < E_n
and form the determinant

(4)    p(t; x_1, ..., x_n / E_1, ..., E_n) = det ( P(t, x_α, E_β) ),    α, β = 1, ..., n.
(C) Suppose that n labelled particles start in states x_1, ..., x_n
and execute the process simultaneously and independently. Then the
determinant (4) is equal to the probability that at time t the particles
will be found in the sets E_1, ..., E_n respectively without any
two of them ever having been coincident in the intervening time.
Next consider a stationary strong Markoff process whose state space
is a metric space and whose path functions are continuous on the right.
We take n states x_1, ..., x_n and n Borel sets E_1, ..., E_n and again form
the determinant (4).

(D) Suppose that n labelled particles start in the states x_1, ..., x_n
and execute the process simultaneously and independently. For each
permutation σ of 1, 2, ..., n let A_σ denote the event that at time t
the particles are in the sets E_{σ(1)}, ..., E_{σ(n)} respectively without any
two of them ever having been coincident in the intervening time. Then

(5)    p(t; x_1, ..., x_n / E_1, ..., E_n) = Σ_σ (sign σ) Pr{A_σ}

where the sum runs over all permutations σ.
The last result contains all of the preceding ones as special cases.
It has another interesting special case, namely when the state space is
a circle and the path functions are continuous.
There is a mapping θ → e^{iθ} = x of the closed interval 0 ≤ θ ≤ 2π
onto the circle. Given n Borel sets E_1, ..., E_n on the circle we say
E_1 < ... < E_n if there are n Borel sets E'_1 < ... < E'_n in the interval
(0, 2π] or [0, 2π) which are mapped onto E_1, ..., E_n respectively by the
above mapping. Specializing the sets to be one point sets gives the
meaning for x_1 < ... < x_n when x_1, ..., x_n are n points on the circle.
Now let P(t, x, E) be the transition probability function of a strong
Markoff process on the circle with continuous path functions. Because
of the continuity of paths a change in the cyclic order of several diffus-
ing particles on the circle cannot occur unless a coincidence first takes
place. Thus the terms in (5) corresponding to non-cyclic permutations σ
will all be zero. Finally we take advantage of the fact that the cyclic
permutations of an odd number of objects are all even permutations,
and obtain the following.
(E) Suppose x_1 < ... < x_n, E_1 < ... < E_n and n labelled particles
start at x_1, ..., x_n respectively and execute the process simultaneously
and independently. If n is odd and A_σ is defined as before

(6)    p(t; x_1, ..., x_n / E_1, ..., E_n) = Σ_{cyclic σ} Pr{A_σ}

where the sum runs over all cyclic permutations.
Similar but more complicated results are valid in still more general
situations. For example we restrict our discussion to stationary processes
although both the methods and the results can be extended to non-
stationary processes. A generalization of another type which has
interesting applications is obtained when the n particles execute different
Let Pa{t, x, E), a = 1, ••, n be transition probability functions of
n strong Markoff process on the real line with continuous path functions.
Choose n states xx < < xn and n Borel sets Ex < < En and form
the determinant
(7) det PΛ(t, x«, Eβ) .
If n labelled particles start in states x19 ••, xn respectively, and execute
the processes simultaneously and independently, the ith particle executing
the ith process, then the determinant (7) is the probability that at time
t the particles will be found in the sets Elf * ,En respectively, without
any two of them ever having been coincident in the intervening time.
The formal proofs of formulas (5) and (6) and of the interpretation
of p(t, * m">x») are elaborated in §5. For this purpose the rele-
EuEf- Ej
vant preliminaries and definitions concerning Markoff processes are
summarized in § 4.
In § 6 we offer some observations on the problem of determining
when the strong Markoff property applies to direct products of processes.
In this connection we direct attention to those aspects of this problem
relevant to our analysis of the main theorem of § 5.
Section 2 contains a brief heuristic proof of (C) in the situation of
two particles. This is inserted in order to motivate the formal proof of
§ 5. Section 3 discusses the connections of the concept of total positivity
to statements (A) - (E).
Total positivity is significant in relation to the theory of vibrations
of mechanical systems [8], the method of inversion of convolution trans-
forms [9], and the techniques of mathematical economics [13]. In this
paper total positivity is shown to be also important in describing the
structure of one dimensional strong Markoff processes whose path func-
tions are continuous. In a vague sense the most general totally positive
kernel can be built from convolutions of stochastic processes whose path
functions are continuous. In principle, the representation desired is
similar to the representation formula which applies to Pólya frequency
functions discovered by Schoenberg [20]. A detailed discussion of this
idea will be published separately. In this connection we mention that
Loewner has completely analyzed the generation of totally positive mat-
rices from infinitesimal elements [18].
In § 7 we investigate conditions which insure that the determinant
(4) is strictly positive. We find that this is the case if P(t, x, E) > 0
whenever t > 0, E is any open set and P(t,x, E) represents the transi-
tion probability function of a strong Markoff process on the real line
with continuous path functions.
The following converse proposition is of interest. Suppose the transi-
tion function P(t, x, E) of a Markoff process has the property that all
determinants of the form (4) are non-negative. Does there exist a
realization of the process such that almost all path functions are conti-
nuous? This is true with some mild further restrictions. In § 8 with
the aid of a theorem of Ray [19] we are able to establish a partial converse
based on a restriction about the local character of P(t, x, E). It will be
recognized that most cases of Markoff processes obey this requirement.
In § 9 we characterize the most general one dimensional spatially
homogeneous process whose transition kernel is totally positive.
The final section presents a series of examples of totally positive
kernels derived from Markoff processes with continuous path functions.
2 A heuristic argument. In this section we give a non-rigorous
outline of the method of proof for the case of two particles. Let P(t, x, E)
be the transition probability function of a stationary Markoff process on
the real line. Suppose that two distinguishable particles start at x1 and
x2 > xx and let Ex < E2 be two Borel sets. The determinant
pit; *i M = P(t, xlf E^Pit, x2, E2) - P(t, xlf E2)P(t, x2, Eλ)
is equal to Pr {A[} — Pr {A2} where A[ is the event that at time t the
first particle is in Ely the second in E2 and A[ is the event that at time
t the first particle is in E2, the second in Eλ. Each event A'if regarded
as a collection of paths, may be split up into two disjoint sets At + A"
where A% consists of all the paths in A[ for which no coincidence occurs
before time t and A" consists of the paths in A[ with at least one coin-
cidence before time t. We assume the paths are sufficiently smooth so
that for each path in A" and A2 there is a first coincidence time. This
will certainly be the case if all paths are continuous on the right.
Choose a path in A" and at the time of first coincidence interchange the
labels of the two particles. This converts the given path into a path in
A''_2 and the resulting map of A''_1 into A''_2 is clearly one-to-one and onto.
Because of the Markoff property and because the particles act indepen-
dently it is plausible that this map is measure preserving so that

    Pr{A''_1} = Pr{A''_2}

and granting this it follows that

    p(t; x_1, x_2 / E_1, E_2) = Pr{A'_1} - Pr{A'_2}
                             = Pr{A_1} - Pr{A_2} ,

which is the general form of the result. If the path functions are all
continuous then Pr{A_2} = 0 and the formula becomes

    p(t; x_1, x_2 / E_1, E_2) = Pr{A_1} .
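The relabelling argument can be checked by brute force in a discrete-time sketch (not from the paper): among pairs of ±1-step walk paths started at states of equal parity, the path pairs with at least one coincidence that end in order (E_1, E_2) are exactly as numerous as those that end in the swapped order, just as the interchange at the first coincidence predicts:

```python
import itertools

def count_paths(i1, i2, j1, j2, t, require_coincidence):
    # count pairs of +-1 step sequences from (i1, i2) to (j1, j2);
    # require_coincidence selects pairs with / without a coincidence
    n = 0
    for s1 in itertools.product((-1, 1), repeat=t):
        for s2 in itertools.product((-1, 1), repeat=t):
            x1, x2, hit = i1, i2, False
            for a, b in zip(s1, s2):
                x1 += a; x2 += b
                if x1 == x2:
                    hit = True
            if (x1, x2) == (j1, j2) and hit == require_coincidence:
                n += 1
    return n

# A''_1: coincidence occurs, particles end in (E_1, E_2) = ({0}, {2});
# A''_2: coincidence occurs, particles end in (E_2, E_1) = ({2}, {0});
# the swap at the first coincidence maps A''_1 onto A''_2 one-to-one
assert count_paths(0, 2, 0, 2, 4, True) == count_paths(0, 2, 2, 0, 4, True)
# and no order change is possible without a coincidence
assert count_paths(0, 2, 2, 0, 4, False) == 0
```

Since every path of the symmetric walk has the same weight, equal counts mean equal probabilities, which is the measure-preservation claimed above in this toy setting.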
3. Total positivity. A matrix is called (strictly) totally positive if
all of its minors of all orders are (strictly positive) non-negative. Such
matrices and their continuous analogues the totally positive kernels occur
in a variety of applications and have been studied by numerous authors.
A lucid outline of the theory together with an extensive bibliography
has been given by Schoenberg [21], Krein and Gantmacher [8]. Our re-
sults indicate the existence of large natural classes of semi-groups of
totally positive matrices and totally positive kernels. One simply takes
the transition probability function of a one dimensional diffusion process
with continuous path functions. A number of interesting examples are
given in § 10.
Conversely the total positivity of the transition function may be used
to draw conclusions regarding continuity of the path functions. A pro-
gram along these lines has already been carried out by the authors for
the case of birth and death processes [12]. (see also § 8.)
Our attention was first drawn to total positivity in connection with
diffusion processes by unpublished results of C. Loewner who showed
that the fundamental solution of
    ∂u/∂t = a ∂²u/∂x² + b ∂u/∂x
on a finite interval with smooth a and b and classical boundary conditions,
is totally positive.
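As a numerical illustration of this circle of ideas (not in the paper), take Brownian motion as the concrete diffusion with continuous paths; its Gaussian transition kernel is classically totally positive, and the minors can be evaluated directly in plain Python:

```python
import math
import itertools

def heat_kernel(t, x, y):
    # transition density of Brownian motion, a diffusion with continuous paths
    return math.exp(-(x - y) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def minor(xs, ys, t):
    # determinant of the kernel matrix (P(t, x_a, y_b)) via permutation expansion
    n = len(xs)
    det = 0.0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for a in range(n):            # parity by counting inversions
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        term = float(sign)
        for a in range(n):
            term *= heat_kernel(t, xs[a], ys[perm[a]])
        det += term
    return det

xs, ys = (0.0, 1.0, 2.5), (-0.5, 0.7, 3.0)   # ordered rows and columns
for k in (1, 2, 3):
    for I in itertools.combinations(range(3), k):
        for J in itertools.combinations(range(3), k):
            m = minor([xs[i] for i in I], [ys[j] for j in J], t=1.0)
            assert m > 0.0   # every minor with ordered rows/columns is positive
```

The chosen points and t = 1 are arbitrary; every minor of orders 1, 2, 3 comes out strictly positive, as total positivity of the heat kernel requires.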
4. Definitions. As indicated in the introduction we are chiefly con-
cerned with processes on the integers, the real line, or the circle. In
order to deal with all cases at once it is convenient to discuss certain
results for a more general process whose state space is a metric space X.
Let X be a metric space, 𝔅 the Borel field generated by the open
sets of X, and 𝔅' the Borel ring generated by the finite intervals on
0 ≤ t < ∞. Suppose there is given a set Ω called the sample space and
an X-valued function x(t, ω), 0 ≤ t < ∞, ω ∈ Ω. Let 𝔐 be the Borel
field of subsets of Ω generated by the sets of the form {ω; x(t, ω) ∈ E}
where t ≥ 0 and E ∈ 𝔅. Suppose that for each x ∈ X there is given
a probability measure P_x on 𝔐 such that P_x{ω; x(0, ω) = x} = 1. Then
the function x(t, ω) is called a stochastic process on X with sample space
Ω and distributions {P_x}.
The stochastic process is said to have right continuous path functions
if for every fixed ω the function x(·, ω) is right continuous on 0 ≤ t < ∞.
Let 𝔐_t denote the Borel field generated by all sets {ω; x(s, ω) ∈ E}
where E ∈ 𝔅 and 0 ≤ s ≤ t. Conditional probabilities relative to 𝔐_t will
be denoted by P_x{· | x(s), s ≤ t}. The stochastic process is called a
stationary Markoff process if for every fixed t

    P_x{x(t_i + t, ω) ∈ E_i, i = 1, ..., n | x(s), s ≤ t}
        = P_{x(t, ω)}{x(t_i, ω) ∈ E_i, i = 1, ..., n}

with probability one when 0 < t_1 < ... < t_n and E_1, ..., E_n ∈ 𝔅.
We will be concerned only with stationary Markoff processes in X
with right continuous path functions. It will always be assumed that
the function

    P(t, x, E) = P_x{x(t, ω) ∈ E}

is measurable relative to 𝔅' ⊗ 𝔅. This function satisfies the Chapman-
Kolmogoroff equation:

    P(t + s, x, E) = ∫ P(t, x, dy) P(s, y, E) .
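As a quick numerical sanity check (not part of the paper), the Chapman-Kolmogoroff equation can be verified for the Gaussian transition density of Brownian motion, with the y-integral approximated by a midpoint Riemann sum:

```python
import math

def p(t, x, y):
    # Gaussian transition density of Brownian motion, a concrete P(t, x, dy)
    return math.exp(-(y - x) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def chapman_kolmogorov_lhs(t, s, x, z, lo=-20.0, hi=20.0, n=4000):
    # midpoint-rule approximation of  integral p(t, x, y) p(s, y, z) dy
    h = (hi - lo) / n
    return sum(p(t, x, lo + (k + 0.5) * h) * p(s, lo + (k + 0.5) * h, z)
               for k in range(n)) * h

lhs = chapman_kolmogorov_lhs(0.7, 1.3, 0.2, -0.4)
rhs = p(0.7 + 1.3, 0.2, -0.4)
assert abs(lhs - rhs) < 1e-6
```

The variances add (0.7 + 1.3 = 2.0), which is the semigroup property in the Gaussian case; the integration window and step count are arbitrary choices that are more than accurate enough here.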
Let F be a closed set in X. The time of first hitting F is defined as

    τ_F(ω) = inf {t; x(t, ω) ∈ F}

where the inf of the void set is taken to be +∞. The place of first
hitting F is defined, if τ_F(ω) < ∞, as

    ξ_F(ω) = x(τ_F(ω), ω) .
The Markoff process will be called a strong Markoff process if for
any closed set F we have the first passage relation

    P_x{x(t, ω) ∈ E} = P_x{x(t, ω) ∈ E, τ_F(ω) > t}
        + ∫_0^t ∫_F P_y{x(t - s, ω) ∈ E} P_x{τ_F(ω) ∈ ds, ξ_F(ω) ∈ dy} .
In this relation it is implicitly assumed that the sets {ω; τ(ω) ≤ t}
and {ω; τ(ω) ≤ t, ξ(ω) ∈ H}, where H is a closed subset of F, are 𝔐_t
measurable for each t. A discussion of the validity of these assumptions
is made in § 6. It is there shown that under very slight conditions on the
transition function the assumption holds.
It seems reasonable to believe that the direct product of a finite
number of strong Markoff processes is again a strong Markoff process.
At the present time we are not able to prove that this is generally true,
although in the proof of the main theorem we assume this result. On
the other hand proofs can be given which cover the vast majority of
the special cases of interest. As noted above it follows from theorems
of Chung that the strong Markoff property is preserved under direct
products for processes with countably many states all of which are stable.
This includes the birth and death case. In § 6 we give a proof for direct
products of a one dimensional diffusion process whose transition prob-
ability function P(t, x, E) is jointly continuous in t and x. This covers
the case when P(t, x, E) comes from a diffusion equation

    ∂u/∂t = a(x) ∂²u/∂x² + b(x) ∂u/∂x

with a(x), b(x) continuous and a(x) > 0. References to other theorems of
this kind are given in § 6.
Let X_i, i = 1, ..., n be metric spaces and for each i let x_i(t, ω_i) be
a stationary Markoff process in X_i with sample space Ω_i and distributions
{P_{x_i}}. We form the product space X̄ = X_1 ⊗ ... ⊗ X_n in which the
generic point is an n-tuple x̄ = (x_1, ..., x_n) with x_i ∈ X_i. The space X̄
with the distance ρ(x̄, ȳ) = Σ ρ(x_i, y_i) is a metric space. The vector valued
function x̄(t, ω̄) = (x_1(t, ω_1), ..., x_n(t, ω_n)) is a stationary Markoff process
in X̄ whose sample space is the direct product Ω̄ of the Ω_i and whose
distributions are the direct product measures

    P̄_x̄ = P_{x_1} × ... × P_{x_n} ,    x̄ = (x_1, ..., x_n) .

x̄(t, ω̄) is called the direct product of the given processes.
5. The main theorem. Let X be a metric space, and x(t, ω) a
stationary strong Markoff process in X with right continuous sample
functions, sample space Ω and distributions {P_x}. We form the direct
products X̄, Ω̄ of n copies of X and Ω respectively and the direct product
x̄(t, ω̄) of n copies of the given process. We say this direct product
process represents "n labelled particles executing the x(t, ω) process simul-
taneously and independently", and this is the sense in which that phrase
is to be interpreted in statements (A)-(E) of the introduction. We
assume x̄(t, ω̄) is a strong Markoff process (see § 6).
The associated distributions are

    P̄_x̄ = P_{x_1} × ... × P_{x_n} ,    x̄ = (x_1, ..., x_n) .
The set F of coincident states consists of the points x̄ = (x_1, ..., x_n)
with at least two of the x_i equal to one another. A permutation λ of
the n letters 1, 2, ..., n is called a transposition if there are two letters
i < j such that λ(i) = j, λ(j) = i, and λ(r) = r if i ≠ r ≠ j. In this case
we use the notation λ = (i, j). A coincident state x̄ = (x_1, ..., x_n) is
said to belong to the transposition λ = (i, j), i < j, if x_1, ..., x_{j-1} are all
different but x_i = x_j. Thus every coincident state belongs to a unique
transposition, and for a given λ the set of all coincident states belong-
ing to λ will be denoted by F(λ). The group of all n! permutations of
1, 2, ..., n will be denoted by S and the set of all transpositions by Λ.
Given n Borel sets E_1, ..., E_n in X and a permutation σ ∈ S, the
direct product set

    E_σ = E_{σ(1)} ⊗ ... ⊗ E_{σ(n)}

is a Borel set in X̄. Let A'_σ = {ω̄; x̄(t, ω̄) ∈ E_σ} where t > 0 is fixed.
Then if x̄ = (x_1, ..., x_n)

    p(t; x_1, ..., x_n / E_1, ..., E_n) = Σ_{σ∈S} (sign σ) P̄_x̄{A'_σ}

by definition of the determinant and of P̄_x̄.
The time τ(ω̄) of first coincidence is defined as the time of first
hitting F:

    τ(ω̄) = τ_F(ω̄) = inf {t; x̄(t, ω̄) ∈ F} .

The place of first coincidence is ξ(ω̄) = x̄(τ(ω̄), ω̄). Our main result can
now be stated very simply as follows.

THEOREM 1. The sets

    A_σ = {ω̄; ω̄ ∈ A'_σ, τ(ω̄) > t}

are all measurable and

    p(t; x_1, ..., x_n / E_1, ..., E_n) = Σ_σ (sign σ) P̄_x̄{A_σ} .
Proof. Since τ is measurable the sets A_σ are also measurable. For
each σ we apply the strong Markoff property to obtain

    P̄_x̄{A'_σ} = P̄_x̄{A_σ}
        + ∫_0^t ∫_F P̄_ȳ{x̄(t - s, ω̄) ∈ E_σ} P̄_x̄{τ(ω̄) ∈ ds, ξ(ω̄) ∈ dȳ} .

Now F is the union of the disjoint Borel sets F(λ), λ ∈ Λ, and if ȳ ∈
F(λ) then P̄_ȳ{x̄(t - s, ω̄) ∈ E_σ} = P̄_ȳ{x̄(t - s, ω̄) ∈ E_{λσ}}. Hence

    Σ_σ (sign σ) ∫_0^t ∫_F P̄_ȳ{x̄(t - s, ω̄) ∈ E_σ} P̄_x̄{τ ∈ ds, ξ ∈ dȳ}
        = Σ_{λ∈Λ} Σ_σ (sign σ) ∫_0^t ∫_{F(λ)} P̄_ȳ{x̄(t - s, ω̄) ∈ E_{λσ}} P̄_x̄{τ ∈ ds, ξ ∈ dȳ}
        = - Σ_σ (sign σ) [ P̄_x̄{A'_σ} - P̄_x̄{A_σ} ] ,

since for each λ the substitution σ → λσ reverses the sign of σ.
This quantity is therefore zero and

    p(t; x_1, ..., x_n / E_1, ..., E_n) = Σ_σ (sign σ) P̄_x̄{A'_σ}
        = Σ_σ (sign σ) P̄_x̄{A_σ} .
The various assertions (A) - (D) of the introduction can be obtained
by specializing the above theorem in the appropriate way.
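As a concrete illustration of Theorem 1 (our own example, not from the paper): for two independent standard Brownian particles started at x₁ < x₂ and ordered sets E₁ < E₂, the determinant det P(t, xᵢ, Eⱼ) should equal the probability that the particles end in their respective sets without ever meeting. A Monte Carlo sanity check with a fixed seed:

```python
import math
import random

def Phi(z):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

t, x1, x2, b = 1.0, 0.0, 2.0, 1.0   # particles start at 0 and 2; E1 = (-inf, 1), E2 = (1, inf)
sq = math.sqrt(t)

# 2x2 Karlin-McGregor determinant det[P(t, x_i, E_j)]
P = [[Phi((b - x1) / sq), 1.0 - Phi((b - x1) / sq)],
     [Phi((b - x2) / sq), 1.0 - Phi((b - x2) / sq)]]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]

# Monte Carlo estimate of P{X1(t) in E1, X2(t) in E2, paths never meet}
random.seed(0)
trials, steps = 4000, 100
dt = t / steps
sd = math.sqrt(dt)
hits = 0
for _ in range(trials):
    a, c = x1, x2
    alive = True
    for _ in range(steps):
        a += random.gauss(0.0, sd)
        c += random.gauss(0.0, sd)
        if a >= c:                  # coincidence: the two paths met or crossed
            alive = False
            break
    if alive and a < b and c > b:
        hits += 1
mc = hits / trials
```

The discretized walk slightly under-detects crossings, so the estimate carries a small positive bias in addition to sampling error; a loose tolerance accounts for both.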
6. Strong Markoff property for direct products. For the vast
majority of one-dimensional diffusion processes which are met in applications
one finds that the transition probability function P(t, x, E) is
jointly continuous in t and x. It will be shown that the direct product
of n copies of such a process has the strong Markoff property. The proof
imitates the proof of a theorem of Dynkin and Jushkevich [7].
THEOREM 2. Let x(t, ω) be a stationary Markoff process on the real
line with continuous path functions and transition probability function
P(t, x, E) which is jointly continuous in t, x. Then the direct product
$\bar x(t, \omega)$ of n copies of this process is a strong Markoff process.
COINCIDENCE PROBABILITIES 1151

Proof. Let F be a closed set in the n-dimensional space, τ(ω) the
time of first hitting F for the direct product process, and ξ(ω) the place
of first hitting F. The fact that τ(ω) and ξ(ω) are measurable functions
is a trivial consequence of the continuity of the path functions. With a
given integer m ≥ 1 let $\tau_m(\omega) = k/m$, where k is the integer such that

$$ \frac{k-1}{m} \le \tau(\omega) < \frac{k}{m}, $$

and let $\xi_m(\omega) = \bar x(\tau_m(\omega), \omega)$. Then for any Borel set E

$$ P_{\bar x}\{\bar x(t, \omega) \in E\} = P_{\bar x}\{\bar x(t, \omega) \in E,\ \tau_m(\omega) > t\}
+ \sum_{k \le mt} P_{\bar x}\Big\{\bar x(t, \omega) \in E,\ \tau_m(\omega) = \frac{k}{m}\Big\}. $$

Let

$$ \Lambda_k(\omega) = \begin{cases} 1 & \text{if } \tau_m(\omega) = k/m \\ 0 & \text{if } \tau_m(\omega) \ne k/m \end{cases}, \qquad
f(\bar y) = \begin{cases} 1 & \text{if } \bar y \in E \\ 0 & \text{if } \bar y \notin E. \end{cases} $$

Since $\tau_m$ takes only countably many values and the event $\{\tau_m(\omega) = k/m\}$ is
determined by the path up to time k/m, the ordinary Markoff property applied
at the non-random time k/m gives

$$ P_{\bar x}\Big\{\bar x(t, \omega) \in E,\ \tau_m(\omega) = \frac{k}{m}\Big\}
= E_{\bar x}\{\Lambda_k(\omega)\, f(\bar x(t, \omega))\}
= E_{\bar x}\Big\{\Lambda_k(\omega)\, E\Big[f(\bar x(t, \omega)) \,\Big|\, \bar x(s, \omega),\, s \le \frac{k}{m}\Big]\Big\} $$

$$ = E_{\bar x}\Big\{\Lambda_k(\omega)\, P^{\bar x(k/m,\,\omega)}\Big\{\bar x\Big(t - \frac{k}{m}, \omega\Big) \in E\Big\}\Big\}
= \int P_{\bar y}\Big\{\bar x\Big(t - \frac{k}{m}, \omega\Big) \in E\Big\}\, P_{\bar x}\Big\{\xi_m(\omega) \in d\bar y,\ \tau_m(\omega) = \frac{k}{m}\Big\}, $$

and hence we have the first passage relation for $\tau_m$:

$$ P_{\bar x}\{\bar x(t, \omega) \in E\} - P_{\bar x}\{\bar x(t, \omega) \in E,\ \tau_m(\omega) > t\}
= \sum_{k \le mt} \int P_{\bar y}\Big\{\bar x\Big(t - \frac{k}{m}, \omega\Big) \in E\Big\}\, P_{\bar x}\Big\{\xi_m(\omega) \in d\bar y,\ \tau_m(\omega) = \frac{k}{m}\Big\}. $$
For every ω we have $\tau_m(\omega) \ge \tau(\omega)$ and $\tau_m(\omega) \to \tau(\omega)$ as m → ∞, and by continuity of path
functions $\xi_m(\omega) \to \xi(\omega)$ as m → ∞. Hence $\tau_m(\omega)$, $\xi_m(\omega)$ converge in measure
to τ(ω), ξ(ω). Since $P_{\bar y}\{\bar x(t - s, \omega) \in E\}$ is jointly continuous in $\bar y$
and s and is bounded we may let m → ∞ in the above formula and obtain
the first passage relation for τ(ω). This completes the proof.
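The discretization used in this proof, $\tau_m(\omega) = k/m$ with $(k-1)/m \le \tau(\omega) < k/m$, approximates τ from above with error at most 1/m. A toy computation with a hypothetical hitting time (the value 0.337 is arbitrary):

```python
import math

def tau_m(tau, m):
    # tau_m = k/m where (k-1)/m <= tau < k/m, i.e. k = floor(m*tau) + 1
    return (math.floor(m * tau) + 1) / m

tau = 0.337                       # a hypothetical first-hitting time
ms = (10, 100, 1000)
approx = [tau_m(tau, m) for m in ms]
```

Each approximant takes only countably many values (multiples of 1/m), which is what lets the ordinary Markoff property substitute for the strong one before passing to the limit.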
The referee has brought to our attention the following stronger
theorem of Blumenthal, [1, Theorem 1.1], which is slightly reworded.

THEOREM. If the process has right continuous path functions and
if for every bounded continuous function f the function $\int f(y)\, P(t, x, dy)$
is continuous in x for each t > 0, then the process has the strong Markoff
property.

In this theorem the state space X is any metric space. Naturally
this theorem requires more involved arguments than the above Theorem
2. Finally we mention that a very thorough discussion of the Markoff
chain case has been given by Chung [4].
7. Strict total positivity. Let X be the non-negative integers and
x(t, ω) a stationary strong Markoff process on X with all states stable
and "continuous" path functions. If $P(t) = (P_{ij}(t))$ is the transition
probability matrix of the process then it follows from assertion (A) that
this matrix is totally positive. Let us call the process a strict process
if $P_{ij}(t) > 0$ for every i, j and all t > 0. We will prove
THEOREM 3. If the process is strict then its transition probability
matrix is strictly totally positive for every t > 0.
Proof. The proof is similar to the proof of a related theorem in
[14], namely Theorem 20 on page 543. It is seen from the proof of that
theorem that it is sufficient for our purposes to prove that if $i_1 < i_2 < \cdots < i_n$ then

$$ p\left(t;\, \frac{i_1, \dots, i_n}{i_1, \dots, i_n}\right) > 0 $$

for every t > 0, that is the principal subdeterminants are strictly positive.
However since

$$ p\left(2t;\, \frac{i_1, \dots, i_n}{i_1, \dots, i_n}\right) \ge \left[ p\left(t;\, \frac{i_1, \dots, i_n}{i_1, \dots, i_n}\right) \right]^2 $$

it is enough to show that these determinants are strictly positive for
sufficiently small t > 0. Because the path functions are right continuous,
if $(r_k)$ is an ordering of the positive rationals, the set

$$ \bigcup_{l \ge 1}\ \bigcap_{r_k \le 1/l} \{\omega \,;\ x(r_k, \omega) = i,\ x(0, \omega) = i\} $$

has probability one. Hence for some m = m(i) > 0 there is a positive
probability $R_i$ that a path starting at i remains at i at least up to
time 1/m(i). Now if $0 < t < \min_\nu 1/m(i_\nu)$ then we have

$$ p\left(t;\, \frac{i_1, \dots, i_n}{i_1, \dots, i_n}\right) \ge R_{i_1} R_{i_2} \cdots R_{i_n} > 0, $$

and this proves the theorem.
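Theorem 3 can be illustrated numerically: for a small irreducible birth-and-death chain, exp(Qt) has every minor with increasing row and column indices strictly positive. A self-contained sketch (the 4-state chain with unit rates and t = 0.5 are arbitrary choices; the power-series matrix exponential is adequate here):

```python
import itertools

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(Q, t, terms=60):
    # exp(Qt) via its power series; fine for a small matrix with modest norm
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mult(term, [[Q[i][j] * t / k for j in range(n)] for i in range(n)])
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# birth-and-death generator on {0, 1, 2, 3}: birth and death rates all equal to 1
Q = [[-1.0, 1.0, 0.0, 0.0],
     [1.0, -2.0, 1.0, 0.0],
     [0.0, 1.0, -2.0, 1.0],
     [0.0, 0.0, 1.0, -1.0]]
P = expm(Q, 0.5)

# smallest minor taken over all orders and all increasing row/column index sets
min_minor = min(
    det([[P[i][j] for j in cols] for i in rows])
    for r in (1, 2, 3, 4)
    for rows in itertools.combinations(range(4), r)
    for cols in itertools.combinations(range(4), r)
)
```

Strict positivity of every such minor is exactly strict total positivity of the transition matrix.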
Now let x(t, ω) be a stationary strong Markoff process on the real
line with continuous path functions satisfying the hypothesis of Theorem
1. Let P(t,x,E) be the transition probability function of the process.
The process will be called strict if P(t, x, E) > 0 whenever t > 0 and E
is any non-void open set. We will prove
THEOREM 4. If the process is strict then its transition probability
function is strictly totally positive in the sense that if t > 0, $x_1 < \cdots < x_n$
and $E_1 < \cdots < E_n$ are non-void open sets then

$$ p\left(t;\, \frac{x_1, \dots, x_n}{E_1, \dots, E_n}\right) > 0 . $$
We begin with two lemmas in which the hypotheses of the theorem are in force.
DEFINITION. If a, b are two points on the real line then

$$ \tau_a(\omega) = \inf\{t \,;\ x(t, \omega) = a\} , $$
$$ M(t, x, a) = P_x\{\tau_a(\omega) \le t\} , $$
$$ M(t, x, a, b) = P_x\{\tau_a(\omega) \le t,\ \tau_b(\omega) > t\} . $$

LEMMA 1. If a < x < b then M(t, x, a, b) > 0 and M(t, x, b, a) > 0
for every t > 0.
Proof. Assume that M(t, x, b, a) = 0 for some $t = t_0 > 0$ and hence
for every $t \le t_0$. Then if J = [b, ∞) we have for every $t \le t_0$

$$ P(t, x, J) = \int_0^t P(t - s, a, J)\, d_s M(s, x, a) $$

(since the path cannot reach b, and hence J, without first hitting a),
and in virtue of the continuity of paths

$$ P(t, a, J) = \int_0^t P(t - s, x, J)\, d_s M(s, a, x) . $$

Now because of the continuity of paths we can choose $t_1$ with $0 < t_1 < t_0$ and

$$ M(t, a, x)\, M(t, x, a) \le 1/2 \quad \text{for } 0 \le t \le t_1 . $$

Since $P(s, a, J) \le 1$ for all $s \le t_1$ it follows from the integral equations that

$$ P(s, a, J) \le 1/2 \quad \text{for } s \le t_1 , $$

and by an iteration argument we obtain $P(t_1, a, J) = 0$, which contradicts
the hypothesis that the process is strict. Hence M(t, x, b, a) > 0 for t > 0. Similarly M(t, x, a, b) > 0
for t > 0.
DEFINITION. Given an open interval V = (a, b) let

$$ R(t, x, V) = P_x\{\tau_a(\omega) > t,\ \tau_b(\omega) > t\} . $$

LEMMA 2. If x ∈ V = (a, b) then R(t, x, V) > 0 for all t > 0.
Proof. Assume that for some x ∈ V and t′ > 0 we have R(t′, x, V) = 0.
Then R(t, x, V) = 0 for all t ≥ t′. Because of continuity of paths
$t_0 = \inf\{t \,;\ R(t, x, V) = 0\}$ is positive. Now choose any y ∈ V, y ≠ x. To fix
the ideas we assume x < y < b. If ε > 0 is so small that
$M(t', x, y, a) - M(\varepsilon, x, y, a) > 0$ then the inequality

$$ 0 = R(t', x, V) \ge \int_\varepsilon^{t'} R(t' - \tau,\, y,\, V)\, d_\tau M(\tau, x, y, a)
\ge R(t' - \varepsilon, y, V)\,[M(t', x, y, a) - M(\varepsilon, x, y, a)] $$

shows that R(t′ − ε, y, V) = 0. Consequently, letting t′ decrease to $t_0$, if
$t_1 = \inf\{t \,;\ R(t, y, V) = 0\}$ then

$$ 0 < t_1 \le t_0 - \varepsilon < t_0 . $$

But we can now repeat the argument with the roles of x and y interchanged
and show that $t_0 < t_1$. This contradiction proves the lemma.
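For concreteness, the quantity R(t, x, V) of Lemma 2 can be evaluated in closed form for standard Brownian motion on V = (0, 1) by the classical eigenfunction expansion for the absorbed heat equation u_t = u_xx/2 (this specific process and interval are our illustration, not part of the lemma):

```python
import math

def survival(t, x, terms=199):
    # P_x{ tau_0 > t, tau_1 > t } for standard Brownian motion on V = (0, 1):
    # only odd-index eigenfunctions sin(k*pi*x) contribute to the expansion of 1
    s = 0.0
    for k in range(1, terms + 1, 2):
        s += (4.0 / (k * math.pi)) * math.sin(k * math.pi * x) * \
             math.exp(-k * k * math.pi * math.pi * t / 2.0)
    return s

r_half = survival(1.0, 0.5)     # strictly positive, though very small
r_short = survival(0.5, 0.5)    # larger for a shorter time horizon
```

The series makes Lemma 2's conclusion quantitative in this special case: R(t, x, V) is positive for every t, decaying like the leading eigenvalue term.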
Proof of the Theorem. Let $x_1 < \cdots < x_n$ and $E_1 < \cdots < E_n$ be non-void
open sets. The index of the determinant

$$ p\left(t;\, \frac{x_1, \dots, x_n}{E_1, \dots, E_n}\right) $$

is defined to be the number k of values of i for which $x_i$ is not in $E_i$.
Thus the index of an nth order determinant of this kind is an integer
between 0 and n inclusive.

In each set $E_i$ choose a non-void open interval $U_i$ such that $x_i \in U_i$
if $x_i \in E_i$, but $U_i$ contains no $x_j$ if $x_i \notin E_i$. Because of the probabilistic
interpretation

$$ p\left(t;\, \frac{x_1, \dots, x_n}{E_1, \dots, E_n}\right) \ge p\left(t;\, \frac{x_1, \dots, x_n}{U_1, \dots, U_n}\right) . $$

These two determinants have the same index k. If k = 0, then from
the probabilistic interpretation and the second lemma above

$$ p\left(t;\, \frac{x_1, \dots, x_n}{U_1, \dots, U_n}\right) \ge R(t, x_1, U_1)\, R(t, x_2, U_2) \cdots R(t, x_n, U_n) > 0 . $$

Thus the subdeterminants with index zero are positive. Now suppose
the index is k > 0. We can find n open intervals $U'_1, \dots, U'_n$ whose
closures are mutually disjoint such that $x_i \in U'_i$ for every i and $U'_i = U_i$
if $x_i \in U_i$. We can choose n points $x'_1, \dots, x'_n$ such that $x'_i \in U_i$ for every
i and $x'_i = x_i$ if $x_i \in U_i$. Now in the collection $U_1, \dots, U_n, U'_1, \dots, U'_n$
there are exactly m = n + k distinct intervals and they are disjoint.
Denote them by $V_1 < \cdots < V_m$. Similarly in $x_1, \dots, x_n, x'_1, \dots, x'_n$ there
are exactly n + k distinct points. Denote them by $y_1 < \cdots < y_m$, and
then $y_i \in V_i$ for each i. Let B(t) be the m-square matrix with elements

$$ b_{ij}(t) = P(t, y_i, V_j) . $$

The determinant $p\left(t;\, \frac{x_1, \dots, x_n}{U_1, \dots, U_n}\right)$ is a minor of B(t). Moreover B(t)
is totally positive, all of its elements are strictly positive, and its principal
minors have index zero and are therefore strictly positive. Hence
by Lemma 14 of [14] all minors of B(t) of index one are strictly positive.
This proves that $p\left(t;\, \frac{x_1, \dots, x_n}{E_1, \dots, E_n}\right) > 0$ if the index of this determinant
is ≤ 1. We now assume that for some integer r, 1 ≤ r < n, all the
determinants of the type $p\left(t;\, \frac{x_1, \dots, x_n}{E_1, \dots, E_n}\right)$ with index ≤ r are strictly
positive. Let $1 \le i_1 < \cdots < i_n \le m$, $1 \le j_1 < \cdots < j_n \le m$ and

$$ \sum_{\nu=1}^{n} |i_\nu - j_\nu| = r + 1 . $$

By the Chapman-Kolmogoroff equation the corresponding minor of B may
be expanded over the disjoint intervals $V_k$ in the form

$$ p\left(t+s;\, \frac{y_{i_1}, \dots, y_{i_n}}{V_{j_1}, \dots, V_{j_n}}\right)
\ge \sum_{1 \le k_1 < \cdots < k_n \le m}\ \int_{V_{k_1}} \cdots \int_{V_{k_n}}
p\left(t;\, \frac{y_{i_1}, \dots, y_{i_n}}{dv_1, \dots, dv_n}\right)
p\left(s;\, \frac{v_1, \dots, v_n}{V_{j_1}, \dots, V_{j_n}}\right), $$

all terms of which are non-negative,
and in this sum there is at least one term with
$$ \sum_{\nu=1}^{n} |i_\nu - k_\nu| \le r , \qquad \sum_{\nu=1}^{n} |k_\nu - j_\nu| \le r . $$

For this term the integrand $p\left(s;\, \frac{v_1, \dots, v_n}{V_{j_1}, \dots, V_{j_n}}\right)$ is positive for every
$v_1, \dots, v_n$ in the range of integration, because $v_\nu \in V_{j_\nu}$ for at least n − r
values of ν, so that its index is at most r. Also for this term the integrator
$p\left(t;\, \frac{y_{i_1}, \dots, y_{i_n}}{dv_1, \dots, dv_n}\right)$ has
positive measure on the range of integration, because $y_{i_\nu} \in V_{k_\nu}$ for at
least n − r values of ν. Hence the special term, and also the entire sum,
is strictly positive. This proves that $p\left(t;\, \frac{x_1, \dots, x_n}{E_1, \dots, E_n}\right) > 0$ if the index
of this determinant is ≤ r + 1, and the theorem follows by induction on
the index.
8. Local character of P(t, x, E) and continuity of path functions.
Let P(t, x, E) be the transition probability function of a stationary Markoff
process on the real line. Given δ > 0 we define

$$ U(x, \delta) = (-\infty,\, x - \delta], \qquad V(x, \delta) = [x + \delta,\, \infty), \qquad \Gamma(x, \delta) = U(x, \delta) \cup V(x, \delta) . $$
The transition probabilities are called of local character if P(t, x, Γ(x, δ)) =
o(t) for each x and δ > 0. They are called uniformly of local character
if for each δ > 0 and each compact set F on the real line the relation
P(t, x, Γ(x, δ)) = o(t) holds uniformly for x ∈ F. We will prove that if
the transition probabilities are positive of order two (see Theorem 5)
and if for some α > 0 we have $P(t, x, \Gamma(x, \delta)) = o(t^\alpha)$ for each x and
each δ > 0, then the transition probabilities are uniformly of local character,
and in fact for every β > 0 the relation $P(t, x, \Gamma(x, \delta)) = o(t^\beta)$
holds uniformly on compact sets. This is of interest in connection with
a theorem of Ray [19] to the effect that if the transition probabilities
are uniformly of local character and if P(t, x, X) = 1, where X is the
real line (not the extended real line), then the process has path functions
continuous except possibly at +∞ and −∞.
THEOREM 5. Let P(t, x, E) be stationary transition probabilities on
the real line such that P(t, x, E) → 1 as t → 0+ if x is an interior
point of E. If P(t, x, E) is positive of order two (i.e. the second order
determinants of (4) are non-negative) and if there is an α > 0 such that
for every x and every δ > 0 we have $P(t, x, \Gamma(x, \delta)) = o(t^\alpha)$, then for every
compact set F on the real line and every β > 0, δ > 0 there is a constant
M = M(F, δ, β) such that

$$ P(t, x, \Gamma(x, \delta)) \le M t^\beta $$

for every x ∈ F.
Proof. Given a point x on the real line and δ > 0 let y = x + δ/2
and N = (y − δ/4, y + δ/4). Then because of the second order positivity

$$ P(t, x, V(x, \delta))\, P(t, y, N) \le P(t, x, N)\, P(t, y, V(x, \delta)) . $$

Both factors of the right member of this inequality are $O(t^\alpha)$ while
P(t, y, N) → 1 as t → 0. Hence $P(t, x, V(x, \delta)) = O(t^{2\alpha})$.

This is valid for arbitrary x and δ, so the argument can be iterated,
and for any integer n ≥ 1 we have

$$ P(t, x, V(x, \delta)) = O(t^{2^n \alpha}) . $$

The O symbol so far may depend on x and certainly depends on δ.
A similar argument applies to P(t, x, U(x, δ)) and combining them we obtain

$$ P(t, x, \Gamma(x, \delta)) = O(t^\beta) $$

for any β > 0.
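The second-order positivity inequality used above can be checked directly for the Gaussian (Wiener) kernel, which is positive of order two; the particular x, δ and time values below are our own illustrative choices:

```python
import math

def Phi(z):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def P_interval(t, z, lo, hi):
    # P(t, z, (lo, hi)) for the Gaussian kernel
    s = math.sqrt(t)
    return Phi((hi - z) / s) - Phi((lo - z) / s)

def P_right(t, z, lo):
    # P(t, z, [lo, inf))
    return 1.0 - Phi((lo - z) / math.sqrt(t))

x, delta = 0.0, 1.0
y = x + delta / 2.0
N = (y - delta / 4.0, y + delta / 4.0)

checks = []
for t in (0.1, 0.3, 1.0):
    lhs = P_right(t, x, x + delta) * P_interval(t, y, *N)    # P(t,x,V)P(t,y,N)
    rhs = P_interval(t, x, *N) * P_right(t, y, x + delta)    # P(t,x,N)P(t,y,V)
    checks.append(lhs <= rhs)
```

The inequality is what transfers the small tail mass at x to the nearby point y, at the cost of the factor P(t, y, N) which tends to 1.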
Now suppose x < y < z, let E = (z, ∞) and let W be an open
interval containing y whose closure does not contain z. Then

$$ P(t, x, E)\, P(t, y, W) \le P(t, x, W)\, P(t, y, E) \le P(t, y, E) . $$

There is a positive $t_0 = t_0(y, E)$ such that P(t, y, W) ≥ 1/2 for $t \le t_0$
and therefore

$$ P(t, x, E) \le 2\, P(t, y, E) \quad \text{if } t \le t_0 . $$

Similarly if z < y < x and E = (−∞, z) then there is a positive $t_1 = t_1(y, E)$
such that

$$ P(t, x, E) \le 2\, P(t, y, E) \quad \text{if } t \le t_1 . $$
Now let F = [a, b] be a finite interval and δ > 0. Choose a finite
number of points $y_1, \dots, y_m$ such that every open subinterval of (a − δ,
b + δ) of length δ/2 contains at least one of the points $y_i$. Given
x ∈ F there are indices α, β such that

$$ x - \tfrac{1}{2}\delta < y_\alpha < x < y_\beta < x + \tfrac{1}{2}\delta . $$

Since $U(x, \delta) \subset U(y_\alpha, \delta/4)$ and $V(x, \delta) \subset V(y_\beta, \delta/4)$ we have

$$ P(t, x, U(x, \delta)) \le 2\, P(t, y_\alpha, U(y_\alpha, \delta/4)) , $$
$$ P(t, x, V(x, \delta)) \le 2\, P(t, y_\beta, V(y_\beta, \delta/4)) $$

for sufficiently small t. In fact these inequalities are valid if t is less
than the least of the numbers $t_0(y_i, V(y_i, \delta/4))$, $t_1(y_i, U(y_i, \delta/4))$, i = 1, 2, ..., m.
Since each of the finite collection of functions $P(t, y_i, V(y_i, \delta/4))$,
$P(t, y_i, U(y_i, \delta/4))$, i = 1, 2, ..., m is $o(t^\beta)$ for any β > 0, it follows at
once that for fixed δ > 0, β > 0 the relation $P(t, x, \Gamma(x, \delta)) = O(t^\beta)$ holds uniformly for
x ∈ F.
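For the Wiener kernel one has $P(t, x, \Gamma(x, \delta)) = 2\Phi(-\delta/\sqrt{t}) = \operatorname{erfc}(\delta/\sqrt{2t})$, which is indeed $o(t^\beta)$ for every β > 0, matching the conclusion of Theorem 5. A numerical illustration (δ = 1 and β = 3 are arbitrary choices):

```python
import math

def tail(t, delta):
    # P(t, x, Gamma(x, delta)) for the Wiener kernel = 2*Phi(-delta/sqrt(t))
    return math.erfc(delta / math.sqrt(2.0 * t))

beta = 3.0
ts = (0.2, 0.1, 0.05, 0.025)
ratios = [tail(t, 1.0) / t ** beta for t in ts]
```

The ratios collapse rapidly as t decreases, illustrating decay faster than any fixed power of t.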
9. Homogeneous processes. A process on the real line will be called
a homogeneous process if it is a stationary strong Markoff process with
right continuous path functions and its transition probability function
satisfies the homogeneity relation

$$ P(t, x + h, E) = P(t, x, E - h) $$

where E − h = {y ; y + h ∈ E}. This class of processes includes all the
processes with stationary independent increments and is slightly more
general. If X denotes the real line then for any homogeneous process
the function

$$ P(t, x, X) = P(t, 0, X) = a(t) $$

is independent of x. From the Chapman-Kolmogoroff equation a(t + s) =
a(t)a(s), and then because of monotonicity $a(t) = e^{-\beta t}$ where 0 ≤ β ≤ +∞.
The case β = 0 gives the processes with stationary independent increments.
The general homogeneous process is obtained by taking a process
with stationary independent increments and stopping it after a random
time T with $\Pr\{T > t\} = e^{-\beta t}$. The trivial case β = +∞ is excluded
in the remainder of this section.
There are two special kinds of homogeneous processes of particular
interest from our point of view. First the essentially determined ones
for which, if E is any open set,

$$ P(t, x, E) = \begin{cases} e^{-\beta t} & \text{if } x + vt \in E \\ 0 & \text{otherwise} \end{cases} $$

where v is a real constant and 0 ≤ β < ∞. And second, those derived
from the Wiener process, for which

$$ P(t, x, E) = \frac{e^{-\beta t}}{\sqrt{2\pi\sigma t}} \int_E \exp\left[ -\frac{(y - x - vt)^2}{2\sigma t} \right] dy $$

where v is a real and σ a positive constant and 0 ≤ β < ∞. These two
types are interesting because they have continuous path functions and
the transition probability functions are therefore totally positive. For
those derived from the Wiener process it is strictly totally positive, while
for the essentially determined ones it is not. The main result in this
section is the following.
THEOREM 6. If the transition probability function of a homogeneous
process is totally positive then the process is either an essentially
determined one or else one derived from the Wiener process.

Together with the results of § 5 this theorem shows that for homogeneous
processes total positivity is equivalent to continuity of the path
functions. At the close of this section we show by a different method
that for homogeneous processes positivity of order two is already equivalent
to continuity of the path functions. This assertion is probably
true not only for homogeneous processes but for arbitrary one-dimensional
strong Markoff processes with right continuous path functions. Although
we are not yet able to prove the result in this generality, we do have
a proof for the case of birth and death processes, which is published
separately [12].
Proof. Let P(t, x, E) be the transition probability function of a
totally positive homogeneous process and let $P(t, x, (-\infty, \infty)) = e^{-\beta t}$.
We form the function

$$ P_\varepsilon(t, x, E) = \int_{-\infty}^{\infty} e^{\beta t}\, P(t, y, E)\, q_\varepsilon(t, y - x)\, dy $$

where ε > 0 and $q_\varepsilon(t, x) = (2\pi\varepsilon t)^{-1/2} \exp[-x^2/(2\varepsilon t)]$. Then $P_\varepsilon$ is a
homogeneous strictly totally positive kernel for t > 0, it satisfies the
Chapman-Kolmogoroff equation, and is analytic in its dependence on x.
There is therefore a density function $p_\varepsilon(t, x)$ such that

$$ P_\varepsilon(t, x, E) = \int_E p_\varepsilon(t, y - x)\, dy . $$

For fixed ε, $p_\varepsilon$ is measurable in t, x and is analytic in x for fixed ε, t.
From the formula
$$ p_\varepsilon(t, y - x) = \lim_{h \to 0+} \frac{1}{h}\, P_\varepsilon(t, x, (y, y + h)) $$

we deduce that if $x_1 < x_2 < \cdots < x_n$ and $y_1 < y_2 < \cdots < y_n$ then
$\det p_\varepsilon(t, x_i - y_j) \ge 0$ for t > 0. Thus for fixed t and ε the function $p_\varepsilon(t, x)$
is a Pólya frequency function (we have $\int_{-\infty}^{\infty} p_\varepsilon(t, x)\, dx = 1$) in the sense
of Schoenberg [20], and the Laplace transform

$$ \frac{1}{\psi(s, t)} = \int_{-\infty}^{\infty} e^{-xs}\, p_\varepsilon(t, x)\, dx $$

converges in a strip −a < Re [s] < a with a > 0, and has there a representation
with

$$ \psi(s, t) = e^{-\gamma s^2 + \delta s} \prod_{\nu=1}^{\infty} (1 + \delta_\nu s)\, e^{-\delta_\nu s} $$

where γ ≥ 0, δ, $\delta_\nu$ are real, $0 < \gamma + \sum \delta_\nu^2 < \infty$. The constants γ, δ, $\delta_\nu$
will of course depend on t. From the Chapman-Kolmogoroff equation
we have $\psi(s, t) = [\psi(s, t/n)]^n$ where n is any positive integer. Consequently
any zero of ψ(s, t) must be of order at least n, and n being
arbitrary there can be no zeros. Hence

$$ \psi(s, t) = e^{\delta s - \gamma s^2}, \qquad \gamma > 0 . $$
Again using the Chapman-Kolmogoroff equation in the form ψ(s, t + τ) =
ψ(s, t)ψ(s, τ) we deduce that δ = at, $\gamma = b^2 t$ where a, b are real and
independent of t. Now if t > 0 is fixed, $F(x) = e^{\beta t}\, P(t, x, (0, \infty))$ is non-decreasing,
F(−∞) = 0, F(+∞) = 1, that is F is a distribution function,
and the above result shows that the convolution of F with the normal
density $q_\varepsilon(t, \cdot)$ is a distribution of normal type. By a well known
theorem [17], F is also of normal type and we have

$$ \int_{-\infty}^{\infty} e^{-sx}\, dF(x) = e^{-ats + (b^2 - \varepsilon/2)\, t s^2} $$

with $b^2 - \varepsilon/2 \ge 0$. If $b^2 - \varepsilon/2 = 0$ the given homogeneous process is an essentially
determined one, while if $b^2 - \varepsilon/2 > 0$ it is one derived from the
Wiener process.
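The Wiener-derived kernels above do form a semigroup, which the argument uses repeatedly. As a small numerical check (the parameter values β = 0.5, v = 0.3, σ = 1 are our own choices), the defective density $e^{-\beta t}$ times a Gaussian satisfies the Chapman-Kolmogoroff (convolution) identity:

```python
import math

beta, v, sigma = 0.5, 0.3, 1.0

def dens(t, x):
    # defective transition density: Wiener process with drift v, killed at rate beta
    return math.exp(-beta * t) * \
        math.exp(-(x - v * t) ** 2 / (2.0 * sigma * t)) / math.sqrt(2.0 * math.pi * sigma * t)

t, s, x = 0.4, 0.4, 0.5
h = 0.01
grid = [-8.0 + i * h for i in range(1601)]
conv = h * sum(dens(t, y) * dens(s, x - y) for y in grid)   # Riemann-sum convolution
direct = dens(t + s, x)
```

The grid truncation at ±8 is harmless since the Gaussian factors are negligible there.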
Another approach to the problem of determining when homogeneous
processes, or equivalently infinitely divisible processes, are totally positive
is based on the Levy-Khintchine representation. We consider an infinitely
divisible process x(t), properly centered, with no fixed points of
discontinuity, whose characteristic function φ(t, s) has an expression

$$ (1) \qquad \log \varphi(t, s) = t \int_{-\infty}^{\infty} \left( e^{isx} - 1 - \frac{isx}{1 + x^2} \right) \frac{1 + x^2}{x^2}\, dG(x) = -t\, \psi(s) . $$

With the aid of (1) we are able to establish
$$ (2) \qquad \lim_{t \to 0} \frac{1}{t} \Pr\{|x(t) - x(0)| > \lambda\} = \int_{|x| > \lambda} \frac{1 + x^2}{x^2}\, dG(x) $$

when λ and −λ are continuity points of G. This limit relation is essentially
known, but for lack of any available specific reference we sketch
a proof.

The proof consists of defining

$$ H(t, \lambda) = \begin{cases} \dfrac{\Pr\{x(t) - x(0) \le \lambda\} - 1}{t} & \text{for } \lambda \ge 0 \\[2mm] \dfrac{\Pr\{x(t) - x(0) \le \lambda\}}{t} & \text{for } \lambda < 0 \end{cases} $$

and forming the Fourier-Stieltjes transform of H, which reduces to
(φ(t, s) − 1)/t. This clearly converges pointwise as t → 0 to −ψ(s). Invoking
the Levy convergence criteria following comparison with (1) establishes (2).
An alternative proof of (2) can be based on verifying the validity
of (2) first for the case of a finite composition of independent Poisson
processes and afterwards passing to a limit to obtain the general infinitely
divisible process.

The truth of (2) also follows by exploiting the properties of the
infinitely divisible process $U_\lambda(t)$ which counts the number of jumps of
magnitude exceeding λ that the process x(t) executes in time t. (See
[5], page 424.)
Because of (2) and Theorem 5, we see that x(t) is totally positive
of order 2 if and only if

$$ \int_{|x| > \lambda} \frac{1 + x^2}{x^2}\, dG(x) = 0 \quad \text{for all } \lambda > 0 . $$

Hence the only totally positive infinitely divisible process is the Wiener process, except
for a drift factor.
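Relation (2) is easy to check for a compound Poisson process all of whose jumps have a fixed size exceeding λ; then the probability on the left of (2) is $(1 - e^{-rt})$, and dividing by t recovers the total jump rate r in the limit. The rate, jump size and λ below are arbitrary choices:

```python
import math

rate, jump, lam = 2.0, 1.0, 0.5   # Poisson intensity, common jump size, threshold lam < jump

def exceed_prob(t):
    # every jump has size 1 > lam, so |x(t) - x(0)| > lam iff at least one jump occurred
    return 1.0 - math.exp(-rate * t)

vals = [exceed_prob(t) / t for t in (1e-1, 1e-2, 1e-3, 1e-4)]
```

For such a process the right side of (2) is the total mass of the (weighted) jump measure, here the rate r; a nonzero limit for some λ > 0 is exactly what total positivity of order 2 rules out.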
10. Examples. In this section we present some examples of totally
positive semigroups of matrices and kernels. These matrices and kernels
are fundamental solutions of parabolic differential equations (or differential
difference equations).

In generating examples of totally positive kernels it is useful to note
that if P(t, x, E) represents a totally positive kernel and P(t, x, E)
possesses a continuous density p(t, x, y) with respect to a σ-finite measure
μ, then p(t, x, y) is totally positive in the sense that

$$ \det\, p(t, x_i, y_j) \ge 0 $$

where $x_1 < x_2 < \cdots < x_n$ and $y_1 < y_2 < \cdots < y_n$. The proof consists of
selecting $E_1 < E_2 < \cdots < E_n$, where $E_i$ is a sufficiently small open set
enclosing $y_i$, and computing

$$ \lim \frac{\det\, P(t, x_i, E_j)}{\mu(E_1)\, \mu(E_2) \cdots \mu(E_n)} = \det\, p(t, x_i, y_j), $$

the limit taken as $\mu(E_i)$ tends to 0 for all i.
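The passage from a totally positive kernel to a totally positive density can be checked numerically for the Gaussian (heat) kernel: every minor with increasing arguments comes out strictly positive (the sample points below are arbitrary):

```python
import math
import itertools

def p(t, x, y):
    # Gaussian (heat kernel) density
    return math.exp(-(y - x) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

xs = [0.0, 0.7, 1.5]
ys = [-0.2, 0.6, 1.9]
M = [[p(1.0, xi, yj) for yj in ys] for xi in xs]

minors_positive = all(
    det([[M[i][j] for j in cols] for i in rows]) > 0
    for r in (1, 2, 3)
    for rows in itertools.combinations(range(3), r)
    for cols in itertools.combinations(range(3), r)
)
```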
Ex. (i) The analytic properties of birth and death matrices have
already been investigated in detail by the authors [14]. In Theorem 20
of that paper it is shown that with every solvable Stieltjes moment
problem there is associated one or more strictly totally positive semigroups
of matrices. A few examples of interest are recorded:

(a) Let $L_n^\alpha(x)$ be the usual Laguerre polynomials, normalized so that
$L_n^\alpha(0) = \binom{n + \alpha}{n}$, and let P(t) be the infinite matrix with elements

$$ P_{nm}(t) = \int_0^\infty e^{-xt}\, L_n^\alpha(x)\, L_m^\alpha(x)\, x^\alpha e^{-x}\, dx . $$

Then P(t) is strictly totally positive for t > 0, α > −1.

(b) Let $c_n(x, a)$ be the Poisson-Charlier polynomials [15] and P(t) the
matrix with elements

$$ P_{nm}(t) = \sum_{k=0}^{\infty} e^{-kt}\, c_n(k, a)\, c_m(k, a)\, \frac{e^{-a} a^k}{k!} . $$

Then P(t) is strictly totally positive for t > 0, a > 0.
Ex. (ii) The Wiener process on the real line is a strong Markoff
process with continuous path functions. The direct product of n copies
of this process is the n-dimensional Wiener process, which is known to
be a strong Markoff process. Therefore the kernel

$$ P(t, x, E) = \frac{1}{\sqrt{2\pi t}} \int_E \exp\left[ -\frac{(y - x)^2}{2t} \right] dy $$

is totally positive for t > 0 (strictly, since P(t, x, E) > 0 when E is an
open set).
Ex. (iii) If $\Gamma(t) = (\Gamma_1(t), \dots, \Gamma_n(t))$ is the n-dimensional Wiener
process and X(t) is its radial part, i.e.,

$$ X(t) = [\Gamma_1^2(t) + \cdots + \Gamma_n^2(t)]^{1/2}, $$

then X(t) is a process on 0 ≤ x < ∞ with continuous path functions.
These processes have been studied by Levy [16], Spitzer [22] and others.
The corresponding diffusion equation and transition function are

$$ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{2\gamma}{x} \frac{\partial u}{\partial x} , $$

$$ P(t, x, E) = \int_E p(t, x, y)\, d\mu(y) , \qquad p(t, x, y) = \int_0^\infty e^{-a^2 t}\, T(ax)\, T(ay)\, d\mu(a) , $$

$$ d\mu(y) = \frac{y^{2\gamma}\, dy}{2^{\gamma + 1/2}\, \Gamma(\gamma + 3/2)} , $$

where T(x) is a constant multiple of $x^{1/2 - \gamma} J_{\gamma - 1/2}(x)$, normalized so that
T(0) = 1, and J stands for the usual Bessel function.

These formulas make sense for arbitrary γ > 0 and have been
studied by Bochner [2]. The density may be written in the form

$$ p(t, x, y) = (2t)^{-(\gamma + 1/2)} \exp\left( -\frac{x^2}{4t} \right) \exp\left( -\frac{y^2}{4t} \right) T\!\left( \frac{ixy}{2t} \right) . $$

Now T(ixy/2t) is a power series with positive coefficients; in fact

$$ T\!\left( \frac{ixy}{2t} \right) = \sum_{k=0}^{\infty} c_k \left( \frac{xy}{2t} \right)^{2k} = \int_0^\infty \left( \frac{xy}{2t} \right)^s d\sigma(s), \qquad c_k > 0, $$

and σ(s) is an increasing step function whose jumps occur at the even
integers. Let $0 \le x_1 < x_2 < \cdots < x_n$ and $0 \le y_1 < y_2 < \cdots < y_n$. If
$0 \le s_1 < s_2 < \cdots < s_n$ then the generalized Vandermonde determinant
$\det(x_i^{s_j})$ is known to be non-negative, positive if $x_1 > 0$. From the formula

$$ \det T\!\left( \frac{i x_i y_j}{2t} \right)
= \int \cdots \int_{0 \le s_1 < s_2 < \cdots < s_n < \infty}
\det\left( x_i^{s_j} \right)\, \det\left( \left( \frac{y_i}{2t} \right)^{s_j} \right)
d\sigma(s_1)\, d\sigma(s_2) \cdots d\sigma(s_n) $$

it readily follows that T(ixy/2t), and hence also p(t, x, y), is strictly totally
positive.
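The generalized Vandermonde determinant $\det(x_i^{s_j})$ invoked above is indeed positive for $0 < x_1 < \cdots < x_n$ and increasing real exponents; a quick numerical instance (the particular x's and s's are arbitrary):

```python
def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

xs = [0.5, 1.0, 2.0]          # 0 < x1 < x2 < x3
ss = [0.0, 1.3, 2.7]          # 0 <= s1 < s2 < s3, real exponents allowed
M = [[x ** s for s in ss] for x in xs]
d = det3(M)
```

For integer exponents this reduces to a positive multiple of the classical Vandermonde determinant; the point here is that real increasing exponents work as well.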
Ex. (iv) If we consider Brownian motion on the circle, the transition
density function has the form

$$ p(t, \theta, \psi) = 1 + 2 \sum_{n=1}^{\infty} e^{-4\pi^2 n^2 t} \cos 2\pi n(\theta - \psi) $$

where θ and ψ traverse the unit interval. This formula may be derived
as the fundamental solution of the heat equation on the circle. In this
case the hypotheses of Theorem 1 are fulfilled and we deduce that all
odd order determinants of p(t, θ, ψ) are non-negative (actually strictly
positive), viz.

If $0 \le \theta_1 < \theta_2 < \cdots < \theta_{2n+1} < 1$ and $0 \le \psi_1 < \psi_2 < \cdots < \psi_{2n+1} < 1$
then $\det p(t, \theta_i, \psi_j) > 0$.
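A numerical check of assertion (iv) with a truncated Fourier series: an odd-order (here 3×3) determinant comes out strictly positive, while an even-order (2×2) determinant can be negative, which is why only odd orders appear in the statement. The points and time value below are arbitrary choices:

```python
import math

def p_circle(t, theta, psi, terms=60):
    # heat kernel on the unit-circumference circle (truncated Fourier series)
    s = 1.0
    for n in range(1, terms + 1):
        s += 2.0 * math.exp(-4.0 * math.pi ** 2 * n ** 2 * t) * \
             math.cos(2.0 * math.pi * n * (theta - psi))
    return s

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

t = 0.02
thetas3, psis3 = [0.0, 0.3, 0.6], [0.1, 0.45, 0.8]
M3 = [[p_circle(t, a, b) for b in psis3] for a in thetas3]
d3 = det(M3)                               # odd order: strictly positive

# even-order determinants need not be non-negative: a 2x2 counterexample
thetas2, psis2 = [0.0, 0.5], [0.3, 0.8]
M2 = [[p_circle(t, a, b) for b in psis2] for a in thetas2]
d2 = det(M2)                               # negative here
```

The 2×2 counterexample works because the kernel depends only on circular distance and both off-diagonal entries sit at the smaller distance.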
1. R. M. Blumenthal, An extended Markov property, Trans. Amer. Math. Soc., 85 (1957), 52-72.
2. S. Bochner, Sturm-Liouville and heat equations etc., Proc. Conf. Diff. Eqns., Maryland, (1955), 23-48.
3. K. L. Chung, Foundations of the theory of continuous parameter Markoff chains, Proc. Third Berkeley Symposium, 2 (1956), 29-40.
4. K. L. Chung, On a basic property of Markov chains, Annals of Math., 68 (1958), 126-149.
5. J. L. Doob, Stochastic processes, New York, 1953.
6. E. B. Dynkin, Infinitesimal operators of Markoff processes, Theory of Probability and its Applications, 1 (1956), 38-60 (in Russian).
7. E. B. Dynkin and A. Juskevitch, Strong Markov processes, Theory of Probability and its Applications, 1 (1956), 149-155 (in Russian).
8. F. Gantmacher and M. Krein, Oscillatory matrices and kernels and small vibrations of mechanical systems (in Russian), 2nd ed., Moscow, 1950.
9. I. I. Hirschman and D. V. Widder, The convolution transform, Princeton, 1955.
10. G. A. Hunt, Some theorems concerning Brownian motion, Trans. Amer. Math. Soc., 81 (1956), 294-319.
11. G. A. Hunt, Markoff processes and potentials, Illinois Jour. Math., 1 (1957), 44-93.
12. S. Karlin and J. McGregor, A characterization of birth and death processes, Proc. Nat. Acad. Sci., (1959), 375-379.
13. S. Karlin, Mathematical methods and theory in games, programming and economics, Addison-Wesley, to appear.
14. S. Karlin and J. McGregor, The differential equations of birth-and-death processes and the Stieltjes moment problem, Trans. Amer. Math. Soc., 85 (1957), 489-546.
15. S. Karlin and J. McGregor, Many server queueing processes with Poisson input and exponential service times, Pacific J. Math., 8 (1958), 87-118.
16. P. Levy, Processus stochastiques et mouvement brownien, Paris, 1948.
17. M. Loeve, Probability theory, Van Nostrand, 1955 (p. 271, Theorem A).
18. C. Loewner, On totally positive matrices, Math. Zeit., 63 (1955), 338-340.
19. D. Ray, Stationary Markov processes with continuous path functions, Trans. Amer. Math. Soc., 82 (1956), 452-493.
20. I. J. Schoenberg, On Pólya frequency functions, Jour. d'Anal. Math., 1 (1951), 331-374.
21. I. J. Schoenberg, On smoothing operations and their generating functions, Bull. Amer. Math. Soc., 59 (1953), 199-230.
22. F. Spitzer, Some theorems concerning 2-dimensional Brownian motion, 87 (1958),
-Time Trade
- IEEE Transactions on Computers , 1986
"... In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a
manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on th ..."
Cited by 2927 (46 self)
Add to MetaCart
In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner
similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case,
a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity
proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to
problems in logic design verification that demonstrate the practicality of our approach. Index Terms: Boolean functions, symbolic manipulation, binary decision diagrams, logic design verification 1.
- IEEE Transactions on Computers , 1998
"... This paper presents lower bound results on Boolean function complexity under two different models. The first is an abstraction of tradeoffs between chip area and speed in very large scale
integrated (VLSI) circuits. The second is the ordered binary decision diagram (OBDD) representation used as a da ..."
Cited by 233 (10 self)
Add to MetaCart
This paper presents lower bound results on Boolean function complexity under two different models. The first is an abstraction of tradeoffs between chip area and speed in very large scale integrated
(VLSI) circuits. The second is the ordered binary decision diagram (OBDD) representation used as a data structure for symbolically representing and manipulating Boolean functions. These lower bounds
demonstrate the fundamental limitations of VLSI as an implementation medium, and OBDDs as a data structure. They also lend insight into what properties of a Boolean function lead to high complexity
under these models. Related techniques can be...
- Lectures on Parallel Computation , 1993
"... A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of
various aspects of this work. A long, but by no means complete, bibliography is given. 1. Introduction Turing ..."
Cited by 77 (5 self)
Add to MetaCart
A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of various
aspects of this work. A long, but by no means complete, bibliography is given. 1. Introduction Turing [365] demonstrated that, in principle, a single general purpose sequential machine could be
designed which would be capable of efficiently performing any computation which could be performed by a special purpose sequential machine. The importance of this universality result for subsequent
practical developments in computing cannot be overstated. It showed that, for a given computational problem, the additional efficiency advantages which could be gained by designing a special purpose
sequential machine for that problem would not be great. Around 1944, von Neumann produced a proposal [66, 389] for a general purpose storedprogram sequential computer which captured the fundamental
principles of...
"... Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing,
Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and oper ..."
Cited by 57 (7 self)
Add to MetaCart
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post,
Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a
great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed
to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter computational complexity. The
focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey
Ullman. This influential book led to the creation of many languagecentered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book
and the interests of theoreticians of the 1960s and early 1970s. Although
- Journal of the ACM , 1981
"... ABSTRACT The problem of performing multtphcaUon of n-bit binary numbers on a chip is considered Let A denote the ch~p area and T the time reqmred to perform mult~phcation. By using a model of
computation which is a realistic approx~mauon to current and anucipated LSI or VLSI technology, ~t is shown ..."
Cited by 28 (1 self)
Add to MetaCart
ABSTRACT The problem of performing multtphcaUon of n-bit binary numbers on a chip is considered Let A denote the ch~p area and T the time reqmred to perform mult~phcation. By using a model of
computation which is a realistic approx~mauon to current and anucipated LSI or VLSI technology, ~t is shown that A T 2. for all a ~ [0, 1], where A0 and To are posmve constants which depend on the
technology but are mdependent of n. The exponent 1 + a is the best possible A consequence of this result is that binary multiphcatlon is "harder " than binary addmon More precisely, ff
(AT2~)M(n) and (AT2~)A(n) denote the mmimum area-time complexity for n-b~t binary multiphcauon and addmon, respectively, then (AT2~)M(n) _ 1 f~(nl-a) for 0 _< a--< na for ~<a_<l for°>, ( = fi(nl/2)
for all a _> 0).
- J. Lightwave Technology , 1994
Cited by 13 (4 self)
can be avoided by ensuring that a switch is not used by two connections simultaneously. In order to support crosstalk-free communications among N inputs and N outputs, a space domain approach dilates an NxN network into one that is essentially equivalent to a 2Nx2N network. Path conflicts, however, may still exist in dilated networks. This paper proposes a time domain approach for avoiding crosstalk. Such an approach can be regarded as “dilating” a network in time, instead of space. More specifically, the connections that need to use the same switch are established during different time slots. This way, path conflicts are automatically avoided. The time domain dilation is useful for overcoming the limits on the network size while utilizing the high bandwidth of optical interconnects. We study the set of permutations whose crosstalk-free connections can be established in just two time slots using the time domain approach. While the space domain approach trades hardware complexity for crosstalk-free communications, the time domain approach trades time complexity. We compare the proposed time domain to the space domain approach by analyzing the tradeoffs involved in these two approaches.
Abstract-This paper surveys nine designs for VLSI circuits that compute N-element Fourier transforms. The largest of the designs requires O(N^2 log N) units of silicon area; it can start a new Fourier transform every O(log N) time units. The smallest designs have about 1/Nth of this throughput, but they require only 1/Nth as much area. The designs exhibit an area-time tradeoff: the smaller ones are slower, for two reasons. First, they may have fewer arithmetic units and thus less parallelism. Second, their arithmetic units may be interconnected in a pattern that is less efficient but more compact. The optimality of several of the designs is immediate, since they achieve the limiting area · time^2 performance of Ω(N^2 log^2 N). Index Terms-Algorithms implemented in hardware, area-time complexity, computational complexity, FFT, Fourier transform, mesh-connected computers, parallel algorithms, shuffle-exchange
A comparison of projection pursuit and neural network regression modeling
, 1994
Cited by 65 (1 self)
We studied and compared two types of connectionist learning methods for model-free regression problems in this paper. One is the popular back-propagation learning (BPL) well known in the artificial
neural networks literature; the other is the projection pursuit learning (PPL) emerged in recent years in the statistical estimation literature. Both the BPL and the PPL are based on projections of
the data in directions determined from interconnection weights. However, unlike the use of fixed nonlinear activations (usually sigmoidal) for the hidden neurons in BPL, the PPL systematically
approximates the unknown nonlinear activations. Moreover, the BPL estimates all the weights simultaneously at each iteration, while the PPL estimates the weights cyclically (neuron-by-neuron and
layer-by-layer) at each iteration. Although the BPL and the PPL have comparable training speed when based on a Gauss-Newton optimization algorithm, the PPL proves more parsimonious in that the PPL
requires a fewer hi...
- IEEE Transactions on Neural Networks , 1995
Cited by 20 (3 self)
Abstract-This paper presents a polynomial connectionist network called ridge polynomial network (RPN) that can uniformly approximate any continuous function on a compact set in multidimensional input space R^d, with arbitrary degree of accuracy. This network provides a more efficient and regular architecture compared to ordinary higher-order feedforward networks while maintaining their fast learning property. The ridge polynomial network is a generalization of the pi-sigma network and uses a special form of ridge polynomials, by which a function f: R^d -> R is approximated as in [17], [25]. It provides a natural mechanism for incremental network growth. Simulation results on a surface fitting problem, the classification of high-dimensional data and the realization of a multivariate polynomial function are given to highlight the network. In particular, a constructive learning algorithm developed for the network is shown to yield smooth generalization and steady learning.
- IEEE Control Systems Magazine , 1997
Cited by 17 (6 self)
In this article, we describe and develop methodologies for modeling and transferring human control strategy (HCS). This research has potential application in a variety of areas such as the Intelligent Vehicle Highway System (IVHS), human-machine interfacing, real-time training, space telerobotics, and agile manufacturing. We specifically address the following issues: (1) how to efficiently model human control strategy through learning cascade neural networks, (2) how to select state inputs in order to generate reliable models, (3) how to validate the computed models through an independent, Hidden Markov Model-based procedure, and (4) how to effectively transfer human control strategy. We have implemented this approach experimentally in the real-time control of a human driving simulator, and are working to transfer these methodologies for the control of an autonomous vehicle and a mobile robot. In providing a framework for abstracting computational models of human skill, we expect to facilitate analysis of human control, the development of humanlike intelligent machines, improved human-robot coordination, and the transfer of skill from one human to another.
, 1996
Cited by 11 (0 self)
This paper examines the implementation of projection pursuit regression (PPR) in the context of machine learning and neural networks. We propose a parametric PPR with direct training which achieves
improved training speed and accuracy when compared with nonparametric PPR. Analysis and simulations are done for heuristics to choose good initial projection directions. A comparison of a projection
pursuit learning network with a one hidden layer sigmoidal neural network shows why grouping hidden units in a projection pursuit learning network is useful. Learning robot arm inverse dynamics is
used as an example problem.
- IEEE Trans. Neural Networks , 1996
Cited by 6 (1 self)
In a regression problem, one is given a d- dimensional random vector X, the components of which are called predictor variables, and a random variable, Y , called response. A regression surface
describes a general relationship between variables X and Y. One nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR).
In this method, the regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL) proposed by
Hwang et al. formulates PPR using a two-layer feedforward neural network. One of the main differences between PPR and PPL is that the smoothers in PPR are nonparametric, whereas those in PPL are
based on Hermite functions of some predefined highest order R. While the convergence property of PPR is already known, that for PPL has not been thoroughly studied. In this paper, we demonstrate that
PPL networks... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1348796","timestamp":"2014-04-17T16:30:19Z","content_type":null,"content_length":"24939","record_id":"<urn:uuid:52d953b4-876e-4564-8009-483d4a101743>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
Second Order DE: Nonlinear Homogeneous
Here, for example, since sin(x) = x - (1/6)x^3 + ..., a first approximation would be to replace sin(x) by x: mx'' + cx' - kx = 0.
A slightly more sophisticated method is "quadrature": let u = x', so that x'' = du/dt. By the chain rule, du/dt = (du/dx)(dx/dt) = u (du/dx), so the equation becomes m u (du/dx) + cu - kx = 0. That is now a first-order equation for u as a function of x. The problem typically is that even after you have found u, integrating x' = u to find x as a function of t in closed form may be impossible (without the damping, this gives elliptic integrals).
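As a quick numerical illustration of the quadrature idea (a sketch, not from the thread; m = k = 1 and c = 0 are arbitrary illustrative values): with c = 0, m u (du/dx) = kx integrates to the first integral u^2 - (k/m) x^2 = constant, and a direct simulation of the second-order equation should preserve it.

```python
# Verify the first integral u^2 - (k/m) x^2 obtained from the quadrature
# substitution u = x'.  Illustrative values (not from the thread): m = k = 1, c = 0.
m, k = 1.0, 1.0

def deriv(x, u):
    # x' = u and u' = (k/m) x, i.e. m x'' - k x = 0 (the c = 0 case)
    return u, (k / m) * x

def rk4_step(x, u, h):
    # Classical fourth-order Runge-Kutta step for the first-order system.
    k1 = deriv(x, u)
    k2 = deriv(x + h / 2 * k1[0], u + h / 2 * k1[1])
    k3 = deriv(x + h / 2 * k2[0], u + h / 2 * k2[1])
    k4 = deriv(x + h * k3[0], u + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    u += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, u

x, u = 1.0, 0.5
invariant0 = u * u - (k / m) * x * x
for _ in range(1000):
    x, u = rk4_step(x, u, 1e-3)
drift = abs((u * u - (k / m) * x * x) - invariant0)
print(drift)  # essentially zero: the first integral is conserved
```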
MathGroup Archive: May 2012 [00244]
[Date Index] [Thread Index] [Author Index]
Re: Maximisation question
• To: mathgroup at smc.vnet.net
• Subject: [mg126562] Re: Maximisation question
• From: Bill Rowe <readnews at sbcglobal.net>
• Date: Sat, 19 May 2012 05:45:13 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
On 5/18/12 at 5:23 AM, jack.j.jepper at googlemail.com (J.Jack.J.)
>For any integer j, suppose we have a function f(j). Then how would I
>find the highest f(i) such that 100 <= f(i) <0 ? With thanks in
Use NMaximize. For example:
In[14]:= NMaximize[{4 x - .57 x^2, x \[Element] Integers}, {x}]
Out[14]= {6.88,{x->4}}
and by doing
In[15]:= Solve[D[4 x - .57 x^2, x] == 0, x]
Out[15]= {{x->3.50877}}
you can see the true maximum is not an integer | {"url":"http://forums.wolfram.com/mathgroup/archive/2012/May/msg00244.html","timestamp":"2014-04-16T10:23:16Z","content_type":null,"content_length":"25442","record_id":"<urn:uuid:b64bde7d-e8fe-460a-ac84-d6ac32810455>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
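The same check is easy to script outside Mathematica. A sketch in Python (not from the thread): since f(x) = 4x - 0.57x^2 is concave, an integer maximizer must be one of the two integers bracketing the continuous optimum.

```python
# f is concave, so the integer maximum lies at floor(x*) or floor(x*) + 1,
# where x* = 4 / (2 * 0.57) is the continuous optimum found by Solve above.
f = lambda x: 4 * x - 0.57 * x ** 2

x_star = 4 / (2 * 0.57)                      # ~3.50877
candidates = (int(x_star), int(x_star) + 1)  # 3 and 4
best = max(candidates, key=f)
print(best)  # 4, matching NMaximize's {6.88, {x -> 4}}
```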
Jumping Cricket
Copyright © University of Cambridge. All rights reserved.
'Jumping Cricket' printed from http://nrich.maths.org/
El Crico the cricket has to cross a square patio to get home. He likes to jump along the lines made by the sides of the tiles.
He can jump the length of one tile. He can jump the length of two tiles. If he tries hard he can even jump the length of three tiles.
Here's one path where he could make four jumps to get home - 1, 2, 2, 1
Can you find a path that would get El Crico home in three jumps?
Can you find all the paths of three jumps? | {"url":"http://nrich.maths.org/172/index?nomenu=1","timestamp":"2014-04-17T12:48:26Z","content_type":null,"content_length":"3600","record_id":"<urn:uuid:f341faae-7d50-45e5-bbc1-bdb91ddf700a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
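A brute-force enumeration sketch (the assumption, inferred from the 1, 2, 2, 1 example, is that home lies six tile-lengths away and we only count how that distance is split across ordered jumps of length 1, 2 or 3):

```python
from itertools import product

# Ordered triples of jump lengths from {1, 2, 3} covering 6 tile-lengths.
paths = [p for p in product((1, 2, 3), repeat=3) if sum(p) == 6]
print(len(paths), paths)
# 7 paths: the six orderings of (1, 2, 3) plus (2, 2, 2)
```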
Bijection of an infinite series
November 30th 2009, 08:51 PM #1
Nov 2009
Bijection of an infinite series
So here is the question, that I am just really stuck on :
Problem. Let x(n) = (-1)^n / n, and let A be a real number. Prove that
there exists a 1-to-1 mapping (bijection) p : N --> N such that
the infinite series with generic term x(p(n)) converges and the sum of
this series is equal to A.
ok, So I understand that as n approaches infinity, this series is really getting to zero from both sides. understood. so we need to prove it's an injection and a surjection. for an injection, do
you just have to prove the basic fact that if f(x_1) = f(x_2), then x_1=x_2 ?? Now with the surjection, I am just confused in general, and don't even know how to approach it.
Now with this term A, we want to prove that the series converges (so would i just show the limit, as stated above? or how would i do this with a real analysis approach??) and also, the sum of
this series is what? how can you conclude that its sum is A??
Thank you for the help. Just wanted to make clear what I understand and don't understand, etc.
What you are asking is exactly how to prove a particular case of the Riemann rearrangement theorem: if a series $\sum_n a_n$ is convergent but the series of its absolute values diverges,
then, for each $A\in\mathbb{R}$ there exists a permutation $\pi:\mathbb{N}\rightarrow\mathbb{N}$ such that $\sum_na_{\pi(n)}=A$.
In this especific case, $a_n=\frac{(-1)^n}{n}$, you have that both $\sum_{n}a_{2n}$ and $\sum_{n}a_{2n-1}$ are divergent (in the general case you would have to take the positive and the negative
terms). Now, if $A$ is positive choose $n_0$ such that $P_1:=\sum_{n=1}^{n_0}a_{2n}>A$. Now define $\pi(n)=2n$ for $n\leq n_0$. Again since $\sum_{n}a_{2n-1}$ is divergent to $-\infty$ you can
take $n_1$ such that $N_1:=P_1-\sum_{n=1}^{n_1}a_{2n-1}<A$. Take $\pi(n_0+n)=2n-1$ for $n=1,\ldots,n_1$. Now you use
that $\sum_{n=n_0+1}^{\infty} a_{2n}$ and
$\sum_{n=n_1+1}^{\infty} a_{2n-1}$ are divergent to "move past A and back before A" recursively. The key to showing that the process converges is that $a_n$ goes to 0: passing from before A to after A, or the converse, happens with as small a step as desired after a sufficient number of repetitions.
This is just a sketch of Riemann's argument adapted to your case; the idea is simple, but writing it with precision is a little bit harder.
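The overshoot/undershoot process sketched above is easy to simulate (a sketch, not part of the original thread; the target A = 0.3 is an arbitrary illustrative value):

```python
# Greedy rearrangement of a_n = (-1)^n / n toward a target A: take
# positive terms a_{2p} = 1/(2p) while at or below A, and negative
# terms a_{2q-1} = -1/(2q-1) while above it.
A = 0.3              # arbitrary illustrative target
s = 0.0
p = q = 0            # how many positive / negative terms have been used
for _ in range(200000):
    if s <= A:
        p += 1
        s += 1.0 / (2 * p)         # a_{2p}
    else:
        q += 1
        s -= 1.0 / (2 * q - 1)     # a_{2q-1}
print(s)  # partial sums of the rearranged series home in on A
```

Because the terms tend to 0, each crossing of A overshoots by less and less, which is exactly the convergence argument above.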
Is that really what the problem says, just find a bijection from $\{\frac{(-1)^n}{n}\}$ to a series that converges to A?
Here is what I would do: First find a series that converges to A. Since the geometric series $\sum_{n=0}^\infty r^n$ converges to $\frac{1}{1-r}$, set $\frac{1}{1-r}= A$ and solve for r:
$r= \frac{A-1}{A}$ so the series $\sum_{n=0}^\infty \left(\frac{A-1}{A}\right)^n$ converges to A. Now just do the obvious bijection: map $\frac{(-1)^n}{n}$ to $\left(\frac{A-1}{A}\right)^n$.
Surely, there is more to it than that!
December 1st 2009, 12:17 AM #2
Jun 2009
December 1st 2009, 03:34 AM #3
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/differential-geometry/117751-bijection-infinite-series.html","timestamp":"2014-04-18T01:19:39Z","content_type":null,"content_length":"42716","record_id":"<urn:uuid:c3173fe0-0efc-4c41-bbbe-5f260dd6485e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
Functors on rigid tensor categories.
This is a question about the proof of proposition 1.13 in Deligne and Milne, Tannakian Categories. Let $C,C'$ be two rigid tensor categories and $F,G : C \rightarrow C'$ be two tensor functors. Let
$u : F \rightarrow G$ be a morphism of functors. Define the morphism $v : G \rightarrow F$ by $$ v(X) : G(X) \simeq G(X^\vee)^\vee \xrightarrow{{}^t u(X^\vee)} F(X^\vee)^\vee \simeq F(X).$$
Why is $v$ the inverse of $u$ ?
tannakian-category ct.category-theory
This isn't a well-posed question: is $v$ a functor or a transformation between functors $F$ and $G$? – Buschi Sergio Dec 11 '12 at 18:39
5 This is a job for... String Diagram Man! :-) – Todd Trimble♦ Dec 11 '12 at 21:04
Nitpick: instead of saying "morphism of functors", you should say "morphism of tensor functors" (there's a distinction). – Todd Trimble♦ Dec 12 '12 at 0:29
1 Answer
Okay, here is a link to my web at the nLab which provides a diagrammatic proof (for one of the two equations that must be verified; the other equation is established similarly).
Of course, the gigantic diagram which you will find did not spring from my head like Pallas Athena. It was assembled by first studying a simple string diagram proof, which unfortunately I can't draw for you in a convenient way. The big commutative diagram which results only looks intimidating.
I have added a condensed version of the diagram which might be easier to comprehend, together with a similar diagram for the other equation $v(X) \circ u(X) = 1$. Also added is a
generalization of this result to 2-categories. – Todd Trimble♦ Dec 13 '12 at 16:13
How It Works
InstaEDU makes it easy to find a great tutor and connect instantly
• Connect with the perfect tutor, anytime 24/7
Never get stuck on homework again. InstaEDU has awesome tutors instantly available around the clock.
• Work together with the best lesson tools
Our lesson space lets you use video, audio or text. Upload any assignment and work through it together.
Try it free—then lock in a super low rate
Anyone can try InstaEDU for up to 2 hours for free. After that, rates start at just 40¢/minute. | {"url":"http://instaedu.com/Geometry-online-tutoring/","timestamp":"2014-04-19T17:02:19Z","content_type":null,"content_length":"202894","record_id":"<urn:uuid:cab8e7d0-f80a-4428-85ac-5c6d7077c796>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fraction Calculator Help
No Profile Picture
Registered User
Devshed Newbie (0 - 499 posts)
Join Date
Feb 2013
Rep Power
Fraction Calculator Help
Hello community, I have been using Python for 4 months now and learning something new about it everyday. I do most of my actual coding on Android devices for Android devices. I'm the first to
admit that I'm not as smart as some people and trying to learn something like this is very confusing at times. I like to write simple programs that I can actually use on a semi-daily basis like
calculation programs. I am not a student looking for answers and to be honest, I wish I did stay in college. I am a person that loves to make a computer do what I want it to do.
That brings me to my question at hand. There were and still are many instances when I needed to add, subtract, multiply etc. fractions and was too lazy or forgetful to do the calculations on
paper. I work in the construction trade (mostly plumbing and electrical) and frequently I deal with fractions regarding measured distances. I have successfully written a very verbose and basic
fraction calculator in Python.
The current program just consists of inputting the first fraction, selecting add or subtract, inputting the second fraction and finally it returns the calculation. Pretty simple, right? I
recently implemented using the Android UI menus to select what the first fraction would be.
title = 'First Frac Denominator Range'
droid.dialogSetSingleChoiceItems(['1/2', 'Thirds', 'Fourths', 'Eighths', 'Sixteenths'])
result = droid.dialogGetResponse().result
ffdr = droid.dialogGetSelectedItems().result
for num1 in ffdr:
if num1 == 0:
showfrac = str('1/2')
frts() #frts() is a function that just displays the showfrac str
firstot = float(.50)
elif num1 == 1:
title = 'Thirds'
droid.dialogSetSingleChoiceItems(['1/3', '2/3'])
result = droid.dialogGetResponse().result
thirds = droid.dialogGetSelectedItems().result
for tnum1 in thirds:
if tnum1 == 0:
showfrac = str('1/3')
firstot = float(.33)
elif tnum1 == 1:
showfrac = str('2/3')
firstot = float(.66)
That is a brief incomplete example of what I have. It coverts whatever fraction into a float. I won't show the entire script because it is long. Then it asks if you want to add or subtract. After
that it asks for the second fraction in the same manner but it only deals with 1/2, 3rds, 4ths, 8ths and 16ths. It will either add or subtract the two float values and with a long series of if
and elif statements convert the result back to a str.
if total_of_two == 0.125:
answer = str('1/8')
elif total_of_two == .50:
answer = str('1/2')
Pretty inefficient way of doing it but it works. Ok FINALLY on to my REAL question. How could I incorporate whole numbers to accompany the fraction and calculation? I have read the Python Docs
regarding the "fractions" module and I couldn't gather any answer from that. BTW, I'm currently using Python 2.6.2. I tried an if statement that would give me the left over numbers if the total
was > 1 but I'm lost when it comes to trying to work with values > 1.
I appreciate any help and I'm not afraid to ask. I was not born with the knowledge of Python in my head lol. Before learning Python I had some experience with BASIC which helped me understand a
little about programming but after learning that 13 years ago in high school my brain is rusty. Thanks for reading!
whole_number = int(total_of_two)
fractional_part = total_of_two % 1
Works for non-negative numbers.
[code]Code tags[/code] are essential for python code and Makefiles!
Originally Posted by b49P23TIvg
whole_number = int(total_of_two)
fractional_part = total_of_two % 1
Works for non-negative numbers.
Thanks so much....this simple clarification has helped greatly!
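For what it's worth, the standard-library fractions module mentioned above can do the whole job exactly, mixed numbers included. A sketch (the helper names are mine, not from the thread; like the answer above, it assumes non-negative values):

```python
from fractions import Fraction

def parse_mixed(text):
    # Parse "1 7/8", "3/16" or "2" into an exact Fraction.
    whole, _, frac = text.strip().rpartition(' ')
    result = Fraction(frac)
    if whole:
        result += Fraction(whole)
    return result

def format_mixed(f):
    # Render a Fraction as a mixed-number string, e.g. 33/16 -> "2 1/16".
    whole, rem = divmod(f.numerator, f.denominator)
    if rem == 0:
        return str(whole)
    if whole == 0:
        return "%d/%d" % (rem, f.denominator)
    return "%d %d/%d" % (whole, rem, f.denominator)

total = parse_mixed("1 7/8") + parse_mixed("3/16")
print(format_mixed(total))  # 2 1/16
```

No floats are involved, so nothing like 1/3 gets rounded to .33, and the whole-number part falls out of divmod instead of a long chain of if/elif comparisons.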
No Profile Picture
Registered User
Devshed Newbie (0 - 499 posts)
Join Date
Feb 2013
Rep Power | {"url":"http://forums.devshed.com/python-programming/943050-fraction-calulator-help-last-post.html","timestamp":"2014-04-19T12:51:15Z","content_type":null,"content_length":"55747","record_id":"<urn:uuid:c1300599-fe5e-4591-8717-5870ee2c4adc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
Indeterminate Form
September 28th 2009, 12:57 PM #1
Junior Member
Sep 2009
Indeterminate Form
x -> 0
What is the indeterminate form of
lim n-> infinity (1 + 7/n + x/n)^n
If I let u = (7+x)/n, the problem becomes
lim u -> 0 (1+u)^((7+x)/u)
And the answer is e^7, but I don't understand the process of obtaining the answer.
Actually the answer is $e^{7 + x}$.
You have $\lim_{n \rightarrow + \infty} \left( 1 + \frac{7 + x}{n}\right)^n = \lim_{n \rightarrow + \infty} \left( 1 + \frac{a}{n}\right)^n = e^a$.
(The last one is a standard limit and should be known).
$\lim_{n \rightarrow + \infty} \left( 1 + \frac{a}{n}\right)^n = e^a$
How can I show the work on my homework assignment for that particular step? I know you say it's a standard limit, but I can't find any information about it online... Or is it just a matter of
saying it's a standard limit?
$\lim_{n \rightarrow + \infty} \left( 1 + \frac{a}{n}\right)^n = e^a$
How can I show the work on my homework assignment for that particular step? I know you say it's a standard limit, but I can't find any information about it online... Or is it just a matter of
saying it's a standard limit?
Here's a site
As for your problem, from
$\lim_{n \to + \infty} \left( 1 + \frac{1}{n}\right)^n = e$
set $n = \frac{m}{a}$ noting that $n \to \infty$ gives $m \to \infty$ so
$\lim_{m \to + \infty} \left( 1 + \frac{a}{m}\right)^{m/a} = e$
$\left(\lim_{m \to + \infty} \left( 1 + \frac{a}{m}\right)^{m/a} \right)^a = e^a$.
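A quick numerical sanity check of this standard limit (a sketch; a = 8 is an arbitrary illustrative value):

```python
import math

# (1 + a/n)^n approaches e^a; the relative error behaves like a^2 / (2n).
a, n = 8.0, 10 ** 6
approx = (1 + a / n) ** n
exact = math.exp(a)
rel_err = abs(approx - exact) / exact
print(rel_err)  # about 3e-5 for n = 10^6
```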
That's perfect. I now understand the problem completely, thanks!
September 28th 2009, 11:10 PM #2
September 29th 2009, 04:59 AM #3
Junior Member
Sep 2009
September 29th 2009, 05:18 AM #4
September 29th 2009, 05:31 AM #5
Junior Member
Sep 2009 | {"url":"http://mathhelpforum.com/calculus/104863-indeterminate-form.html","timestamp":"2014-04-17T14:13:04Z","content_type":null,"content_length":"45029","record_id":"<urn:uuid:81002205-62c5-43cb-be8d-669259ecd91c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-User] Help optimizing an algorithm
Zachary Pincus zachary.pincus@yale....
Thu Jan 31 18:00:21 CST 2013
> For example, if my exposure times are (10, 20, 25), and for a given pixel, map_coordinates tells me 1.5, then that means that that pixel's exposure time is 22.5 (halfway between the exposure times with indices 1 and 2).
> Let's say that my sampling is just 2 4x3 images:
> >>> a = np.arange(24.0).reshape(2,4,3)
> >>> a
> array([[[ 0., 1., 2.],
> [ 3., 4., 5.],
> [ 6., 7., 8.],
> [ 9., 10., 11.]],
> [[ 12., 13., 14.],
> [ 15., 16., 17.],
> [ 18., 19., 20.],
> [ 21., 22., 23.]]])
> and I want to find the proper value for the (0, 0) pixel if its reported value was 6. What I want in this case is actually .5 (i.e. halfway between the two images). This is where I'm getting stuck, unfortunately. I'm missing some conversion or something, I think. Help would be appreciated.
Let's go back a few steps to make sure we're on the same page... You have a series of flat-field images acquired at different exposure times, which together define a per-pixel gain function, right? Then for each new image you want to calculate the "effective exposure time" for the count at a given pixel. Which is to say, the light input. Is this all correct?
So for each pixel, you are estimating the gain function f(exposure) -> value from your series of flat-field calibration images.
Because it's monotonic, you can invert this to g(value) -> exposure.
Then for any given value in an input image, you want to apply function g().
Again, is this all correct?
If so then you're almost there. The problem is one that you point out initially:
> I don't have uniform spacing for my sampling
Without uniform spacing, you can't convert a sample value into an index in the array without doing a search through the elements in the array to figure out where your value will fit. This is why Josef was talking about searchsorted() et al. So you need to first resample your gain function to have uniform sample spacing.
Let's do a one-pixel case first:
exposures = numpy.array([0, 1, 10, 20, 25])
values = numpy.array([100, 110, 200, 250, 275])
input_value = 260
Now, you could just use numpy.interp() to figure out the exposure time that is the linear interpolation from this:
output_exposure = numpy.interp(input_value, values, exposures)
Except that under the hood this does a linear search through the values array to find the nearest neighbors of input_value, and then does the standard linear interpolation. This is going to be slow to do for every pixel in an image, unless you code it in C or cython. (Which actually wouldn't be that bad.)
Instead let's resample the exposures and values to be uniform:
num_samples = 10
vmin, vmax = values.min(), values.max()
uniform_values = numpy.linspace(vmin, vmax, num_samples)
uniform_exposures = numpy.interp(uniform_values, values, exposures)
Note that we're still using numpy.interp() here: we still have to do the linear search! No free lunch. But we can do it just once and pre-compute the lookup table for a range of values, and then subsequently just calculate the correct index into it:
value_index = (num_samples - 1) * (input_value - vmin) / float(vmax - vmin)
Now we can do linear interpolation with map_coordinates():
exposure_estimate = scipy.ndimage.map_coordinates(uniform_exposures, [[value_index]], order=1)[0] # extra packing/unpacking just for scalar case.
Or just directly do the linear interpolation directly:
fraction, index = numpy.modf(value_index)
index = int(index)
l, h = uniform_exposures[[index, index + 1]]
exposure_estimate = h*fraction + l*(1 - fraction)
So you still need to loop through pixel by pixel and do numpy.interp(), but just once to get a uniformly spaced input array. Then you can use that for map_coordinates() as I described earlier. Remember that in the 2D case, you need to not only provide the appropriate value_index, but also the x- and y-indices, again as I described in the previous email. If you are still stuck, I'll write out example code equivalent to the above but for the 2d case.
This all clear? I'm happy to explain anything in further detail!
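Putting the pieces of the recipe together into one runnable sketch (numbers are the illustrative values from the email; numpy is assumed, and the function name is mine):

```python
import numpy as np

# Illustrative one-pixel gain curve: monotonic, non-uniformly sampled.
exposures = np.array([0.0, 1.0, 10.0, 20.0, 25.0])
values = np.array([100.0, 110.0, 200.0, 250.0, 275.0])

# One-time resampling to a uniform grid in "value" space -- this is the
# only place a linear search (inside np.interp) happens.
num_samples = 1001
vmin, vmax = values.min(), values.max()
uniform_values = np.linspace(vmin, vmax, num_samples)
uniform_exposures = np.interp(uniform_values, values, exposures)

def exposure_estimate(v):
    # Invert the gain curve by direct index arithmetic -- no search.
    idx = (num_samples - 1) * (v - vmin) / (vmax - vmin)
    i = int(idx)
    if i >= num_samples - 1:        # v == vmax edge case
        return float(uniform_exposures[-1])
    frac = idx - i
    return uniform_exposures[i + 1] * frac + uniform_exposures[i] * (1.0 - frac)

print(exposure_estimate(260.0))  # ~22.0: 2/5 of the way from exposure 20 to 25
```

For a whole image you would vectorize the index arithmetic (or hand the computed indices to map_coordinates as described), but the lookup-table construction is identical.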
More information about the SciPy-User mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2013-January/034051.html","timestamp":"2014-04-17T13:49:13Z","content_type":null,"content_length":"6864","record_id":"<urn:uuid:3af700f7-d926-4a88-a69e-2c6b0dd70c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
• Factoring practice worksheet - Students will write all the factors for a particular number.
In and Out boxes to practice multiplication with numbers ranging from 1-20.
• Sample of our Animal Tracks Math series, coincides with interactive on member site.
A set of three color illustrated posters of multiplication story problems with Euro coins.
Gray scale multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games.
Mr. Wilder is taller than Ms. White. Mr. Singer is shorter than Ms. White. Ms. Jackson is taller than Ms. White, but she is not the tallest teacher. Put all of these teachers in order according
to their height. Six word problems.
Jordan has eight apples. Cameron has half as many apples as Jordan. Natalie has three-quarters as many apples as Cameron. How many apples does each person have? How many do they have altogether? Six word problems.
Colorful multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games.
Brooke has eleven flowers. She has more tulips than roses. What are the possible combinations of tulips and roses, if she only has these two types of flowers? Six word problems.
Andrea brought seventy-five Valentine's candies to school. If there are twenty-eight students in her class, how many candies can each student have if Andrea wants them all to have the same amount? Will there be any left over? Six word problems.
Paige, Cassandra, and Katie earned $12.45 by working for a neighbor. Assuming they worked equal amounts, what would each girl’s share be? Six word problems.
A poster solving a multiplication word problem using Canadian money.
Gray scale multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games
Dakota went to the store. He bought three note pads for $.75 each, four pencils at 2 for $.35 and one candy bar, which was being sold at 3 for $1.80. How much money did he spend? How much did he get back from the $10.00 he gave the clerk? Six word problems.
There are pictures of snakes in three overlapping circles. There are ten snakes in circle A, twenty snakes in circle B, and thirteen snakes in circle C. Six of the snakes are in both circles A and B. Five of the snakes are in both circles B and C. How many snakes are there in all? Six word problems.
Heather says, "I have two numbers in mind. When I subtract the smaller from the larger, the difference is seven. When I multiply the two numbers, the product is eighteen. What are my two numbers?" Six word problems.
Abigail saw the same number of pigs and chickens at the farm. She counted twelve legs. How many were pig legs and how many were chicken legs? Six word problems.
Dylan poured punch at the class party. There are twenty-five people in Dylan’s class. He gave each person 100 ml of punch. How many liters of punch did the class need for the party? Six word problems.
• Kyle had four bags of candy that he bought for $1.50 per bag. Each bag has six pieces of candy in it. How many more bags does he need to buy to give each of his twenty-five classmates one piece? How much will it cost altogether? Six word problems.
Colorful multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games.
Create a story problem for this answer: "Morgan caught four turkeys." Six word problems.
• Students fill in 10 missing products on a 5x5 multiplication grid.
Sydney got twenty-one e-mails on Monday, nineteen on Tuesday, thirty-seven on Wednesday, eight on Thursday, and twenty-three on Friday. How many e-mails did she get on Monday, Tuesday, Wednesday, and Friday, combined? Six word problems.
• [member-created with abctools] From "10x0" to "10x12". These multiplication skills strips are great for word walls.
Gray scale multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games.
Gray scale multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games.
Colorful multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games.
5 pages of worksheets to practice multiplication up to 10, plus answers.
Gray scale multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games.
• Students fill in 48 missing products on a 9x9 multiplication grid.
5 pages of worksheets to practice multiplication up to 10, plus answers.
A page of rules and a page of practice for scientific notation, including; multiplication, numbers, with an answer sheet.
Great for practice. Use alone, or link all the bookmarks together on a ring.
• John finished a bicycle race in second place. The first four people crossed the finish line at: one-twenty, a quarter after one, five minutes to one and 1:07. What time did John cross the finish line? Six word problems.
New Field of Galileon Cosmology Shows Acceleration
The biggest mystery in modern cosmology is to understand why the expansion rate of the universe is accelerating. The 2011 Nobel Prize in Physics was awarded for the discovery of the acceleration,
which commenced in a cosmic jerk five billion years ago.
The standard explanation for the acceleration is that it’s due to a cosmological constant, as Einstein put it, or, in a modern interpretation, to dark energy. If correct, this idea means that the
universe is filled with a component of energy. This is philosophically unsettling for theorists because it would then account for nearly three-quarters of the energy budget of the universe.
Concordance cosmology, with its baryonic matter, cold dark matter, and dark energy, gives a precision fit of the data to models, but in truth the theoretical underpinning can be regarded as quite
poor. That’s because current theory is silent on the nature of dark matter, and it does not explain the origin of the early inflation or the later acceleration.
Physicists who demand beauty and elegance in their equations are perplexed that we apparently know so little about so much of the universe. It’s a puzzle so baffling that it calls into question the
validity of general relativity (GR) over cosmological distances. GR has been tested to destruction in the laboratory and the solar system, and it has survived unscathed from every challenge for
nearly a century. But now the dark-energy skeptics are exploring ways of modifying the action of gravity at large distances, in order to avoid the presence of dark energy.
It’s misleading to think that modifying gravity is a bright way to find solutions to dark problems. There are many ways of deviating from Einstein’s path, most of which lead nowhere because they fail
to account for the observations, but there are exceptions. One of these alternative routes through the forest is scalar-tensor theory, which has been around for half a century. Einstein’s GR is a
geometrical theory of space-time that uses a metric tensor field as its fundamental building block. In scalar-tensor theories, a scalar field is added and it is coupled to the tensor field. This
leads to a whole new playground for theorists.
One type of scalar field, now named the “galileon,” is creating a buzz among theorists. It brings an extra degree of freedom to cosmological equations. Researchers are busy exploring the
consequences, and papers on galileon cosmology are getting a lot of attention. That’s because the galileon scalar field permits models of the universe in which a cosmic jerk kicks in naturally,
avoiding the need for a severe shock delivered by dark energy.
An analysis using Thomson Reuters Web of Knowledge allows an assessment of the impact being made by the galileon approach. No papers on galileon cosmology existed before 2009, when just five
appeared. In 2010 the count leapt to 18, and then bounded to 48 in 2011. The number of citations to the 92 papers in the sample was 20 in 2009, 180 in 2010, and an impressive 925 in 2011. In 2012,
the rising trend of papers published and citations earned has continued unabated.
A selection of eight of the most highly cited papers on modifying gravity with the galileon (Table 1) gives glimpses of where the action is in this new field. Paper #1, the first to describe the
galileon and the first to show that “self-accelerating” solutions exist, has earned 182 citations, which elevates it to the top of the citation rankings. In this hot paper Alberto Nicolis (Columbia
University, New York), Riccardo Rattazzi (Institute of Theoretical Physics, Lausanne, Switzerland) and Enrico Trincherini (Scuola Normale Superiore, Pisa, Italy) give a technical account of how to
generalize scalar theories so that their modifications to gravity do not apply “locally.” Local means “at less than cosmological distances,” where GR does not require modification in order to account
for observations. This paper is now widely cited by researchers who are struggling to understand dark energy.
Paper #2 provides an important foundation to those new to the field. Nathan Chow and Justin Khoury (University of Pennsylvania, Philadelphia, and the Perimeter Institute, Waterloo, Ontario) make a
study of the cosmology of a galileon field theory. Their analysis sets out a host of avenues for theorists to explore. The citation count of 73 in three years shows that the paper has been notable in
building a strong following for galileon cosmology.
The self-accelerating universe is center stage in #3 by Fabio Silva and Kazuya Koyama (Institute of Cosmology & Gravitation, Portsmouth, UK). Their models inflate spontaneously at late times, when
the universe is already billions of years old. But at early times and on small scales (for example, the solar system) they recover classical GR. Overall their models provide surprisingly rich
phenomenology, which has probably stimulated the good following (69 citations) the paper enjoys.
Inflation of a different kind is the focus of #4 by Tsutomu Kobayashi (University of Tokyo), Masahide Yamaguchi (Tokyo Institute of Technology), and Jun’ichi Yokoyama (also of University of Tokyo).
Inflation in the early universe is now a part of the standard cosmology, and a scalar field known as the inflaton drives it. This highly cited paper proposes a new class of models in which the
inflaton is replaced by the galileon. It’s a move that may be testable in forthcoming gravitational wave experiments.
In order to account for the origin of structure in the universe, it is inescapable that perturbations are imprinted in the cosmos soon after the Big Bang. Two papers in the selection, #5 and #7, are
concerned with the evolution of structure in galileon cosmology. Both conclude that this new cosmology does not pose problems for the emergence of large-scale structure.
In #6, Antonio De Felice and Shinji Tsujikawa (Department of Physics, Tokyo University of Science) examine a solution that leads to cosmic acceleration today. They point out that a future
observational study may provide some signatures for the modification of gravity from GR.
Papers #6 and #8 touch on the confrontation of galileon theory with cosmological data. Observational cosmology is now precision science thanks to the data from the Wilkinson Microwave Anisotropy Probe, which invites the question: Can galileon cosmology be tested?
The field equation of state examined in #6 has some peculiar behavior, which would open up the possibility of distinguishing galileon gravity from a cold dark matter model with a cosmological constant.
Paper #8 from Amna Ali (Centre of Theoretical Physics, New Delhi, India), Radouane Gannouji (IUCCA, Pune, India) and M. Sami (also of New Delhi) employs data from supernova cosmology, baryon acoustic
oscillations, and the cosmic microwave background. They used these data to constrain the parameter space of their models.
To highlight centers of activity in this emergent field, Table 2 provides a listing of institutions that are particularly active in galileon cosmology, based on representation in our sampling of
papers published since 2009. Heading the list is Tokyo University of Science.
Galileon cosmology is now on a solid footing, just three years after being launched. It is a credible field of enquiry, rich in intellectual puzzles as well as mathematical challenges. Its reach is
Dr. Simon Mitton is Vice-President of the Royal Astronomical Society and is based at the University of Cambridge, U.K.
Fall 2005 Class Schedule - Mathematics - ALL CLASSES
MATH 1A: Single-Variable Calculus and Analytic Geometry
Prerequisite: Mathematics 10 with a grade of 'C' or better.
Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4; CAN: MATH 18, MATH SEQ. B
Limits and continuity, analyzing the behavior and graphs of functions, derivatives, implicit differentiation, higher order derivatives, related rates and optimization word problems, Newton's Method,
Fundamental Theorem of Calculus, and definite and indefinite integrals.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0537 LEC LS101 JUKL H 4.0 F 0110P - 0200P 1
JUKL H 4.0 MW 1245P - 0200P
MATH 1B: Single-Variable Calculus and Analytic Geometry
Prerequisite: Mathematics 1A with a grade of 'C' or better.
Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4; CAN: MATH 20, MATH SEQ. B
This course is a standard second semester Calculus course covering methods of integration, applications of the integral, differential equations, parametric and polar equations, and sequences and
Sect# Type Room Instructor Units Days Time Start-End Footnotes
2136 LEC PH102 LOCKHART L 4.0 TuTh 0630P - 0820P
MATH 5: Introduction to Statistics
Prerequisite: Mathematics 233 with a grade of 'C' or better.
Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4; CAN: STAT 2
Descriptive analysis and presentation of either single-variable data or bivariate data, probability, probability distributions, normal probability distributions, sample variability, statistical
inferences involving one and two populations, analysis of variance, linear correlation and regression analysis. Statistical computer software will be extensively integrated as a tool in the
description and analysis of data.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0539 ONLINE HUBBARD M 3.0 DHR 0000 - 0000
Above class meets entirely online
0540 L/L PH103 JUKL H 3.0 MW 0945A - 1100A 1
PH101 JUKL H 3.0 F 1010A - 1100A
0541 L/L PH103 DWYER M 3.0 TuTh 1110A - 1225P 1
PH101 DWYER M 3.0 F 1110A - 1200P
2137 L/L MHG4 VIARENGO A 3.0 Tu 0630P - 0920P 1
Above class meets at Morgan Hill Community site
MHG5 VIARENGO A 3.0 0530P - 0630P
Above class meets at Morgan Hill Community site
2138 L/L HOL2 BATES R 3.0 Th 0530P - 0920P 1
Above class meets at the Hollister Briggs site
MATH 7: Finite Mathematics
Prerequisite: Mathematics 233 with a grade of 'C' or better.
Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4; CAN: MATH 12
Systems of linear equations and matrices, introduction to linear programming, finance, counting techniques and probability, properties of probability and applications of probability.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0542 LEC PH102 LOCKHART L 3.0 MW 1245P - 0200P 1
MATH 8A: First Half of Precalculus
Prerequisite: Mathematics 233 with a grade of 'C' or better.
Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4
Math 8A prepares the student for the study of calculus by providing important skills in algebraic manipulation, interpretation, and problem solving at the college level. Topics will include basic
algebraic concepts, complex numbers, equations and inequalities of the first and second degree, functions, and graphs, linear and quadratic equations, polynomial functions, exponential and
logarithmic functions, systems of equations, matrices and determinants, right triangle trigonometry, and the Law of Sines and Cosines.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0543 LEC PH103 DRESCH M 4.0 TuTh 0945A - 1100A 1
DRESCH M 4.0 F 1010A - 1100A
0544 LEC SS206 WAGMAN K 4.0 F 0110P - 0200P 1
WAGMAN K 4.0 MW 1245P - 0200P
MATH 8B: Second Half of Precalculus
Prerequisite: Mathematics 8A with a grade of 'C' or better.
Advisory: Math 208 Survey of Practical Geometry.
Transferable: CSU; UC; CSU-GE: B4; IGETC: 2A; GAV-GE: B4
Math 8B prepares students for the study of calculus by providing important skills in algebraic manipulation, interpretation, and problem solving at the college level. Topics will include
trigonometric functions, identities, inverse trigonometric functions, and equations; applications of trigonometry, vectors, complex numbers, polar and parametric equations; conic sections; sequences,
series, counting principles, permutations, mathematical induction; analytic geometry, and an introduction to limits.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0545 LEC LS102 DACHKOVA E 4.0 TuTh 1110A - 1225P
DACHKOVA E 4.0 F 1110A - 1200P
MATH 12: Mathematics for Elementary Teachers
Prerequisite: Mathematics 208, or successful completion of a high school geometry course and Mathematics 233 with a grade of 'C' or better.
Transferable: CSU; UC; CSU-GE: B4; GAV-GE: B4
This course is intended for students preparing for a career in elementary school teaching. Emphasis will be on the structure of the real number system, numeration systems, elementary number theory,
and problem solving techniques. Technology will be integrated throughout the course.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
2140 LEC MHG12 KERCHEVAL S 3.0 Tu 0600P - 0850P 1
Above class meets at Morgan Hill Community site
MATH 205: Elementary Algebra
Prerequisite: MATH 402 with a grade of 'C' or better or assessment test recommendation.
Transferable: GAV-GE: B4
This course is a standard beginning algebra course, including algebraic expressions, linear equations and inequalities in one variable, graphing, equations and inequalities in two variables, integer
exponents, polynomials, rational expressions and equations, radicals and rational exponents, and quadratic equations. Mathematics 205, 205A and 205B, and 206 have similar course content. This course
may not be taken by students who have completed Mathematics 205B or 206 with a grade of "C" or better. This course may be taken for Mathematics 205B credit (2.5 units) by those students who have
successfully completed Mathematics 205A with a grade of "C" or better.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0546 LEC PH103 LEE R 5.0 DAILY 0835A - 0925A 1
0547 LEC MHG13 KERCHEVAL S 5.0 MTuWTh 0910A - 1020A 1
Above class meets at Morgan Hill Community site
0548 LEC SS206 WAGMAN K 5.0 MTuWTh 0945A - 1050A 1
0549 LEC HOL2 MALOKAS J 5.0 MWF 1030A - 1200P 1
Above class meets at the Hollister Briggs site
0550 LEC CH109 JUKL H 5.0 MTuWTh 1110A - 1215P 1
0551 LEC CH109 DRESCH M 5.0 MTuWTh 1245P - 0150P 1
2141 LEC HOL4 RAND K 5.0 TuTh 0600P - 0820P 1
Above class meets at the Hollister Briggs site
2142 LEC MHG13 KING K 5.0 TuTh 0645P - 0905P 1
Above class meets at Morgan Hill Community site
MATH 205A: First Half of Elementary Algebra
Prerequisite: Effective Fall 2005: MATH 402 with a grade of 'C' or better or assessment test recommendation.
Advisory: Concurrent enrollment in Guidance 563A is advised.
Transferable: GAV-GE: B4
This course is the first half of the Elementary Algebra course. It will cover signed numbers, evaluation of expressions, ratios and proportions, solving linear equations, and applications. Graphing
of lines, the slope of a line, graphing linear equations, solving systems of equations, basic rules of exponents, and operations on polynomials will be covered.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0552 LEC SS210 LOCKHART L 2.5 F 0810A - 0900A 1
LOCKHART L 2.5 MW 0810A - 0925A
0553 LEC PH102 LOCKHART L 2.5 F 0100P - 0200P 1
LOCKHART L 2.5 TuTh 1245P - 0200P
2143 LEC SS206 EHLERS G 2.5 TuTh 0630P - 0820P 1
MATH 205B: Second Half of Elementary Algebra
Prerequisite: Math 205A with a grade of 'C' or better.
Advisory: Concurrent enrollment in Guidance 563B is advised.
Transferable: GAV-GE: B4
This course contains the material covered in the second half of the Elementary Algebra Course. It will cover factoring, polynomials, solving quadratic equations by factoring, rational expressions and
equations, complex fractions, radicals and radical equations, solving quadratic equations by completing the square and the quadratic formula. Application problems are integrated throughout the
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0554 LEC LS102 DACHKOVA E 2.5 F 1210P - 0100P
DACHKOVA E 2.5 TuTh 1245P - 0200P
MATH 233: Intermediate Algebra
Prerequisite: Mathematics 205 or Mathematics 205A and 205B or Mathematics 206 with a grade of 'C' or better.
Transferable: GAV-GE: B4
Review of basic concepts, linear equations and inequalities, graphs and functions, systems of linear equations, polynomials and polynomial functions, factoring, rational expressions and equations,
roots, radicals, and complex numbers, solving quadratic equations, exponential and logarithmic functions, and problem solving strategies.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0555 LEC LS101 DWYER M 5.0 DAILY 0835A - 0925A 1
0556 LEC MHG3 BUTTERWORTH 5.0 MW 0910A - 1020A 1
Above class meets at Morgan Hill Community site
MHG10 BUTTERWORTH 5.0 TuTh
Above class meets at Morgan Hill Community site
0557 LEC HOL2 TANNIRU P 5.0 TuWThF 0910A - 1020A 1
Above class meets at the Hollister Briggs site
0558 LEC LS103 DACHKOVA E 5.0 MW 0945A - 1050A 1
CH109 DACHKOVA E 5.0 TuTh
0559 LEC SS206 WAGMAN K 5.0 MTuWTh 1110A - 1215P 1
0560 LEC PH103 LEE R 5.0 MTuWTh 1245P - 0150P 1
2144 LEC LS101 KNIGHT R 5.0 TuTh 0600P - 0820P
MATH 400: Elements of Arithmetic
Transferable: No
Essential arithmetic operations, whole numbers, integers, fractions, decimals, ratio, proportion, percent, applications of arithmetic, and critical thinking, as well as math-specific study skills.
Units earned in this course do not count toward the associate degree and/or certain other certificate requirements.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0561 L/L SS206 MALOKAS J 3.0 MTuWTh 0835A - 0925A 1
0562 L/L PH102 DWYER M 3.0 MW 0945A - 1100A 1
DWYER M 3.0 F 1010A - 1100A
0563 L/L PH102 DRESCH M 3.0 MW 1110A - 1225P 1
DRESCH M 3.0 F 1110A - 1200P
MATH 402: Pre-Algebra
Prerequisite: Completion of Math 400 with a 'C' or better, or assessment test recommendation.
Transferable: No
This course covers operations with integers, fractions and decimals and associated applications, percentages, ratio, and geometry and measurement, critical thinking and applications. Elementary
algebra topics such as variables, expressions, and solving equations are introduced.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0564 L/L MHG3 BUTTERWORTH 3.0 MTuWTh 0810A - 0900A 65
Above class meets at Morgan Hill Community site
0565 L/L LS102 DRESCH M 3.0 MTuWTh 0835A - 0925A 1
0566 L/L CH102 JUKL H 3.0 TuTh 1245P - 0200P 1
JUKL H 3.0 F 1210P - 0100P
2146 L/L PH103 FULLER G 3.0 TuTh 0630P - 0820P 1
MATH 404A: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Application and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions,
multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D
reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of
operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures.
This course has the option of a letter grade or credit/no credit.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0567 L/L PH101 DACHKOVA E 1.0 M 1245P - 0200P 1 79
DACHKOVA E 1.0 F 0110P - 0200P
DACHKOVA E 1.0 W 1245P - 0300P
MATH 404B: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions,
multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D
reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of
operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures.
This course has the option of a letter grade or credit/no credit.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0568 L/L PH101 DACHKOVA E 1.0 M 1245P - 0200P 1 79
DACHKOVA E 1.0 F 0110P - 0200P
DACHKOVA E 1.0 W 1245P - 0300P
MATH 404C: Self-Paced Basic Math
Transferable: No
This is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions,
multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D
reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of
operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures.
This course has the option of a letter grade or credit/no credit.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0569 L/L PH101 DACHKOVA E 1.0 M 1245P - 0200P 1 79
DACHKOVA E 1.0 F 0110P - 0200P
DACHKOVA E 1.0 W 1245P - 0300P
MATH 404D: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions,
multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent and units of measurement. Module D
reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of
operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures.
This course has the option of a letter grade or credit/no credit.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0570 L/L PH101 DACHKOVA E 1.0 M 1245P - 0200P 1 79
DACHKOVA E 1.0 F 0110P - 0200P
DACHKOVA E 1.0 W 1245P - 0300P
MATH 404E: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions,
multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D
reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of
operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures.
This course has the option of a letter grade or credit/no credit.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0571 L/L PH101 DACHKOVA E 1.0 M 1245P - 0200P 1 79
DACHKOVA E 1.0 F 0110P - 0200P
DACHKOVA E 1.0 W 1245P - 0300P
MATH 404F: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions,
multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D
reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of
operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms and similar figures.
This course has the option of a letter grade or credit/no credit.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0572 L/L PH101 DACHKOVA E 1.0 M 1245P - 0200P 1 79
DACHKOVA E 1.0 W 1245P - 0300P
DACHKOVA E 1.0 F 0110P - 0200P
MATH 404G: Self-Paced Basic Math
Transferable: No
This course is a remedial, modular, self-paced course. Applications and critical thinking skills are developed in each module. Module A covers operations with whole numbers, equivalent fractions,
multiplying and dividing fractions. Module B covers adding and subtracting fractions, and operations with decimals. Module C covers ratio and proportion, percent, and units of measurement. Module D
reviews fractions, decimals, percentages, and covers operations with integers, and working with variables. Module E covers real numbers, fractions, exponents, scientific notation, and order of
operations. Module F covers expressions, polynomials, and equations. Module G covers geometric figures, perimeter and area, surface area and volume, triangles and parallelograms, and similar figures.
This course has the option of a letter grade or credit/no credit.
Sect# Type Room Instructor Units Days Time Start-End Footnotes
0573 L/L PH101 DACHKOVA E 1.0 M 1245P - 0200P 1 79
DACHKOVA E 1.0 F 0110P - 0200P
DACHKOVA E 1.0 W 1245P - 0300P
SAT Tip of the Week: Solving Permutation Questions
four (displays/desks/seats) all in a row. How many different combinations of six (paintings/students/diners) can be made?”
This kind of problem can seem difficult at the onset, but it is pretty straight forward when you get down to it. We start by thinking about each “space” as a number of possibilities.
If we have six distinct options, then any of the six could be in the first of four spaces. Thus, there are six possibilities for this first space. The second space is also six, right? Not so fast:
one distinct option has already been used for the first space. Even though we don’t know which one is used, something is in that first space, so there are only five options left for the second
space. This pattern continues for the next two spaces which contain four and three possibilities respectively. The final step is to simply multiply the numbers of possibilities together: 6 x 5 x 4 x
3 = 360, and voila we have our answer.
Be careful to check your work. In this question we are filling 4 spaces from 6 possibilities, so we only multiply those 4 numbers (6 x 5 x 4 x 3).
This is a fairly straightforward question, but is very similar to many medium and even hard questions on the SAT. If we can think of each available “space” in terms of possibilities, we can attack
even harder questions.
Let’s look at a more difficult example:
“Five people, represented by the letters A, B, C, D and E, are arranged in a row. If either A or E has to be in the first or the last position, what is the probability that B will be in the second position?”
Quick review: probability is the number of desired outcomes divided by the number of total outcomes, so those two numbers are all we need to find. So how many desired outcomes do we have? Well, if
A is in the first position then B, C, or D could be in second, third or fourth and E must be last, and if E is in the first position then B, C, or D could be in second, third or fourth and A must be
last, so let’s just write AB and EB to start and fill in the rest until we can’t any more. If we do this, our desired outcomes are ABCDE, ABDCE, EBCDA, and EBDCA. Four outcomes: easy enough.
So what are our total outcomes?
• Well how many possibilities do we have for our first position? Either A or E: Two options.
• How about our second? B, C, or D: Three options.
• Third? Well we already have B, C or D in our second position, so only two of them could be in the third. Third position is two possibilities.
• The fourth is only one.
• The fifth is only one because either A or E is already in the first position.
Thus, we are left with 2 x 3 x 2 x 1 x 1 = 12 possibilities. Desired outcomes over total outcomes = 4/12 =1/3.
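The same brute-force idea verifies this answer; the sketch below (my own, not from the article) enumerates only the arrangements where A and E occupy the first and last positions:

```python
from itertools import permutations
from fractions import Fraction

# All arrangements of the five people with A and E occupying
# the first and last positions (in either order).
total = [p for p in permutations("ABCDE") if {p[0], p[-1]} == {"A", "E"}]

# Desired: B sits in the second position.
desired = [p for p in total if p[1] == "B"]

print(len(desired), len(total), Fraction(len(desired), len(total)))  # 4 12 1/3
```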
With this problem, we could have listed out all the possibilities and calculated desired and total from that, but this method can be used in many different problems and saves a lot of time. Happy solving!
Plan on taking the SAT soon? We run a free online SAT prep seminar every few weeks. And, be sure to find us on Facebook and Google+, and follow us on Twitter!
David Greenslade is a Veritas Prep SAT instructor based in New York. His passion for education began while tutoring students in underrepresented areas during his time at the University of North
Carolina. After receiving a degree in Biology, he studied language in China and then moved to New York where he teaches SAT prep and participates in improv comedy. | {"url":"http://www.veritasprep.com/blog/2013/05/sat-tip-of-the-week-solving-permutation-questions/","timestamp":"2014-04-19T12:00:42Z","content_type":null,"content_length":"47942","record_id":"<urn:uuid:a86c15bf-a7fa-4bc5-bf1d-788fa634cd5a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
January 2013
Just before Christmas, the WMAP collaboration posted the 9-years update of their Cosmic Microwave Background results.
Before going into details it's worth stopping for a second to admire the picture that I pasted here. Thanks to the observations of the WMAP satellite (black) and the terrestrial telescopes ACT (blue) and SPT (orange), we know the spectrum of the CMB fluctuations down to scales of 0.1 degree. All these peaks and wiggles are predicted by the standard cosmological model, the so-called ΛCDM model (black
line). In particular, the positions and the relative sizes of the peaks provide the strongest argument to date for the existence of dark matter in the Universe: a non-relativistic matter component
that, unlike baryons, does not interact with photons. At present, the alternatives to dark matter cannot even dream of quantitatively explaining the observed features of the CMB.
That late in the game one would not expect any spectacular turns of the action. Indeed, compared to the 7-years WMAP data, the update typically brings a 20-30% reduction of the already tiny errors on the composition of the Universe. There is however one number that changed visibly. The effective number of relativistic degrees of freedom at the time of CMB decoupling, the so-called Neff parameter, is now Neff = 3.26 ± 0.35, compared to Neff = 4.34 ± 0.87 quoted in the 7-years analysis. For the fans and groupies of this observable it was like finding a lump of coal under the Christmas tree...
So, what is this mysterious Neff parameter? According to the standard cosmological model, at temperatures above 10 000 Kelvin the energy density of the universe was dominated by a plasma made of neutrinos (40%) and photons (60%). The photons today make the CMB, about which we know everything. The neutrinos should also be around, but for the moment we cannot study them directly. However, we can indirectly infer their presence in the early universe via other observables. First of all, the neutrinos affect the energy density stored in radiation:

ρ_radiation = ρ_photons × [1 + (7/8)·(4/11)^(4/3)·Neff],

which controls the expansion of the Universe during the epoch of radiation domination. The standard model predicts Neff equal to the number of known neutrino species, that is Neff = 3 (in reality 3.05, due to finite temperature and decoupling effects). Thus, by measuring how quickly the early Universe was expanding, we can determine Neff. If we find Neff ≈ 3 we confirm the standard model and close the store. On the other hand, if we measured that Neff is significantly larger than 3, that would mean a discovery of additional light degrees of freedom in the early plasma that are unaccounted for in the standard model. Note that these new hypothetical particles don't have to be similar to neutrinos; in particular, they could be bosons, and/or have a different temperature (in which case they would correspond to a non-integer increase of Neff). All that is required of them is that they are weakly interacting and light enough to be relativistic at the time of CMB decoupling. Theorists have dreamed up many viable candidates that could show up in Neff: additional light neutrino species, axions, dark photons, etc.
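The standard relation behind all this, ρ_rad = ρ_photons × [1 + (7/8)·(4/11)^(4/3)·Neff], is easy to evaluate numerically; the little sketch below (mine, not from the post) recovers the roughly 60%/40% photon/neutrino split quoted above for the precise standard-model value Neff ≈ 3.05:

```python
# Radiation energy density relative to photons alone:
#   rho_rad / rho_gamma = 1 + (7/8) * (4/11)**(4/3) * Neff
def radiation_factor(neff):
    return 1 + (7 / 8) * (4 / 11) ** (4 / 3) * neff

factor = radiation_factor(3.046)   # precise standard-model value
photon_share = 1 / factor          # fraction of radiation energy in photons
neutrino_share = 1 - photon_share  # fraction in neutrinos

print(round(photon_share, 2), round(neutrino_share, 2))  # 0.59 0.41
```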
One way to measure Neff is via nucleosynthesis (in principle it's not the same observable, as in that case one measures the number of relativistic degrees of freedom at a much earlier epoch, but in most models Neff at the time of nucleosynthesis and at CMB decoupling are similar). Here the physics is rather straightforward. The larger Neff, the faster the universe expands, and the earlier the weak interactions transforming neutrons into protons fall out of equilibrium. Almost all the neutrons that survive the thermal bath end up bound into helium atoms; thus, by measuring the amount of helium-4 in the universe one can infer Neff.
A recent analysis of the nucleosynthesis constraints on Neff is summarized in the plot. The upper panel shows the standard model prediction (red band) of the primordial helium mass fraction as a function of Neff, confronted with experimental constraints. (Notice that different observations of the helium abundance are not quite consistent with each other, but that's normal in astrophysics; the rule of thumb is that a 3 sigma uncertainty in astrophysics is equivalent to 2 sigma in conventional physics.) Neff ≈ 4 seems to be preferred although, given the uncertainties, any Neff between 3 and 5 is consistent with the data.
The interest of particle physicists in Neff comes from the fact that, until recently, the CMB data also pointed at Neff ≈ 4 with a comparable error. The impact of Neff on the CMB is much more contrived, and there are many separate effects one needs to take into account. For example, a larger Neff delays the moment of matter-radiation equality, which affects the relative strength and positions of the peaks. Furthermore, Neff affects how the perturbations grow during the radiation era, which may show up in the CMB spectrum at multipoles ℓ ≥ 100. Finally, the larger Neff, the larger the effect of Silk damping at ℓ ≥ 1000. Each single observable has a large degeneracy with other input parameters (matter density, Hubble constant, etc.) but, once the CMB spectrum is measured over a large range of angular scales, these degeneracies are broken and stringent constraints on Neff can be derived. That is what happened recently, thanks to the high-ℓ CMB measurements from the ACT and SPT telescopes, and some input from other astrophysical observations. The net result is that from the CMB data alone one finds Neff = 3.89 ± 0.67, while using in addition input from Baryon Acoustic Oscillations and Hubble constant measurements brings it down to Neff = 3.26 ± 0.35. All in all, the measured effective number of relativistic degrees of freedom in the early Universe can be well accounted for by the three boring neutrinos of the standard model. Well, life's a bitch. The next update on Neff is expected in March, when Planck releases its cosmological results, but the rumor is that it will do nothing to cheer us up.
Update: as pointed out by a commenter, there's a rumor that the WMAP-9 analysis has a bug, and when it's corrected Neff increases significantly. So don't throw your sterile neutrino models into the fire yet.
Update #2: the bug was fixed in an updated version of the analysis. The new number is Neff = 3.84 ± 0.40, consistent within 2 sigma with the standard model, but leaving some room for hope.
When you think about it, the end of the world in 2012 would have made perfect sense. Last year we found the Higgs boson -- the last of the particles predicted by the standard model. It may well be
that humans have already discovered all elementary particles, in which case all that's left is looking in the sixth place of decimals. Alas, the armageddon didn't happen, so we have to drag on. What
will the year 2013 bring to particle physicists?
Well, this year is surely going to be depressing because the LHC will come to a long halt, after a brief period of shit-on-shit collisions in January and February*. During the next 2 years the
machine will undergo necessary upgrades so that it can restart with a collision energy of about 13 TeV. Nevertheless, this year should be entertaining, as the analyses of the full 8
TeV dataset will be flowing in. First of all, we're waiting for the Higgs update expected around the time of the Moriond conference in March. The most important question is whether the measured rate
in the diphoton decay channel will continue to show an excess over the standard model, as currently hinted by ATLAS, or whether it will drift towards the standard model value, as hinted by CMS. Other
Higgs search channels are unlikely to show a major departure from the standard model, given the existing data... but one never knows. Besides, there will be of course hundreds of new physics
searches; as long as there is data there's hope that a new exciting phenomenon may pop up somewhere...
On the other side of the Atlantic two important experiments will kick off in 2013. Between Fermilab and Minnesota, the NOvA neutrino experiment will carry the first attack on the CP violating phase
in the neutrino mixing matrix. Over in South Dakota, the LUX dark matter experiment will join Xenon100 on the frontier of WIMP detection. However, neither of the above is likely to deliver anything
groundbreaking as early as this year.
What else? When times are tough we turn our eyes to heaven. The Planck satellite, that has performed precise measurements of the Cosmic Microwave Background, will release the cosmological results in
March. Disappointingly, the release will not include the CMB polarization data, which are supposed to be the meat of the whole mission. Nevertheless, the new CMB temperature maps should give us a better grip on cosmological parameters and improve on what we learnt from WMAP. Elsewhere in the sky, the AMS-02 experiment, glued to the International Space Station for a year and a half now, is supposed to
release first results this year. Although we don't expect anything spectacular, a confirmation of the positron excess claimed a few years ago by the PAMELA satellite would already be something. Back
to the Earth, an upgraded version of the HESS gamma-ray telescope has been operating in Namibia since last summer. The previous HESS data has been used to put limits on the monochromatic gamma-ray
emission from the galactic center at very high energies, 0.5-25 TeV. According to some reports, HESS-II should be able to go down in energy and quickly refute the presence of the line at 135 GeV that
seems to be present in the data from the Fermi gamma-ray satellite.
So, the new year holds some promises, although it's unlikely to match the fabulous 2012. Actually, I'm already worried about 2014, when there'll be so little new data. Most likely, Résonaances will
then have to turn into a tabloid blog publishing topless pictures of physicists on Caribbean beaches. But let's not think about that for a moment, and let's enjoy 2013 with the avalanche of LHC data
soon to be released.
*) As correctly pointed out by a commenter, this year we'll have proton-on-shit rather than shit-on-shit collisions. Apologies for the inaccuracy. | {"url":"http://resonaances.blogspot.com/2013_01_01_archive.html","timestamp":"2014-04-20T05:56:09Z","content_type":null,"content_length":"97565","record_id":"<urn:uuid:1c2870d7-a94d-46b3-8de7-593e13e014db>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about topological invariant on Math ∩ Programming
This series on topology has been long and hard, but we’re are quickly approaching the topics where we can actually write programs. For this and the next post on homology, the most important
background we will need is a solid foundation in linear algebra, specifically in row-reducing matrices (and the interpretation of row-reduction as a change of basis of a linear operator).
Last time we engaged in a whirlwind tour of the fundamental group and homotopy theory. And we mean “whirlwind” as it sounds; it was all over the place in terms of organization. The most important
fact that one should take away from that discussion is the idea that we can compute, algebraically, some qualitative features about a topological space related to “n-dimensional holes.” For
one-dimensional things, a hole would look like a circle, and for two dimensional things, it would look like a hollow sphere, etc. More importantly, we saw that this algebraic data, which we called
the fundamental group, is a topological invariant. That is, if two topological spaces have different fundamental groups, then they are “fundamentally” different under the topological lens (they are
not homeomorphic, and not even homotopy equivalent).
Unfortunately the main difficulty of homotopy theory (and part of what makes it so interesting) is that these “holes” interact with each other in elusive and convoluted ways, and the algebra reflects
it almost too well. Part of the problem with the fundamental group is that it deftly eludes our domain of interest: we don’t know a general method to compute the damn things!
What we really need is a coarser invariant. If we can find a “stupider” invariant, it might just be simple enough to compute. Perhaps unsurprisingly, these will take the form of finitely-generated
abelian groups (the most well-understood class of groups), with one for each dimension. Now we’re starting to see exactly why algebraic topology is so difficult; it has an immense list of
prerequisite topics! If we’re willing to skip over some of the more nitty gritty details (and we must lest we take a huge diversion to discuss Tor and the exact sequences in the universal coefficient
theorem), then we can also do the same calculations over a field. In other words, the algebraic objects we’ll define called “homology groups” are really vector spaces, and so row-reduction will be
our basic computational tool to analyze them.
Once we have the basic theory down, we’ll see how we can write a program which accepts as input any topological space (represented in a particular form) and produces as output a list of the homology
groups in every dimension. The dimensions of these vector spaces (their ranks, as finitely-generated abelian groups) are interpreted as the number of holes in the space for each dimension.
Recall Simplicial Complexes
In our post on constructing topological spaces, we defined the standard $k$-simplex and the simplicial complex. We recall the latter definition here, and expand upon it.
Definition: A simplicial complex is a topological space realized as a union of any collection of simplices (of possibly varying dimension) $\Sigma$ which has the following two properties:
• Any face of a simplex of $\Sigma$ is also in $\Sigma$.
• The intersection of any two simplices of $\Sigma$ is also a simplex of $\Sigma$.
We can realize a simplicial complex by gluing together pieces of increasing dimension. First start by taking a collection of vertices (0-simplices) $X_0$. Then take a collection of intervals
(1-simplices) $X_1$ and glue their endpoints onto the vertices in any way. Note that because we require every face of an interval to again be a simplex in our complex, we must glue each endpoint of
an interval onto a vertex in $X_0$. Continue this process with $X_2$, a set of 2-simplices, we must glue each edge precisely along an edge of $X_1$. We can continue this process until we reach a
terminating set $X_n$. It is easy to see that the union of the $X_i$ form a simplicial complex. Define the dimension of the cell complex to be $n$.
There are some picky restrictions on how we glue things that we should mention. For instance, we could not contract all edges of a 2-simplex $\sigma$ and glue it all to a single vertex in $X_0$. The
reason for this is that $\sigma$ would no longer be a 2-simplex! Indeed, we’ve destroyed its original vertex set. The gluing process hence needs to preserve the original simplex’s boundary. Moreover,
one property that follows from the two conditions above is that any simplex in the complex is uniquely determined by its vertices (for otherwise, the intersection of two such non-uniquely specified
simplices would not be a single simplex).
We also have to remember that we’re imposing a specific ordering on the vertices of a simplex. In particular, if we label the vertices of an $n$-simplex $0, \dots, n$, then this imposes an
orientation on the edges where an edge of the form $\left \{ i,j \right \}$ has the orientation $(i,j)$ if $i < j$, and $(j,i)$ otherwise. The faces, then, are “oriented” in increasing order of their
three vertices. Higher dimensional simplices are oriented in a similar way, though we rarely try to picture this (the theory of orientations is a question best posted for smooth manifolds; we won’t
be going there any time soon). Here are, for example, two different ways to pick orientations of a 2-simplex:
It is true, but a somewhat lengthy exercise, that the topology of a simplicial complex does not change under a consistent shuffling of the orientations across all its simplices. Nor does it change
depending on how we realize a space as a simplicial complex. These kinds of results are crucial to the welfare of the theory, but have been proved once and we won’t bother reproving them here.
As a larger example, here is a simplicial complex representing the torus. It’s quite a bit more complicated than our usual quotient of a square, but it’s based on the same idea. The left and right
edges are glued together, as are the top and bottom, with appropriate orientations. The only difficulty is that we need each simplex to be uniquely determined by its vertices. While this construction
does not use the smallest possible number of simplices to satisfy that condition, it is the simplest to think about.
Taking a known topological space (like the torus) and realizing it as a simplicial complex is known as triangulating the space. A space which can be realized as a simplicial complex is called
The nicest thing about the simplex is that it has an easy-to-describe boundary. Geometrically, it’s obvious: the boundary of the line segment is the two endpoints; the boundary of the triangle is the
union of all three of its edges; the tetrahedron has four triangular faces as its boundary; etc. But because we need an algebraic way to describe holes, we want an algebraic way to describe the
boundary. In particular, we have two important criterion that any algebraic definition must satisfy to be reasonable:
1. A boundary itself has no boundary.
2. The property of being boundariless (at least in low dimensions) coincides with our intuitive idea of what it means to be a loop.
Of course, just as with homotopy these holes interact in ways we’re about to see, so we need to be totally concrete before we can proceed.
The Chain Group and the Boundary Operator
In order to define an algebraic boundary, we have to realize simplices themselves as algebraic objects. This is not so difficult to do: just take all “formal sums” of simplices in the complex. More
rigorously, let $X_k$ be the set of $k$-simplices in the simplicial complex $X$. Define the chain group $C_k(X)$ to be the $\mathbb{Q}$-vector space with $X_k$ for a basis. The elements of the $k$-th
chain group are called k-chains on $X$. That’s right, if $\sigma, \sigma'$ are two $k$-simplices, then we just blindly define a bunch of new “chains” as all possible “sums” and scalar multiples of
the simplices. For example, sums involving two elements would look like $a\sigma + b\sigma'$ for some $a,b \in \mathbb{Q}$. Indeed, we include any finite sum of such simplices, as is standard in
taking the span of a set of basis vectors in linear algebra.
Just for a quick example, take this very simple simplicial complex:
We’ve labeled all of the simplices above, and we can describe the chain groups quite easily. The zero-th chain group $C_0(X)$ is the $\mathbb{Q}$-linear span of the set of vertices $\left \{ v_1,
v_2, v_3, v_4 \right \}$. Geometrically, we might think of “the union” of two points as being, e.g., the sum $v_1 + v_3$. And if we want to have two copies of $v_1$ and five copies of $v_3$, that
might be thought of as $2v_1 + 5v_3$. Of course, there are geometrically meaningless sums like $\frac{1}{2}v_4 - v_2 - \frac{11}{6}v_1$, but it will turn out that the algebra we use to talk about
holes will not falter because of it. It’s nice to have this geometric idea of what an algebraic expression can “mean,” but in light of this nonsense it’s not a good idea to get too wedded to the
Likewise, $C_1(X)$ is the linear span of the set $\left \{ e_1, e_2, e_3, e_4, e_5 \right \}$ with coefficients in $\mathbb{Q}$. So we can talk about a “path” as a sum of simplices like $e_1 + e_4 -
e_5 + e_3$. Here we use a negative coefficient to signify that we’re travelling “against” the orientation of an edge. Note that since the order of the terms is irrelevant, the same “path” is given
by, e.g. $-e_5 + e_4 + e_1 + e_3$, which geometrically is ridiculous if we insist on reading the terms from left to right.
The same idea extends to higher dimensional groups, but as usual the visualization grows difficult. For example, in $C_2(X)$ above, the chain group is the vector space spanned by $\left \{ \sigma_1,
\sigma_2 \right \}$. But does it make sense to have a path of triangles? Perhaps, but the geometric analogies certainly become more tenuous as dimension grows. The benefit, however, is if we come up
with good algebraic definitions for the low-dimensional cases, the algebra is easy to generalize to higher dimensions.
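On a computer, a convenient way to realize a chain is as a dictionary mapping each simplex (a tuple of vertex labels) to its rational coefficient. This is my own sketch of the idea, not code from the post; the vertex pairs I assign to $e_1, e_3, e_4, e_5$ are an assumption chosen to be consistent with the boundary computation that appears below.

```python
from fractions import Fraction

def add_chains(c1, c2):
    """Formal sum of two chains, dropping terms whose coefficients cancel."""
    result = dict(c1)
    for simplex, coeff in c2.items():
        result[simplex] = result.get(simplex, Fraction(0)) + coeff
        if result[simplex] == 0:
            del result[simplex]
    return result

def scale_chain(a, chain):
    """Scalar multiple of a chain."""
    return {s: a * c for s, c in chain.items()}

# The 1-chain e1 + e4 - e5 - e3 from the example, writing each edge by its
# ordered vertex pair: e1=(v1,v2), e4=(v2,v4), e5=(v3,v4), e3=(v2,v3).
path = {(1, 2): Fraction(1), (2, 4): Fraction(1),
        (3, 4): Fraction(-1), (2, 3): Fraction(-1)}

# A chain plus its negation is the zero chain (the empty dictionary).
print(add_chains(path, scale_chain(Fraction(-1), path)))  # {}
```

Only simplices with nonzero coefficients are stored, so the zero chain is simply the empty dictionary.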
So now we will define the boundary operator on chain groups, a linear map $\partial : C_k(X) \to C_{k-1}(X)$ by starting in lower dimensions and generalizing. A single vertex should always be
boundariless, so $\partial v = 0$ for each vertex. Extending linearly to the entire chain group, we have $\partial$ is identically the zero map on zero-chains. For 1-simplices we have a more
substantial definition: if a simplex has its orientation as $(v_1, v_2)$, then the boundary $\partial (v_1, v_2)$ should be $v_2 - v_1$. That is, it’s the front end of the edge minus the back end.
This defines the boundary operator on the basis elements, and we can again extend linearly to the entire group of 1-chains.
Why is this definition more sensible than, say, $v_1 + v_2$? Using our example above, let’s see how it operates on a “path.” If we have a sum like $e_1 + e_4 - e_5 - e_3$, then the boundary is
computed as
$\displaystyle \partial (e_1 + e_4 - e_5 - e_3) = \partial e_1 + \partial e_4 - \partial e_5 - \partial e_3$
$\displaystyle = (v_2 - v_1) + (v_4 - v_2) - (v_4 - v_3) - (v_3 - v_2) = v_2 - v_1$
That is, the result was the endpoint of our path $v_2$ minus the starting point of our path $v_1$. It is not hard to prove that this will work in general, since each successive edge in a path will
cancel out the ending vertex of the edge before it and the starting vertex of the edge after it: the result is just one big alternating sum.
Even more importantly is that if the “path” is a loop (the starting and ending points are the same in our naive way to write the paths), then the boundary is zero. Indeed, any time the boundary is
zero then one can rewrite the sum as a sum of “loops,” (though one might have to trivially introduce cancelling factors). And so our condition for a chain to be a “loop,” which is just one step away
from being a “hole,” is if it is in the kernel of the boundary operator. We have a special name for such chains: they are called cycles.
For 2-simplices, the definition is not so much harder: if we have a simplex like $(v_0, v_1, v_2)$, then the boundary should be $(v_1,v_2) - (v_0,v_2) + (v_0,v_1)$. If one rewrites this in a
different order, then it will become apparent that this is just a path traversing the boundary of the simplex with the appropriate orientations. We wrote it in this “backwards” way to lead into the
general definition: the simplices are ordered by which vertex does not occur in the face in question ($v_0$ omitted from the first, $v_1$ from the second, and $v_2$ from the third).
We are now ready to extend this definition to arbitrary simplices, but a nice-looking definition requires a bit more notation. Say we have a k-simplex which looks like $(v_0, v_1, \dots, v_k)$.
Abstractly, we can write it just using the numbers, as $[0,1,\dots, k]$. And moreover, we can denote the removal of a vertex from this list by putting a hat over the removed index. So $[0,1,\dots, \
hat{i}, \dots, k]$ represents the simplex which has all of the vertices from 0 to $k$ excluding the vertex $v_i$. To represent a single-vertex simplex, we will often drop the square brackets, e.g.
$3$ for $[3]$. This can make for some awkward looking math, but is actually standard notation once the correct context has been established.
Now the boundary operator is defined on the standard $n$-simplex with orientation $[0,1,\dots, n]$ via the alternating sum
$\displaystyle \partial([0,1,\dots, n]) = \sum_{k=0}^n (-1)^k [0, \dots, \hat{k}, \dots, n]$
It is trivial (but perhaps notationally hard to parse) to see that this coincides with our low-dimensional examples above. But now that we’ve defined it for the basis elements of a chain group, we
automatically get a linear operator on the entire chain group by extending $\partial$ linearly on chains.
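The alternating-sum definition translates directly into code. Here is a sketch (my own, not from the post), with chains again represented as dictionaries from simplex tuples to rational coefficients:

```python
from fractions import Fraction

def boundary_of_simplex(simplex):
    """∂ of one oriented simplex: the alternating sum over its faces,
    where the k-th face omits the k-th vertex."""
    if len(simplex) == 1:            # vertices are boundariless
        return {}
    return {simplex[:k] + simplex[k + 1:]: Fraction((-1) ** k)
            for k in range(len(simplex))}

def boundary(chain):
    """Extend ∂ linearly to a whole chain {simplex: coefficient}."""
    result = {}
    for simplex, coeff in chain.items():
        for face, sign in boundary_of_simplex(simplex).items():
            result[face] = result.get(face, Fraction(0)) + coeff * sign
            if result[face] == 0:
                del result[face]
    return result

# ∂[0,1,2] = [1,2] - [0,2] + [0,1]:
print(boundary({(0, 1, 2): Fraction(1)}))
```

Note that a closed loop of edges, such as $(0,1) + (1,2) - (0,2)$, maps to the empty dictionary: it is a cycle.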
Definition: The k-cycles on $X$ are those chains in the kernel of $\partial$. We will call k-cycles boundariless. The k-boundaries are the image of $\partial$.
We should note that we are making a serious abuse of notation here, since technically $\partial$ is defined on only a single chain group. What we should do is define $\partial_k : C_k(X) \to C_{k-1}
(X)$ for a fixed dimension, and always put the subscript. In practice this is only done when it is crucial to be clear which dimension is being talked about, and otherwise the dimension is always
inferred from the context. If we want to compose the boundary operator in one dimension with the boundary operator in another dimension (say, $\partial_{k-1} \partial_k$), it is usually written $\
partial^2$. This author personally supports the abolition of the subscripts for the boundary map, because subscripts are a nuisance in algebraic topology.
All of that notation discussion is so we can make the following observation: $\partial^2 = 0$. That is, every chain which is a boundary of a higher-dimensional chain is boundariless! This should make
sense in low-dimension: if we take the boundary of a 2-simplex, we get a cycle of three 1-simplices, and the boundary of this chain is zero. Indeed, we can formally prove it from the definition for
general simplices (and extend linearly to achieve the result for all simplices) by writing out $\partial^2([0,1,\dots, n])$. With a keen eye, the reader will notice that the terms cancel out and we
get zero. The reason is entirely in which coefficients are negative; the second time we apply the boundary operator the power on (-1) shifts by one index. We will leave the full details as an
exercise to the reader.
So this fits our two criteria: low-dimensional examples make sense, and boundariless things (cycles) represent loops.
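The cancellation argument is also easy to confirm mechanically. This standalone sketch (mine, not from the post) applies $\partial$ twice to every simplex on five vertices and checks that the result is always the zero chain:

```python
from fractions import Fraction
from itertools import combinations

def boundary(chain):
    """∂ extended linearly; a chain is {simplex tuple: coefficient}."""
    result = {}
    for simplex, coeff in chain.items():
        if len(simplex) == 1:        # vertices are boundariless
            continue
        for k in range(len(simplex)):
            face = simplex[:k] + simplex[k + 1:]
            result[face] = result.get(face, Fraction(0)) + coeff * (-1) ** k
            if result[face] == 0:
                del result[face]
    return result

# Check ∂∂ = 0 on every simplex with vertices drawn from {0,...,4}.
for dim in (1, 2, 3, 4):
    for simplex in combinations(range(5), dim + 1):
        assert boundary(boundary({simplex: Fraction(1)})) == {}
print("∂∂ = 0 checked on all simplices")
```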
Recasting in Algebraic Terms, and the Homology Group
For the moment let’s give boundary operators subscripts $\partial_k : C_k(X) \to C_{k-1}(X)$. If we recast things in algebraic terms, we can call the k-cycles $Z_k(X) = \textup{ker}(\partial_k)$, and
this will be a subspace (and a subgroup) of $C_k(X)$ since kernels are always linear subspaces. Moreover, the set $B_k(X)$ of k-boundaries, that is, the image of $\partial_{k+1}$, is a subspace
(subgroup) of $Z_k(X)$. As we just saw, every boundary is itself boundariless, so $B_k(X)$ is a subset of $Z_k(X)$, and since the image of a linear map is always a linear subspace of the range, we
get that it is a subspace too.
All of this data is usually expressed in one big diagram: the chain groups are organized in order of decreasing dimension, and the boundary maps connect them.
Since our example (the “simple space” of two triangles from the previous section) only has simplices in dimensions zero, one, and two, we additionally extend the sequence of groups to an infinite
sequence by adding trivial groups and zero maps, as indicated. The condition that $\textup{im} \partial_{k+1} \subset \textup{ker} \partial_k$, which is equivalent to $\partial^2 = 0$, is what makes
this sequence a chain complex. As a side note, every sequence of abelian groups and group homomorphisms which satisfies the boundary requirement is called an algebraic chain complex. This foreshadows
that there are many different types of homology theory, and they are unified by these kinds of algebraic conditions.
Now, geometrically we want to say, “The holes are all those cycles (loops) which don’t arise as the boundaries of higher-dimensional things.” In algebraic terms, this would correspond to a quotient
space (really, a quotient group, which we covered in our first primer on groups) of the k-cycles by the k-boundaries. That is, a cycle would be considered a “trivial hole” if it is a boundary, and
two “different” cycles would be considered the same hole if their difference is a k-boundary. This is the spirit of homology, and formally, we define the homology group (vector space) as follows.
Definition: The $k$-th homology group of a simplicial complex $X$, denoted $H_k(X)$, is the quotient vector space $Z_k(X) / B_k(X) = \textup{ker}(\partial_k) / \textup{im}(\partial_{k+1})$. Two
elements of a homology group which are equivalent (their difference is a boundary) are called homologous.
The number of $k$-dimensional holes in $X$ is thus realized as the dimension of $H_k(X)$ as a vector space.
The quotient mechanism really is doing all of the work for us here. Any time we have two holes and we’re wondering whether they represent truly different holes in the space (perhaps we have a closed
loop of edges, and another which is slightly longer but does not quite use the same edges), we can determine this by taking their difference and seeing if it bounds a higher-dimensional chain. If it
does, then the two chains are the same, and if it doesn’t then the two chains carry intrinsically different topological information.
For particular dimensions, there are some neat facts (which obviously require further proof) that make this definition more concrete.
• The dimension of $H_0(X)$ is the number of connected components of $X$. Therefore, computing homology generalizes the graph-theoretic methods of computing connected components.
• $H_1(X)$ is the abelianization of the fundamental group of $X$. Roughly speaking, $H_1(X)$ is the closest approximation of $\pi_1(X)$ by a $\mathbb{Q}$ vector space.
Now that we’ve defined the homology group, let’s end this post by computing all the homology groups for this example space:
This is a sphere (which can be triangulated as the boundary of a tetrahedron) with an extra “arm.” Note how the edge needs an extra vertex to maintain uniqueness. This space is a nice example because
it has one-dimensional homology in dimension zero (one connected component), dimension one (the arm is like a copy of the circle), and dimension two (the hollow sphere part). Let's verify this.
Let’s start by labelling the vertices of the tetrahedron 0, 1, 2, 3, so that the extra arm attaches at 0 and 2, and call the extra vertex on the arm 4. This completely determines the orientations for
the entire simplex, as seen below.
Indeed, the chain groups are easy to write down:
$\displaystyle C_0(X) = \textup{span} \left \{ 0,1,2,3,4 \right \}$
$\displaystyle C_1(X) = \textup{span} \left \{ [0,1], [0,2], [0,3], [0,4], [1,2], [1,3],[2,3],[2,4] \right \}$
$\displaystyle C_2(X) = \textup{span} \left \{ [0,1,2], [0,1,3], [0,2,3], [1,2,3] \right \}$
We can easily write down the images of each $\partial_k$; they're just the span of the images of each basis element under $\partial_k$.
$\displaystyle \textup{im} \partial_1 = \textup{span} \left \{ 1 - 0, 2 - 0, 3 - 0, 4 - 0, 2 - 1, 3 - 1, 3 - 2, 4 - 2 \right \}$
The zero-th homology $H_0(X)$ is the kernel of $\partial_0$ modulo the image of $\partial_1$. The angle brackets are a shorthand for “span.”
$\displaystyle \frac{\left \langle 0,1,2,3,4 \right \rangle}{\left \langle 1-0,2-0,3-0,4-0,2-1,3-1,3-2,4-2 \right \rangle}$
Since $\partial_0$ is actually the zero map, $Z_0(X) = C_0(X)$ and all five vertices generate the kernel. The quotient construction imposes that two vertices (two elements of the homology group) are
considered equivalent if their difference is a boundary. It is easy to see that (indeed, just by the first four generators of the image) all vertices are equivalent to 0, so there is a unique
generator of homology, and the vector space is isomorphic to $\mathbb{Q}$. There is exactly one connected component. Geometrically we can realize this, because two vertices are homologous if and only
if there is a “path” of edges from one vertex to the other. This chain will indeed have as its image the difference of the two vertices.
We can compute the first homology $H_1(X)$ in an analogous way, compute the kernel and image separately, and then compute the quotient.
$\textup{ker} \partial_1 = \textup{span} \left \{ [0,1] + [0,3] - [1,3], [0,2] + [2,3] - [0,3], [1,2] + [2,3] - [1,3], [0,1] + [1,2] - [0,2], [0,2] + [2,4] - [0,4] \right \}$
It takes a bit of combinatorial analysis to show that this is precisely the kernel of $\partial_1$, and we will have a better method for it in the next post, but indeed this is it. As the image of $\
partial_2$ is precisely the first four basis elements, the quotient is just the one-dimensional vector space spanned by $[0,2] + [2,4] - [0,4]$. Hence $H_1(X) = \mathbb{Q}$, and there is one
one-dimensional hole.
Since there are no 3-simplices, the homology group $H_2(X)$ is simply the kernel of $\partial_2$, which is not hard to see is just generated by the chain representing the “sphere” part of the space:
$[1,2,3] - [0,2,3] + [0,1,3] - [0,1,2]$. The second homology group is thus again $\mathbb{Q}$ and there is one two-dimensional hole in $X$.
So there we have it!
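The hand computation above can be cross-checked numerically. The sketch below (my own, not from the post) builds the two boundary matrices for this complex and reads off the Betti numbers as $\dim \textup{ker}\, \partial_k - \textup{rank}\, \partial_{k+1}$; NumPy's floating-point rank stands in for exact rank over $\mathbb{Q}$, which is harmless for small integer matrices like these:

```python
import numpy as np

vertices = [0, 1, 2, 3, 4]
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (2, 3), (2, 4)]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# boundary_1: rows indexed by vertices, columns by edges; d1 of [a,b] is b - a
d1 = np.zeros((len(vertices), len(edges)))
for j, (a, b) in enumerate(edges):
    d1[a, j], d1[b, j] = -1, 1

# boundary_2: rows indexed by edges, columns by triangles;
# d2 of [a,b,c] is [b,c] - [a,c] + [a,b]
eidx = {e: i for i, e in enumerate(edges)}
d2 = np.zeros((len(edges), len(triangles)))
for j, (a, b, c) in enumerate(triangles):
    d2[eidx[(b, c)], j] = 1
    d2[eidx[(a, c)], j] = -1
    d2[eidx[(a, b)], j] = 1

r1, r2 = np.linalg.matrix_rank(d1), np.linalg.matrix_rank(d2)

b0 = len(vertices) - r1          # dim ker(d0) - rank(d1), with d0 = 0
b1 = (len(edges) - r1) - r2      # dim ker(d1) - rank(d2)
b2 = len(triangles) - r2         # dim ker(d2) - rank(d3), no 3-simplices

print(b0, b1, b2)                # 1 1 1
```

The output 1 1 1 matches the one connected component, the arm loop, and the hollow sphere.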
Looking Forward
Next time, we will give a more explicit algorithm for computing homology for finite simplicial complexes, and it will essentially be a variation of row-reduction which simultaneously rewrites the
matrix representations of the boundary operators $\partial_{k+1}, \partial_k$ with respect to a canonical basis. This will allow us to simply count entries on the diagonals of the two matrices, and
the difference will be the dimension of the quotient space, and hence the number of holes.
Until then! | {"url":"http://jeremykun.com/tag/topological-invariant/","timestamp":"2014-04-18T08:02:30Z","content_type":null,"content_length":"103602","record_id":"<urn:uuid:da011f14-10e6-443a-84b5-e600fb44ff84>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. A new brand of breakfast cereal is being market
1. A new brand of breakfast cereal is being market tested. One hundred boxes...
This question has been answered by Abhatnagar on Nov 10, 2012. View Solution
bunique1988 posted a question Nov 06, 2012 at 10:36am
1. A new brand of breakfast cereal is being market tested. One hundred boxes of the cereal were given to consumers to try. The consumers were asked whether they liked or disliked the cereal. You are
given the following responses: 60 people liked the cereal, 40 people did not like the cereal. Construct a 95% confidence interval for the proportion of all consumers who will like the cereal.
2. 1. A university planner wants to determine the proportion of spring semester students who will attend summer school. She surveys 32 current students discovering that 12 will return for summer
school. Construct a 90% confidence interval estimate for the proportion of current spring students who will return for summer school.
3. 1. A health club annually surveys its members. Last year, 33% of the members said they use the treadmill at least 4 times a week. How large of sample should be taken this year to estimate the
percentage of members who use the treadmill at least 4 times a week? The estimate is desired to have a margin of error of 5% with a 95% level of confidence.
Abhatnagar was assigned to this question Nov 06, 2012 at 10:51am
Hi, I am Abhatnagar and I will be answering your question.
Abhatnagar answered the question Nov 06, 2012 at 12:03pm
Please find... View Full Answer
Attachment Preview:
• A new brand of breakfast cereal is being market tested. One hundred boxes of the cereal were given to consumers to try. The consumers were asked whether they liked or disliked the cereal. You are given the following responses: 60 people liked the cereal, 40 people did not like the cereal. Construct a 95% confidence interval for the proportion of all consumers who will like the cereal.
the interval is 0.6 ± 1.96*(0.6*0.4/100)^0.5 = 0.5039802 to 0.6960198
2. 1. A university planner wants to determine the proportion of spring semester students who
will attend summer school. She surveys 32 current students discovering that 12 will return for
summer school. Construct a... View Full Attachment Show more
cattow10 denied the answer Nov 06, 2012 at 12:52pm
The answer given does not match any of the options
1. A health club annually surveys its members. Last year, 33% of the members said they use the treadmill at least 4 times a week. How large of sample should be taken this year to estimate the
percentage of members who use the treadmill at least 4 times a week? The estimate is desired to have a margin of error of 5% with a 95% level of confidence.
a. 359
b. 347
c. 340
d. 352
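All three questions in this thread reduce to two standard formulas: the Wald interval phat ± z*sqrt(phat*(1-phat)/n) and the sample-size formula n = z^2 * p*(1-p) / E^2. A quick sketch (mine, not from the thread) using the usual critical values z = 1.96 for 95% and z = 1.645 for 90%:

```python
import math

def prop_ci(successes, n, z):
    # Wald interval for a proportion: phat +/- z * sqrt(phat*(1-phat)/n)
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Q1: 60 of 100 liked the cereal, 95% confidence (z ~ 1.96)
print(prop_ci(60, 100, 1.96))    # ~ (0.5040, 0.6960)

# Q2: 12 of 32 returning for summer school, 90% confidence (z ~ 1.645)
print(prop_ci(12, 32, 1.645))    # ~ (0.2342, 0.5158)

# Q3: sample size for margin E = 0.05 at 95%, planning value p = 0.33
p, E, z = 0.33, 0.05, 1.96
print(math.ceil(z**2 * p * (1 - p) / E**2))   # 340
```

The sample-size computation gives 339.75, which rounds up to 340, matching option (c).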
Abhatnagar posted a reply Nov 06, 2012 at 6:09pm
let me check and get back to you
Abhatnagar answered the question Nov 06, 2012 at 6:31pm
Please find the... View Full Answer
cattow10 denied the answer Nov 08, 2012 at 8:24pm
response was too late- questions were time sensitive
Abhatnagar posted a reply Nov 08, 2012 at 8:26pm
This question has been flagged for moderator reviewNov 08, 2012 at 8:28pm
Moderator mohammad.shaharyar_moderator posted a reply Nov 08, 2012 at 9:09pm
Dear Student,
The tutor submitted the first solution on time. It took tutor some time to revise the solution according to your requirements. Kindly accept the revised solution submitted by tutor.
cattow10 posted a reply Nov 10, 2012 at 12:49am
What is the point of paying for an incorrect answer? The correct response was posted 3 hours after my deadline. My deadline was for a reason. The response was time sensitive and I paid according to
the factor. This response is not worth the money I offered to pay. I expect my time and money to be valued just as I paid for the tutors time and knowledge.
This question has been flagged for moderator reviewNov 10, 2012 at 1:09am
Abhatnagar answered the question Nov 10, 2012 at 1:10am
OK View Full Answer
"Thank you for the clarification, from your perspective! I have better understanding and can agree on that point. I do appreciate you and your time!"
Abhatnagar posted a reply Nov 10, 2012 at 1:41am
Please accept the revised solution . I am attaching it again.
Your initial assignment did not give any options. these were given later and this caused problems in interpreting of the margin of error.
cattow10 accepted the answer Nov 10, 2012 at 1:57am
Answer was rated: Thank you for the clarification, from your perspective! I have better understanding and can agree on that point. I do appreciate you and your time!
Abhatnagar posted a reply Nov 10, 2012 at 2:16am
Rating: Questions Answered: 723
Thank You.
I appreciate your honesty. | {"url":"http://www.coursehero.com/tutors-problems/Statistics-and-Probability/8393062-1-A-new-brand-of-breakfast-cereal-is-being-market-tested-One-hundred/","timestamp":"2014-04-19T04:22:36Z","content_type":null,"content_length":"73199","record_id":"<urn:uuid:efa39c68-2cc1-4d7a-9318-9305e8ca813f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hippocrats' Quadrature of the Lune
Hippocrates' Quadrature of the lune
Hippocrates of Chios taught in Athens and worked on the classical problems of squaring the circle and duplicating
the cube. Little is known of his life but he is reported to have been an excellent geometer who, in other respects, was stupid and lacking in sense. Some claim that he was defrauded of a large sum of
money because of his naiveté.
Iamblichus [4] writes:-
One of the Pythagoreans [Hippocrates] lost his property, and when this misfortune befell him he was allowed to make money by teaching geometry.
Heath [6] recounts two versions of this story:-
One version of the story is that [Hippocrates] was a merchant, but lost all his property through being captured by a pirate vessel. He then came to Athens to prosecute the offenders and, during a
long stay, attended lectures, finally attaining such proficiency in geometry that he tried to square the circle.
Heath also recounts a different version of the story as told by Aristotle:-
... he allowed himself to be defrauded of a large sum by custom-house officers at Byzantium, thereby proving, in Aristotle's opinion, that, though a good geometer, he was stupid and incompetent in
the business of ordinary life.
The suggestion is that this 'long stay' in Athens was between about 450 BC and 430 BC.
In his attempts to square the circle, Hippocrates was able to find the areas of lunes, certain crescent-shaped figures, using his theorem that the ratio of the areas of two circles is the same as the
ratio of the squares of their radii. We describe this
impressive achievement more fully below.
Hippocrates also showed that a cube can be doubled if two mean proportionals can be determined between a number and its double. This had a major influence on attempts to duplicate the cube, all
efforts after this being directed towards the
mean proportional problem.
He was the first to write an Elements of Geometry and although his work is now lost it must have contained much of what Euclid later included in Books 1 and 2 of the Elements. Proclus, the last major
Greek philosopher, who lived around 450 AD wrote:-
Hippocrates of Chios, the discoverer of the quadrature of the lune, ... was the first of whom it is recorded that he actually compiled "Elements".
Hippocrates' book also included geometrical solutions to quadratic equations and included early methods of integration.
The Greeks did not use numbers to measure the area of a figure. Equality of plane figures is verified by cutting in pieces and rearranging. Quadrature of a figure means finding a side of a square of
the same area as the figure. Well--known quadrature date back to earliest Greek mathematics -- Thales (c. -600), Pythagoras (c. -540). The Greeks had accomplished the quadrature of polygons, but they
were less successful in knowing the properties of circles and other curvilinear forms. It was known that all types of polygons could be equated, in measure, to the square, and the square seemed an
ideal unit of areal measure. The next three links below show, in turn, the quadrature of the rectangle, triangle and any polygon which illustrate how the Greeks squared the figures. And finally the
quadrature of the lune which was the outstanding problem of Greek mathematics.
Quadrature of the rectangle
Quadrature of the triangle
Quadrature of any polygon
Quadrature of the lune
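As a pointer to what the last link establishes, here is the standard reconstruction of Hippocrates' simplest lune (a textbook summary, not content from the linked pages). Take the hypotenuse $AB = 2$ of a right isosceles triangle, with midpoint $O$ and right angle at $C$, so that $AC = CB = \sqrt{2}$ and $C$ lies on the semicircle over $AB$; draw a second semicircle outward on the leg $AC$. Then:

```latex
\begin{align*}
\text{semicircle on } AC &= \tfrac{1}{2}\,\pi\Big(\tfrac{\sqrt{2}}{2}\Big)^{2} = \tfrac{\pi}{4},\\
\text{segment between } AC \text{ and the big arc}
  &= \underbrace{\tfrac{\pi}{4}}_{\text{quarter disk } AOC} - \operatorname{area}(\triangle AOC)
   = \tfrac{\pi}{4} - \tfrac{1}{2},\\
\text{lune on } AC &= \tfrac{\pi}{4} - \Big(\tfrac{\pi}{4} - \tfrac{1}{2}\Big) = \tfrac{1}{2} = \operatorname{area}(\triangle AOC).
\end{align*}
```

So the curvilinear lune has exactly the area of the rectilinear triangle $AOC$, a figure the Greeks already knew how to square.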
Back home page | {"url":"http://jwilson.coe.uga.edu/EMT668/EMAT6680.2000/Obara/Emat6690/Lunefolder/Lune.html","timestamp":"2014-04-16T15:58:48Z","content_type":null,"content_length":"5146","record_id":"<urn:uuid:ef8d508d-bd98-43e9-b95e-a199e61fa7b4>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pilgrim Gdns, PA Math Tutor
Find a Pilgrim Gdns, PA Math Tutor
...My background in academia and industry allows me to teach calculus from either a theoretical or an applied approach, depending on student needs and interests. I studied statistics as part of
the actuarial exam process. Two of the early exams focused on statistical methods, including regression, parameter fitting, Bayesian, and non-Bayesian techniques.
18 Subjects: including trigonometry, differential equations, algebra 1, algebra 2
...See course description for SAT Math. The keys to success in the reading section are vocabulary and critical analysis of brief passages. My focus is on how to concentrate while reading, and
recognize the key points of each passage.
32 Subjects: including algebra 1, algebra 2, American history, biology
...I have worked with high school students with CP, vision and hearing impairments, and am very familiar with assistive technologies for these students. I have tutored children with autism for
reading comprehension and math, students with dyslexia for reading and writing, and young children needing...
20 Subjects: including SAT math, dyslexia, geometry, algebra 1
...It takes more than just head knowledge, but takes the interpersonal know-how and ingenuity to get people working collectively in a single direction with a single purpose. Each day is usually
filled with more troubleshooting than just routine tasks. You have to learn flexibility in thinking to constantly create new solutions.
16 Subjects: including geometry, linear algebra, logic, algebra 1
...I enjoy working with students and take pride in showing students their full potential can be obtained through hard work and dedication. I look forward to meeting and working with students and
helping them achieve their academic goals. Thank you.
9 Subjects: including linear algebra, algebra 1, algebra 2, geometry
Related Pilgrim Gdns, PA Tutors
Pilgrim Gdns, PA Accounting Tutors
Pilgrim Gdns, PA ACT Tutors
Pilgrim Gdns, PA Algebra Tutors
Pilgrim Gdns, PA Algebra 2 Tutors
Pilgrim Gdns, PA Calculus Tutors
Pilgrim Gdns, PA Geometry Tutors
Pilgrim Gdns, PA Math Tutors
Pilgrim Gdns, PA Prealgebra Tutors
Pilgrim Gdns, PA Precalculus Tutors
Pilgrim Gdns, PA SAT Tutors
Pilgrim Gdns, PA SAT Math Tutors
Pilgrim Gdns, PA Science Tutors
Pilgrim Gdns, PA Statistics Tutors
Pilgrim Gdns, PA Trigonometry Tutors
Nearby Cities With Math Tutor
Drexelbrook, PA Math Tutors
Garden City, PA Math Tutors
Kirklyn, PA Math Tutors
Llanerch, PA Math Tutors
Merion Park, PA Math Tutors
Milmont Park, PA Math Tutors
Moylan, PA Math Tutors
Oakview, PA Math Tutors
Penn Wynne, PA Math Tutors
Pilgrim Gardens, PA Math Tutors
Primos Secane, PA Math Tutors
Primos, PA Math Tutors
Rose Tree, PA Math Tutors
Secane, PA Math Tutors
Westbrook Park, PA Math Tutors | {"url":"http://www.purplemath.com/Pilgrim_Gdns_PA_Math_tutors.php","timestamp":"2014-04-19T14:41:30Z","content_type":null,"content_length":"24154","record_id":"<urn:uuid:86e76d44-4603-4de0-9e08-76ce4337a8a1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding domain and Range
September 23rd 2012, 02:27 PM #1
Finding domain and Range
I don't know for some reason this question is confusing me and I can't find an example similar in my text book
domain and range of $f(x) = \frac{1}{x^2} - 6$
Re: Finding domain and Range
You should be familiar w/ the graph of the parent function $y = \frac{1}{x^2}$. Its domain is the same as for f(x).
... what transformation has occurred to the parent function by subtracting 6? How does this affect the range?
Re: Finding domain and Range
The domain should be all real numbers except where the denominator is zero
Re: Finding domain and Range
To represent the domain in interval notation it would be (-infinity,0) U (0, infinity)
Re: Finding domain and Range
ahh I see said the blind man
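For a numeric sanity check of the thread's conclusion (assuming, as the replies indicate, that the function is 1/x^2 shifted down by 6), the sketch below samples the domain on both sides of x = 0 and confirms the outputs stay strictly above -6, so the range is (-6, infinity):

```python
def f(x):
    return 1.0 / x**2 - 6.0

# sample the domain (all reals except 0) on both sides of the gap
xs = [k / 1000.0 for k in range(-100000, 100001) if k != 0]
ys = [f(x) for x in xs]

print(min(ys))   # ~ -5.9999: close to -6 but never equal to it
print(max(ys))   # ~ 999994: f blows up as x approaches 0
```

Values get arbitrarily close to -6 for large |x| and arbitrarily large near x = 0, but -6 itself is never attained.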
Raphan abstract
Harmonic Analysis and Signal Processing Seminar
Unsupervised Regression for Image Denoising
Martin Raphan
CIMS and Laboratory for Computational Vision, Center for Neural Science, NYU
Tuesday, October 30, 2007, 12:30pm, WWH 1302
There are two standard frameworks for describing optimal least squares estimation of a random quantity from corrupted measurements. The first technique, Bayesian Least Squares (BLS)
estimation, uses explicit models of both the corruption process and the prior distribution of the quantity to be estimated in order to formulate an optimal estimator via Bayes' rule. The
second technique, Least Squares regression, uses supervised training on a data set which has clean samples paired with corrupted versions of those samples, to choose an optimal estimator
from some family. In many applications, however, one has available neither a model of the prior distribution, nor uncorrupted measurements of the variable being estimated. We will
describe a framework for expressing the BLS estimator (regression function) entirely in terms of a model of the corruption process and the density of the corrupted measurements. We show a
practical implementation of this nonparametric estimator for additive white gaussian noise (AWGN), and demonstrate the use of this procedure for denoising photographic images, showing
that it compares favorably with previously published methods which use explicit prior models. We also describe a dual, prior-free formulation of the Mean Square Error (MSE) which
generalizes Stein's Unbiased Risk estimator (SURE), and show how this may be used for unsupervised regression. We then demonstrate the use of this dual formulation in image denoising. In
particular, we use the dual formulation to prove the empirically observed fact that, despite their suboptimality, marginal image denoisers chosen to minimize MSE within the subbands of a
redundant multi-scale decomposition will always perform better than on the orthonormal versions of those bases. We also develop an extension of SURE that allows minimization of the
image-domain MSE for estimators that operate on subbands of a redundant decomposition, and show that this gives improvement over methods which optimize MSE within subbands. | {"url":"http://cims.nyu.edu/~gunturk/Seminar_Data/Raphan_abstract.html","timestamp":"2014-04-20T18:24:28Z","content_type":null,"content_length":"3418","record_id":"<urn:uuid:85f650aa-e438-4832-8d24-fa169d99c6d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
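As background for the "prior-free" formulation described above: for additive Gaussian noise, the BLS estimator can be written via the classical Miyasawa/Tweedie identity, xhat(y) = y + sigma^2 * d/dy log p_Y(y), which references only the density of the corrupted measurements. The toy sketch below (my own illustration, not material from the talk) checks that identity against the known closed-form answer for a Gaussian prior:

```python
import math

# Prior X ~ N(0, tau^2), noise ~ N(0, sigma^2), so Y ~ N(0, tau^2 + sigma^2).
# The BLS answer in this case is the classical Wiener shrinkage.
tau2, sigma2 = 4.0, 1.0
marg = tau2 + sigma2                  # variance of the noisy measurement Y

def score(y):
    # d/dy log p_Y(y) for the Gaussian marginal p_Y = N(0, marg)
    return -y / marg

def tweedie(y):
    # prior-free form: uses only the noisy-measurement density via its score
    return y + sigma2 * score(y)

def wiener(y):
    # known Bayes least-squares answer for the Gaussian prior
    return y * tau2 / marg

print(all(math.isclose(tweedie(y), wiener(y)) for y in (-2.0, 0.5, 3.0)))  # True
```

The two agree exactly here; the point of the prior-free formulation is that the score of the measurement density can be estimated from noisy data alone, with no access to the prior.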
How to check if a number is ODD or Even ? [PHP function]
Hey guys!
Long time no see! Today I had the pleasure of going over myprogrammingblog emails, where many users have asked me to show solutions to different problems. Sorry for not being able to respond earlier, as I had a whole bunch of things going on.
From now on, at least for a month, I will be regularly (read as 1 time in 1-2 days) posting some code examples for the problems you have asked me to take a look at. By the way, if you have any
interesting problems, which you would like to share with me, please email me at: myprogrammingblog@gmail.com .
Quick Update:
I have opened myprogrammingblog Git repo, where I will be posting code examples discussed in this blog.
Repo URL is :
Now, back to business.
One of the readers ( I am not posting names/nicknames since some people don’t like to be addressed publicly) asked me:
How to check if a number is ODD or Even ?
To solve this problem I will use a nice operator, available in most popular programming languages (e.g. C/C++/Python/Java/PHP/JavaScript… you name it…) called modulo (%).
Modulo – finds the remainder of the division. That’s exactly what we need!
We know that EVEN numbers are those that can be equally (read: without a remainder) divided into two groups. So by using modulo we can check whether the number leaves a remainder when divided by two: if it does not, then it is an EVEN number, otherwise it is ODD.
Bored already ? Here is the code (since no specific language was required, I have used PHP):
/**
 * Description:
 * This small function returns true if passed number
 * is divisible by two, false if not.
 * @author: Anatoly Spektor
 */
function isEven($number) {
    $isEven = false;
    if (is_numeric($number)) {
        if ($number % 2 == 0) $isEven = true;
    }
    return $isEven;
}
Thank you for sending me your questions!
3 Responses to How to check if a number is ODD or Even ? [PHP function]
1. Hello,
Nice article. Thank you for sharing. I also shared a similar article on C programming. I posted an article to check whether a given number is odd or even using a C program.
Here is the link for the article on the C program to check whether a given number is even or odd.
2. Very effective and commonsensical solution, I remember when I first learned programming, this was one of the first problems given to us, at that time it appeared tough.
This entry was posted in GitHub, open-source, PHP, Programming Languages, Useful Functions and tagged Dvision by TWO, EVEN numbers, Function, Modulo, ODD numbers, PHP, Remainder, Repo, Unit Tests.
Bookmark the permalink. | {"url":"http://myprogrammingblog.com/2013/08/19/how-to-check-if-a-number-is-odd-or-even-php-function/","timestamp":"2014-04-20T23:27:27Z","content_type":null,"content_length":"65799","record_id":"<urn:uuid:c40aeb90-81f4-442e-88c3-c4684684ad8f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Big Ideas. Printable Page | Thirteen/WNET
JOHN FORBES NASH, JR.
Field of Expertise
John Forbes Nash, Jr. is a pioneer in the field of game theory. After first encountering game theory in the works of John von Neumann and Oskar Morgenstern, Nash was the first to introduce the
economic application of game theory. In 1950, his dissertation explained the "Nash Equilibrium," developing the theory to solve strategic non-cooperative games for mutual gain. He further developed
game theory with his "Nash Bargaining Solution," which was a solution concept for two-person cooperative games. In 1951, he introduced his name to another side of economics with the "Nash Programme."
This paper called for the reduction of all cooperative games into a non-cooperative framework. John Nash's ideas on game theory have caused its influence to grow so quickly that some claim, it is on
a path "to overwhelm much of economics itself."
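To make the Nash equilibrium idea concrete: a pure-strategy profile is an equilibrium exactly when neither player can improve by deviating unilaterally. Below is a small brute-force sketch (an illustration using the textbook Prisoner's Dilemma payoffs, not code from any of Nash's papers):

```python
import itertools

# Payoff matrices for the Prisoner's Dilemma (strategy 0 = cooperate, 1 = defect)
A = [[3, 0],        # row player's payoffs
     [5, 1]]
B = [[3, 5],        # column player's payoffs
     [0, 1]]

def pure_nash(A, B):
    # (i, j) is an equilibrium when neither player gains by a unilateral deviation
    rows, cols = len(A), len(A[0])
    equilibria = []
    for i, j in itertools.product(range(rows), range(cols)):
        row_cant_gain = all(A[i][j] >= A[k][j] for k in range(rows))
        col_cant_gain = all(B[i][j] >= B[i][k] for k in range(cols))
        if row_cant_gain and col_cant_gain:
            equilibria.append((i, j))
    return equilibria

print(pure_nash(A, B))   # [(1, 1)]: mutual defection is the unique equilibrium
```

The unique pure equilibrium is mutual defection, even though both players would fare better cooperating, which is the classic illustration of the concept.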
Educational Background
John Nash was born in Bluefield, West Virginia in 1928. He received an undergraduate and a Master's degree in mathematics from Carnegie Institute of Technology and received his Ph.D. from Princeton
University. In 1951, John Nash joined the faculty at MIT. He was tenured at M.I.T. in 1958 at the age of 29. In 1957, he received the Sloan Grant and spent the year as a temporary member of the
Institute for Advanced Study. In 1959, he left his position at M.I.T. due to health problems. He currently performs research for Princeton University.
At the age of ten, Nash was awarded the George Westinghouse Award, which provided a full scholarship to the Carnegie Institute of Technology.
He won the Nobel Prize in Economics in 1994 because of his 27-page dissertation, "Non-Cooperative Games," written in 1950 when he was 21. As an undergraduate, Nash proved Brouwer's fixed point
theorem. He also broke one of Riemann's most perplexing mathematical conundrums.
Nobel e Museum: John F. Nash Jr. - Autobiography
The Mac Tutor History of Mathematic Archive
American Experience: A Brilliant Madness
Popular-Science.Org - John Nash: Genius, Nobel and Schizophrenia
John Nash, THE ESSENTIAL JOHN NASH. Princeton University Press
John Nash, H. Robert Bartell Jr. CASES IN CORPORATE FINANCIAL PLANNING AND CONTROL.
John F. Nash, Cynthia D. Heagy, and Harvey M. Cortney, THE DESIGN SELECTION AND IMPLEMENTATION OF ACCOUNTING INFORMATION SYSTEMS. Dame Publishing
John F. Nash, ACCOUNTING INFORMATION SYSTEMS. McMillan Publishing Company | {"url":"http://www.thirteen.org/bigideas/printable/nash.html","timestamp":"2014-04-18T21:00:00Z","content_type":null,"content_length":"5989","record_id":"<urn:uuid:2a1f221f-bf35-42e2-8d52-a63e824b22dd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lagranges Theorem
August 31st 2008, 02:47 PM #1
Jun 2008
Lagranges Theorem
So we have $\phi(n) = \prod_{i=1}^{k} \phi(p_{i}^{e_{i}}) = \prod_{i=1}^{k} p_{i}^{e_{i}-1}(p_{i}-1)$. So basically what it is saying is that the number of numbers coprime to $n$ equals the product of the totients of its prime-power factors. So then $\phi(8) = 4 \neq \phi(2) \cdot \phi(2) \cdot \phi(2)$. What's wrong with this (am I interpreting this incorrectly)? Also, why is $\phi(n)$ defined for integers $\leq n$ as opposed to just integers $< n$? Because a number is never coprime with itself, right?
Last edited by particlejohn; August 31st 2008 at 09:40 PM.
You are right. It makes no difference.
I would guess is the problem with $n=1$.
Since the set $\{ 1\leq x < n\}$ would be empty.
And then $\phi(1) = 0$.
But we want to define it as $\phi(1)=1$.
Sorry I am using Lagrange's Theorem, but the above is not Lagrange's Theorem.
But $\phi(8) = 4 \neq \phi(2) \times \phi(2) \times \phi(2) = 1$. Or does it mean $\phi(2^{3})$?
So we have $\phi(n) = \prod_{i=1}^{k} \phi(p_{i}^{e_{i}}) = \prod_{i=1}^{k} p_{i}^{e_{i}-1}(p_{i}-1)$. So basically what it is saying that the number of numbers coprime to $n$ is equal to the totient product of their prime factors. So then $\phi(8) = 4 \neq \phi(2) \cdot \phi(2) \cdot \phi(2)$. What's wrong with this (am I interpreting this incorrectly)? Also why is $\phi(n)$ defined for integers $\leq n$ as opposed to just integers $< n$. Because a number is never coprime with itself right?
Ok this is for distinct prime factors. $8 = 2^{3}$ has only one distinct prime, so the product has a single factor: $\phi(8) = \phi(2^{3}) = 2^{2}(2-1) = 4$, not $\phi(2)\phi(2)\phi(2)$.
Another formula is :
$\phi(n)=n \prod_{i=1}^k \left(1-\frac{1}{p_i}\right)$
where $p_i$ is a prime divisor of n.
For example, $140=2^2 \cdot 5 \cdot 7$
$p_1=2 ~,~ p_2=5 ~,~ p_3=7$, so $\phi(140)=140\left(1-\tfrac{1}{2}\right)\left(1-\tfrac{1}{5}\right)\left(1-\tfrac{1}{7}\right)=48$
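Both formulas in the thread are easy to check programmatically. A short sketch (my own) that applies $\phi(n)=n\prod_{p \mid n}(1-1/p)$ over the distinct prime factors, in exact integer arithmetic:

```python
def phi(n):
    # Euler's totient: n times (1 - 1/p) over each distinct prime p dividing n,
    # computed as result = result // p * (p - 1) to stay in integers.
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result = result // p * (p - 1)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                      # leftover prime factor of n
        result = result // m * (m - 1)
    return result

print(phi(8), phi(140))            # 4 48
```

This agrees with both worked examples in the thread: phi(8) = 4 and phi(140) = 48.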
Basic Notions of Trace Theory, in
Results 1 - 10 of 63
"... The theme of this paper is profunctors, and their centrality and ubiquity in understanding concurrent computation. Profunctors (a.k.a. distributors, or bimodules) are a generalisation of
relations to categories. Here they are first presented and motivated via spans of event structures, and the seman ..."
Cited by 263 (33 self)
The theme of this paper is profunctors, and their centrality and ubiquity in understanding concurrent computation. Profunctors (a.k.a. distributors, or bimodules) are a generalisation of relations to
categories. Here they are first presented and motivated via spans of event structures, and the semantics of nondeterministic dataflow. Profunctors are shown to play a key role in relating models for
concurrency and to support an interpretation as higher-order processes (where input and output may be processes). Two recent directions of research are described. One is concerned with a language and
computational interpretation for profunctors. This addresses the duality between input and output in profunctors. The other is to investigate general spans of event structures (the spans can be
viewed as special profunctors) to give causal semantics to higher-order processes. For this it is useful to generalise event structures to allow events which “persist.”
, 1997
"... . State space explosion is a fundamental obstacle in formal verification of designs and protocols. Several techniques for combating this problem have emerged in the past few years, among which
two are significant: partial-order reductions and symbolic state space search. In asynchronous systems, ..."
Cited by 59 (0 self)
. State space explosion is a fundamental obstacle in formal verification of designs and protocols. Several techniques for combating this problem have emerged in the past few years, among which two
are significant: partial-order reductions and symbolic state space search. In asynchronous systems, interleavings of independent concurrent events are equivalent, and only a representative
interleaving needs to be explored to verify local properties. Partial-order methods exploit this redundancy and visit only a subset of the reachable states. Symbolic techniques, on the other hand,
capture the transition relation of a system and the set of reachable states as boolean functions. In many cases, these functions can be represented compactly using binary decision diagrams (BDDs).
Traditionally, the two techniques have been practiced by two different schools---partial-order methods with enumerative depth-first search for the analysis of asynchronous network protocols, and
symbolic bread...
, 1995
"... We extend labelled transition systems to distributed transition systems by labelling the transition relation with a finite set of actions, representing the fact that the actions occur as a
concurrent step. We design an action-based temporal logic in which one can explicitly talk about steps. The log ..."
Cited by 29 (5 self)
We extend labelled transition systems to distributed transition systems by labelling the transition relation with a finite set of actions, representing the fact that the actions occur as a concurrent
step. We design an action-based temporal logic in which one can explicitly talk about steps. The logic is studied to establish a variety of positive and negative results in terms of axiomatizability
and decidability. Our positive results show that the step notion is amenable to logical treatment via standard techniques. They also help us to obtain a logical characterization of two well known
models for distributed systems: labelled elementary net systems and labelled prime event structures. Our negative results show that demanding deterministic structures when dealing with a
"noninterleaved " notion of transitions is, from a logical standpoint, very expressive. They also show that another well known model of distributed systems called asynchronous transition systems
exhibits a surprising a...
, 1994
"... . Models for concurrency can be classified with respect to three relevant parameters: behaviour/system, interleaving/noninterleaving, linear/branching time. When modelling a process, a choice
concerning such parameters corresponds to choosing the level of abstraction of the resulting semantics. The ..."
Cited by 25 (4 self)
. Models for concurrency can be classified with respect to three relevant parameters: behaviour/system, interleaving/noninterleaving, linear/branching time. When modelling a process, a choice
concerning such parameters corresponds to choosing the level of abstraction of the resulting semantics. The classifications are formalized through the medium of category theory. Keywords: semantics, concurrency, models for concurrency, categories. Contents: 1. Preliminaries (p. 431); 2. Deterministic Transition Systems (p. 433); 3. Noninterleaving vs. Interleaving Models (p. 436): Synchronization Trees and Labelled Event Structures (p. 438), Transition Systems with Independence (p. 439); 4. Behavioural, Linear Time, Noninterleaving Models (p. 441): Semilanguages and Event Structures (p. 443), Trace Languages and Event Structures (p. 446); 5. Transition Systems with Independence and Lab...
- Journal of Collaborative Computing , 1999
"... Team automata have been proposed in (Ellis, 1997) as a formal framework for modeling both the conceptual and the architectural level of groupware systems. Here we define team automata in a
mathematically precise way in terms of component automata which synchronize on certain executions of actions. A ..."
Cited by 25 (11 self)
Team automata have been proposed in (Ellis, 1997) as a formal framework for modeling both the conceptual and the architectural level of groupware systems. Here we define team automata in a
mathematically precise way in terms of component automata which synchronize on certain executions of actions. At the conceptual level, our model serves as a formal framework in which basic groupware
notions can be rigorously defined and studied. At the architectural level, team automata can be used as building blocks in the design of groupware systems.
- Application of Concurrency to System Design , 2001
"... We describe a framework where formal models can be rigorously defined and compared, and their interconnections can be unambiguously specified. We use trace algebra and trace structure algebra to
provide the underlying mathematical machinery. We believe that this framework will be essential to provid ..."
Cited by 21 (6 self)
We describe a framework where formal models can be rigorously defined and compared, and their interconnections can be unambiguously specified. We use trace algebra and trace structure algebra to
provide the underlying mathematical machinery. We believe that this framework will be essential to provide the foundations of an intermediate format that will provide the Metropolis infrastructure
with a formal mechanism for interoperability among tools and specification methods.
, 2000
"... The incorporation of timing makes circuit verification computationally expensive. This paper proposes a new approach for the verification of timed circuits. Rather than calculating the exact
timed state space, a conservative overestimation that fulfills the property under verification is derived. Ti ..."
Cited by 17 (6 self)
The incorporation of timing makes circuit verification computationally expensive. This paper proposes a new approach for the verification of timed circuits. Rather than calculating the exact timed
state space, a conservative overestimation that fulfills the property under verification is derived. Timing analysis with absolute delays is efficiently performed at the level of event structures and
transformed into a set of relative timing constraints. With this approach, conventional symbolic techniques for reachability analysis can be efficiently combined with timing analysis. Moreover, the
set of timing constraints used to prove the correctness of the circuit can also be reported for backannotation purposes. Some preliminary results obtained by a naive implementation of the approach
show that systems with more than 10^6 untimed states can be verified.
- THEORETICAL COMPUTER SCIENCE , 1995
"... Several categorical relationships (adjunctions) between models for concurrency have been established, allowing the translation of concepts and properties from one model to another. A central
example is a coreflection between Petri nets and asynchronous transition systems. The purpose of the pres ..."
Cited by 16 (7 self)
Several categorical relationships (adjunctions) between models for concurrency have been established, allowing the translation of concepts and properties from one model to another. A central example
is a coreflection between Petri nets and asynchronous transition systems. The purpose of the present paper is to illustrate the use of such relationships by transferring to Petri nets a general
concept of bisimulation.
, 1993
"... We investigate an extension of CTL (Computation Tree Logic) by past modalities, called CTLP , interpreted over Mazurkiewicz's trace systems. The logic is powerful enough to express most of the
partial order properties of distributed systems like serializability of database transactions, snapshots, p ..."
Cited by 16 (6 self)
We investigate an extension of CTL (Computation Tree Logic) by past modalities, called CTLP , interpreted over Mazurkiewicz's trace systems. The logic is powerful enough to express most of the
partial order properties of distributed systems like serializability of database transactions, snapshots, parallel execution of program segments, or inevitability under concurrency fairness
assumption. We show that the model checking problem for the logic is NPhard, even if past modalities cannot be nested. Then, we give a one exponential time model checking algorithm for the logic
without nested past modalities. We show that all the interesting partial order properties can be model checked using our algorithm. Next, we show that it is possible to extend the model checking
algorithm to cover the whole language and its extension to CTL*P . Finally, we prove that the logic is undecidable and we discuss consequences of our results on using propositional versions of
partial order temporal logics to s...
- Proc. of TACAS'97, LNCS 1217 , 1997
"... . A finite representation of the prime event structure corresponding to the behaviour of a program is suggested. The algorithm of linear complexity using this representation for model checking
of the formulas of Discrete Event Structure Logic without past modalities is given. A method of building fi ..."
Cited by 15 (8 self)
. A finite representation of the prime event structure corresponding to the behaviour of a program is suggested. The algorithm of linear complexity using this representation for model checking of the
formulas of Discrete Event Structure Logic without past modalities is given. A method of building finite representations of event structures in an efficient way by applying partial order reductions
is provided. 1 Introduction Model checking is one of the most successful methods of automatic verification of program properties. A model-checking algorithm decides whether a finite-state concurrent
system satisfies its specification, given as a formula of a temporal logic [3, 10]. Behaviour of a concurrent system can be modeled in two ways. In the interleaving semantics, the meaning of a
program is an execution tree, temporal-logic assertions are interpreted over paths of this tree. In partial-order semantics (or event structure semantics), behaviour is an event structure, where the
ordering r... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=117329","timestamp":"2014-04-16T21:17:18Z","content_type":null,"content_length":"38424","record_id":"<urn:uuid:4d689e47-1355-4690-8e01-f5943a49db3f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cresskill Prealgebra Tutor
...During these years,I have been working with diverse student populations in urban, suburban, and international settings. I always try to create a productive and comfortable environment that
facilitates learning. I incorporate results-driven methods to consistently provide for improvement.
14 Subjects: including prealgebra, calculus, geometry, algebra 1
...I have my bachelor's degree in History and 12+ years of experience. In addition, I have completed my Masters program in Adolescent Education. I have taught middle school Social Studies for the
past 13 years and I believe that I have the ability to teach high school Social Studies as well.
8 Subjects: including prealgebra, GRE, American history, elementary math
...I referred to the main idea at hand, made certain observations, used specific examples to support my observations, and summarized my position in the final paragraph. This approach was more than
sufficient to satisfy my critical analysis writing. My strengths in English focus on grammar and sentence structure.
41 Subjects: including prealgebra, English, reading, chemistry
...Cooking is chemistry. Everything you can touch or taste or smell is a chemical. When you study chemistry, you come to understand a bit about how things work.
17 Subjects: including prealgebra, chemistry, physics, geometry
...No matter how hard things may seem, with hard work and patience nothing is beyond a willing mind. I love playing and listening to music and I am also a big sports fan. Learning should always be
fun so let's get started!
25 Subjects: including prealgebra, chemistry, physics, geometry | {"url":"http://www.purplemath.com/Cresskill_Prealgebra_tutors.php","timestamp":"2014-04-21T02:48:12Z","content_type":null,"content_length":"23803","record_id":"<urn:uuid:21da78d8-92f1-4068-9938-00a802505ab4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
C# Basic Operators
Learning C# Series Operators
So, you've written your first program, and you've learned about the basic types in .NET.
The most basic way that types interact is through operators. This tutorial will walk you through some of the more basic.
Why is this important to learn about?
Again, operators are the most basic way that you can create interactions between objects/values. How could you ever do math without knowing what plus, minus, times, and divide do? Well, those are
mathematical operators. The same principles apply to programming operators: without them, you'll probably never get anything done.
Note: All examples were created using Visual Studio 2010, targeting the .NET Framework 4.0. We'll do our best to point out anything that might not work in older versions.
If you followed the link in the definitions above, you'll see that C# has quite a few operators. Some are single character, some are multiple. Some are unary, some are binary; one is even ternary (one of my favorites, actually).
These operators, just like in mathematics, follow an order of operations. The MSDN list of C# operators lists them in groups of precedence. Operators in the same group are evaluated left-to-right.
Parentheses ( )
Parentheses have several functions. First and foremost, they're used for order of operations, just like in math. 5 + 1 * 3 equals 8, but (5 + 1) * 3 equals 18. It works the same way in C#. Operations inside of parentheses are evaluated before those outside of them. Parentheses are also used to invoke methods (we'll cover methods in another tutorial later), and for casting operations. Casting will also be covered later, but in brief: it's a way to convert one type to another.
Here's an example of casting:
int x = (int)3.141;
In that example, we're taking a double, and converting it into an integer. You can't cast any type to any other: cast operations must either be performed on classes that are related via inheritance,
or be defined (using implicit or explicit conversion operators) in the class definition. But that's out of scope of this tutorial.
Unary Operators
Unary operators take only one operand. A mathematical example would be the factorial operator.
Increment (++) and Decrement (--)
If you've ever heard of C++, this is where the ++ comes from.
++ increments the value it's applied to. This operator can be either postfix (x++) or prefix (++x). In either case, after the expression is evaluated, x will be one greater than it was before. These operators can only be applied to numeric types by default.
Now, there's a reason why you can use these operators either way, and it's quite important. Before I explain, consider this code:

int a = 0;
int b = 0;
Console.WriteLine(++a);
Console.WriteLine(b++);

As explained earlier, both a and b will have a value of 1 after this code is evaluated. However, if you've tried this yourself, you've noticed that the output doesn't match: the first line prints 1, but the second prints 0!
Why, when both add one to the variable? Because they evaluate at different times. ++a means "add one to the value of a, then return that value." b++ means "return the value of b, then add one to it." It's a very important distinction if you're not using it as a standalone operation. I suggest you get in the habit of using the prefix (++a) version unless you need the postfix behavior. This will cause less confusion down the road.
Note: the -- operator works exactly the same way, except decreasing the value by one instead of increasing.
Plus (+) and Minus (-) as Unary Operators
These operate the same way mathematical positive and negative signs do. +x simply returns x, while -x returns x * -1.
Boolean Negation (!)
The bang (or exclamation point) operator negates boolean types. !true is false, and !false is true. This is often useful to invert a boolean statement:
bool respondedNo = false;
while (!respondedNo) {
    respondedNo = Console.ReadKey().Key == ConsoleKey.N;
}
Binary Operators
Binary operators take two operands. These are the most familiar, since you've been using these since grade school.
Important note: None of these operators except for the assignment operators modify their operands. They return the result of their evaluation. So if you do a + b, after it's evaluated, both a and b will still have their original values. You must either use it in further evaluation or assign the result to a variable for it to have any significance.
Multiplicative Operators (*, /, %)
These three share precedence in the order of operations, and as such, are evaluated left-to-right. First is the Multiply operator (*). This returns the product of the operands to the left and right.
The Divide operator (/) behaves the same, except dividing the two operands.
Console.WriteLine(10 * 5);
Console.WriteLine(20 / 4);
If you use ints to do division, you may find unexpected results. For example:
Console.WriteLine(1 / 3);
You'd expect the output to be 0.3333 repeating, but it's not: the output is 0. That's because integers can't possibly have decimal values, so the decimals are trimmed (not rounded! 0.999 becomes 0). By changing one of the operands to a float/double/decimal, the entire operation is evaluated as a double.
Console.WriteLine(1 / 3.0);
Note: Integer division by zero will throw a DivideByZeroException. That's a bad thing, so don't do it. Dividing a floating-point number by zero will not result in an exception; it returns positive or negative infinity, or the value Double.NaN (Not a Number) in the 0.0 / 0.0 case.
This naturally leads us to the Modulus operator (%). Remember in grade school when you started learning division? You didn't learn with decimals, but with integer quotients and remainders. The
modulus operation returns the remainder of integer division. For example:
Console.WriteLine(20 % 6);
20 / 6 is equal to 3.33 repeating. However, using integer division, it is 3. 6 * 3 is equal to 18. 20 - 18 equals 2. In other words, 20 / 6 = 3 R 2. Modulus returns the remainder. This doesn't sound
immediately useful, but it can be if you know a bit of math. For instance, we can use it to determine even numbers:
for (int i = 0; i < 10; i++) {
    if (i % 2 == 0)
        Console.WriteLine("{0} is even.", i);
    else
        Console.WriteLine("{0} is odd.", i);
}
0 is even.
1 is odd.
2 is even.
3 is odd.
4 is even.
5 is odd.
6 is even.
7 is odd.
8 is even.
9 is odd.
Additive Operators (+, -)
These both share a precedence, below that of the multiplicative operators, just like in actual mathematics. For numeric types, + and - do exactly what you think they do: return the result of adding or subtracting the operands, respectively.
However, + has another function: string concatenation. Concatenation basically means joining together. Here's an example of concatenating strings:
string a = "Hello";
string b = "World";
string c = a + " " + b;
Important note: Strings aren't numbers! There is a huge, huge difference between adding numbers and concatenating strings. Since strings can represent numbers though, it may be confusing. But remember, if it's a string, it'll be joined up, not added together.
Console.WriteLine(1 + 2);
Console.WriteLine("1" + "2");
Equality Operators (==, !=)
C# makes an important distinction that not all languages make: "Equality and Assignment are different things, therefore should use different operators." What that means is, there's a difference between saying "I want to assign the value of y to x," and "I want to know if x is equal to y."
To support that distinction, C# uses == as the comparison operator.
if(x == 1)
//do something
In the preceding example, we compare x to 1. The result will be either true or false, depending on whether x is equal to 1.
Important note: The equality operator compares values for value types, but for most reference types, it checks to see if both operands are pointing to the same block of memory. This is a hugely important distinction, since two objects can be what you would consider equal, but in distinct memory, so the compiler won't consider them equal. You can define equality for reference types by overriding the .Equals method, which is outside the scope of this tutorial.
The != (not equals) operator is the exact opposite. It returns true when the operands are not equal, and false when they are.
Other Comparison Operators (&&,||)
These operators are all about
boolean logic
. If you've ever taken a logic class, you should have covered this. Basically, they're a way of expressing AND (&&) and OR (||). && has a higher precedence in order of operations than ||.
&& will return true if both of the operands are true. Otherwise it will return false.
|| will return true if either of the operands is true. Only if both are false will it return false.
To see this in action:
bool x = true;
bool y = false;
if (x && y)
    Console.WriteLine("Both were true.");
else
    Console.WriteLine("At least one was false.");

if (x || y)
    Console.WriteLine("At least one was true.");
else
    Console.WriteLine("Both were false.");
At least one was false.
At least one was true.
There are also bitwise versions of these operators, (&, |), as well as a bitwise XOR (^). XOR returns true if one and only one of the operands is true, otherwise false. Against bools, the bitwise operators work exactly
the same. However, they also operate against integral values as well. They compare each bit of the binary representation of the int, and return the result. It's not something you generally need to
concern yourself with, and is outside the scope of this tutorial.
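The boolean behaviour of &, | and ^ described above is easy to tabulate. Here's a quick truth-table sketch (written in Python purely for brevity; on booleans, these three operators behave the same way in C#):

```python
# Enumerate the AND, OR, and XOR truth tables over booleans.
print("a      b      a&b    a|b    a^b")
for a in (False, True):
    for b in (False, True):
        # ^ (XOR) is true exactly when one, and only one, operand is true.
        print(a, b, a & b, a | b, a ^ b)
```

Note how the last column is true only on the (True, False) and (False, True) rows.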
Assignment operators (=, +=, -=, *=, /=, %=)
As already discussed in the Equality Operators section, assignment and equality are different things. Assignment means "take the evaluated value on the right, and assign it into the variable on the
left, then return that value if necessary" Of course, this means that there must be a variable on the left for this to be a valid expression. Some examples of assignment:
int x = 1; //assign 1 to x.
int y = x + 1; //evaluate x + 1, then assign the result (2) to y.
It's important to note that once an assignment is evaluated, it returns the value on the right as a result. This means that an assignment can be used in a larger expression:
int x = 1;
int y;
int z = y = 2;
In this example, z is to be assigned the result of (y = 2). So the compiler evaluates y = 2. 2 is assigned to y, then 2 is returned. Now that y = 2 evaluated to 2, z is assigned 2.
This is convoluted and not generally used.
Now, there are several other operators mentioned (+=, -=, /=, *=, %=). Basically, these are the combination of assignment and arithmetic. For example:

x += 5;

is exactly the same as x = x + 5. See? A combination of assignment and arithmetic. This works the same for each of these operators, except with their own arithmetic operator (- for -=, * for *=, etc).
Ternary Operator
This is the only ternary operator in C#: the conditional operator (?:). Here's an example of the pattern it follows:
x = a ? b : c;
That is the equivalent of this:

if (a)
    x = b;
else
    x = c;

a must evaluate to a bool, and b and c must evaluate to the same type.
A somewhat more practical example:
string longString = "abcdefghijklmnop";
string shortString = longString.Length > 5 ? longString.Substring(0, 5) : longString;
This says "if longString is longer than 5, assign the Substring of longString to shortString. Otherwise, just assign longString to shortString."
In Conclusion
This in no way covered every operator that C# includes. There are several that are rarely used, or are simply more advanced than the scope of this topic. You can see all of them here:
Each is a link that you can follow to see it in action.
Another note: when defining a class, operators can be overloaded. We'll cover this in a later tutorial, but I want you to be aware that this is possible. Overloading an operator means that it can do
something other than its default operation. For example, the minus (-) operator is overloaded on DateTime in two ways: if you subtract a TimeSpan from a DateTime, you'll get a new DateTime. If you
subtract a DateTime from a DateTime, you'll get a TimeSpan. Example:
DateTime dt1 = new DateTime(2011, 3, 21);
//subtract a timespan of 5 days from dt1 to create dt2
DateTime dt2 = dt1 - TimeSpan.FromDays(5);
//now, let's subtract dt2 from dt1 to get a timespan back
TimeSpan difference = dt1 - dt2;
You don't need to understand how to do it at the moment, just the fact that some objects may have operators that behave differently than normal.
Hope you've enjoyed this installment of the C# Learning Series! Next one's coming soon!
See all the C# Learning Series tutorials | {"url":"http://www.dreamincode.net/forums/topic/223459-c%23-basic-operators/page__pid__1848527__st__0","timestamp":"2014-04-18T07:03:13Z","content_type":null,"content_length":"100883","record_id":"<urn:uuid:78e04a2c-9b3d-4693-836c-190947435359>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Renata Jora
Syracuse University
Model for light scalars in QCD
We consider a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with "diquark-diantiquark" structure. The model aims to explain the existence
of a lighter than 1 GeV nonet of scalar mesons. We investigate in this context the masses and mixings of the pseudoscalar and scalar states and find that the lightest pseudoscalar mesons have a large
two quark content and the lightest scalar mesons have a large four quark content. The pion-pion scattering is also studied in the limit of massless quarks and for an SU(3) invariant symmetry breaking
term. While the current algebra results hold for the first case, they no longer verify for the latter one.
Back to the theory seminar page. | {"url":"http://www.phy.anl.gov/theory/semabstracts07/jora.html","timestamp":"2014-04-19T19:52:32Z","content_type":null,"content_length":"1273","record_id":"<urn:uuid:f180aecd-d2b1-44dc-adfd-295ee724bf5b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the equation of a line with the given points. (Put your answers in the form y = mx + b.) (1, -5) (5, -1)
Slope = rise/run: m = (-5 - (-1)) / (1 - 5) = -4 / -4 = 1. Knowing m = 1, plug in a point to find the constant b: -5 = (1)(1) + b, so b = -6. The equation is y = x - 6.
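The same two-step computation (slope first, then intercept) is easy to script. A small sketch (mine, not part of the original answer):

```python
def line_through(p1, p2):
    """Return (m, b) with y = m*x + b passing through two points with distinct x."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)  # rise over run
    b = y1 - m * x1            # solve y1 = m*x1 + b for b
    return m, b

m, b = line_through((1, -5), (5, -1))
print(f"y = {m}x + {b}")  # y = 1.0x + -6.0
```

Plugging in the two given points reproduces y = x - 6.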
Math Forum Discussions
Topic: induction on finite set.
Replies: 2 Last Post: Feb 26, 2013 5:21 AM
Re: induction on finite set.
Posted: Feb 26, 2013 5:21 AM
The principle of finite induction can be derived from the fact that every nonempty set of natural numbers has a smallest element. This fact is known as the well-ordering principle for natural numbers. Finite induction is therefore not that much broader.
For any given positive integer N, we have 2N-1 real variables x_k \in [0, 1], with 1 \leq k \leq 2N-1.
We also know x_1 = 1 = x_{2N-1}, and for all other k, x_k < 1. Additionally, we may use the notation
S_k = \sum_{j = 1}^{k} x_j
We have a recurrence that is valid for 1 \leq k \leq 2N - 2:
S_k = x_k + (1 - x_k)S_{k+1}
We can manipulate that in a variety of ways that are valid (if I haven't made a mistake) when 1 < k < 2N - 1.
Our aim is to find a general form of x_k as a function of k, for each positive integer N. A general form for S_k would be nice as well. My preliminary attempts suggest perhaps there will be symmetry, with x_{N-n} = x_{N+n}, and I would like to prove or disprove that. I also suspect that there will be N-1 polynomials p_k of degree N, such that for 1 < k \leq N we will have p_k(x_k) = 0. I'd like to know whether that is true or false.
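For the smallest nontrivial case N = 2 (three variables with x_1 = x_3 = 1, only x_2 unknown), the k = 2 instance of the recurrence S_2 = x_2 + (1 - x_2)S_3 reduces to x_2^2 + x_2 - 1 = 0, whose root in [0, 1] is the reciprocal golden ratio. A quick numerical sketch (my own check, not from the thread) confirms this by bisection:

```python
def residual(x2):
    """Residual of S_2 = x_2 + (1 - x_2) * S_3 for N = 2 with x_1 = x_3 = 1."""
    s2 = 1 + x2  # S_2 = x_1 + x_2
    s3 = 2 + x2  # S_3 = x_1 + x_2 + x_3
    return s2 - (x2 + (1 - x2) * s3)

# residual(0) = -1 < 0 and residual(1) = 1 > 0, so a root lies in (0, 1).
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(lo)  # about 0.6180339887, i.e. (sqrt(5) - 1) / 2
```

Whether a comparably clean closed form survives for larger N is exactly the open question posed above.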
Date, Subject, Author:
2/20/13, induction on finite set., sean bruce
2/24/13, Re: induction on finite set., grei
2/26/13, Re: induction on finite set., johnykeets
Creates a circle.
Creates a circle from a center location and a radius.
1. Pick the center.
2. Pick a radius or diameter.
Draws a circle perpendicular to the construction plane.
1. Pick the center.
2. Pick a radius or diameter.
Draws a circle from the two ends of its diameter.
1. Pick the start of the diameter.
2. Pick the end of the diameter.
Draws a circle through three points on the circumference.
1. Pick the first point.
2. Pick the second point.
3. Pick the third point.
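Behind the three-point option is the circumcircle construction. Here is a small Python sketch of the underlying math (illustrative geometry only; this is not Rhino's scripting API):

```python
def circle_from_3_points(p1, p2, p3):
    """Return (center, radius) of the circle through three non-collinear 2-D points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # d is twice the signed area of the triangle; zero means collinear points.
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear; no unique circle")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux)**2 + (y1 - uy)**2) ** 0.5
    return (ux, uy), r

center, radius = circle_from_3_points((0, 1), (1, 0), (0, -1))
print(center, radius)  # (0.0, 0.0) 1.0
```

Any three non-collinear picks determine exactly one circle, which is why the command needs no further input after the third point.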
Draws a circle tangent to curves.
1. Pick the first tangent location on the first curve.
2. Pick the second tangent curve, or type a radius.
3. Pick the third tangent curve, or press Enter to draw the circle from the first and second tangent locations.
Note: The options are the same as the Arc command.
Pick any point. This point does not have to be a tangent point on a curve.
Forces the circle or arc to go through the first picked point on the curve instead of allowing the point to slide along the curve.
The circle is restricted to the specified radius. If a tangent point exists on the second curve that meets the radius requirement, the tangent constraint will appear at that point as you drag the
Draws a circle perpendicular to a curve.
1. Select the curve.
2. Pick the center on the curve.
3. Pick a radius or diameter.
Draws a circle by fitting to selected point objects.
Circle > Circle: Center, Radius
Main1 > Circle: Center, Radius
Curve > Circle > Center, Radius | {"url":"http://4.rhino3d.com/4/help/Commands/Circle_command.htm","timestamp":"2014-04-20T00:58:23Z","content_type":null,"content_length":"12889","record_id":"<urn:uuid:cad622c0-e1bb-4d67-83ed-dd21c6d0a237>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the antiderivative with respect to x of y=(x^2+2x+2)^(1/2)? - Homework Help - eNotes.com
The antiderivative of a function is equivalent to the integral of that function.
Therefore the antiderivative of `(x^2+2x+2)^(1/2)` with respect to `x` is equivalent to
`int (x^2+2x+2)^(1/2)dx`
Rewriting `x^2 + 2x + 2` as `(x+1)^2 + 1` this is equal to
`int [(x+1)^2 + 1]^(1/2)dx`
Now make the substitution
`tan(t) = x+1` , where `(dx)/(dt) = sec^2t` . The integral is then equal to
`int (tan^2t + 1)^(1/2)sec^2t dt`
Since `tan^2t +1 = sec^2t` (trigonometric identity) this equals
`int sec^3t dt `
`= 1/2 sec(t)tan(t) + 1/2ln|sec(t) +tan(t)| + c`
`= 1/2sqrt(x^2+2x+2)(x+1) + 1/2ln|sqrt(x^2+2x+2) + (x+1)| + c`
(since `sec(arctan(x+1)) = sqrt((x+1)^2+1) = sqrt(x^2+2x+2)` )
This result is found by using integration by parts, where we let
`u = sec(t)` and `dv = sec^2(t) dt` (so `v = tan(t)`) and solve
`int u dv = uv - int v du`
The antiderivative of (x^2+2x+2)^(1/2) is equivalent to the integral of the same expression. Using substitution and then integration by parts this is found to be
1/2(x^2+2x+2)^(1/2)(x+1) + 1/2 ln|(x^2+2x+2)^(1/2) + (x+1)| + c
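A quick numerical sanity check of the result: differentiating the antiderivative (by central differences) should give back the original integrand. A small Python sketch:

```python
import math

def F(x):
    """The antiderivative found above (constant of integration omitted)."""
    s = math.sqrt(x * x + 2 * x + 2)
    return 0.5 * s * (x + 1) + 0.5 * math.log(s + (x + 1))

def f(x):
    """The original integrand, sqrt(x^2 + 2x + 2)."""
    return math.sqrt(x * x + 2 * x + 2)

h = 1e-6
for x in (-2.0, 0.0, 1.0, 3.5):
    approx = (F(x + h) - F(x - h)) / (2 * h)  # numerical derivative of F
    print(x, approx, f(x))  # the two values should agree closely
```

Note that sqrt(x^2+2x+2) > |x+1| for every real x, so the logarithm's argument is always positive and the check is valid on the whole real line.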
Posterior analysis for some classes of nonparametric models
Lijoi, A. and Prunster, I. and Walker, S.G. (2008) Posterior analysis for some classes of nonparametric models. Journal of Nonparametric Statistics, 20 (5). pp. 447-457. ISSN 1048-5252.
Recently, James [L.F. James, Bayesian Poisson process partition calculus with an application to Bayesian Levy moving averages, Ann. Statist. 33 (2005), pp. 1771-1799.] and [L.F. James, Poisson
calculus for spatial neutral to the right processes, Ann. Statist. 34 (2006), pp. 416-440.] has derived important results for various models in Bayesian nonparametric inference. In particular, in
ref. [L.F. James, Poisson calculus for spatial neutral to the right processes, Ann. Statist. 34 (2006), pp. 416-440.] a spatial version of neutral to the right processes is defined and their
posterior distribution derived. Moreover, in ref. [L.F. James, Bayesian Poisson process partition calculus with an application to Bayesian Levy moving averages, Ann. Statist. 33 (2005), pp.
1771-1799.] the posterior distribution for an intensity or hazard rate modelled as a mixture under a general multiplicative intensity model is obtained. His proofs rely on the so-called Bayesian
Poisson partition calculus. Here we provide alternative proofs based on a different technique.
Porterdale Calculus Tutor
...Graduated with a focus in Finance and passed the CFA Level 1 exam. Also passed the GACE Mathematics 022, 023 exams. I love helping students overcome their stumbling blocks and I look forward to helping you overcome yours in the coming months! Tutored on Algebra 1 topics during high school, college, and as a GMAT instructor for three years.
28 Subjects: including calculus, physics, statistics, GRE
...I majored in electrical engineering and currently work in the power industry. My love for math has grown since grade school which prompted me to take all of the math courses that I could in
college. Before I transferred to Clemson, I attended Newberry College where I maintained a GPA above 3.0 and majored in Math and Computer Science.
14 Subjects: including calculus, geometry, algebra 1, algebra 2
...Since I have been working with college level trig students for several years, I have a rich, deep understanding of the subject. Much of my engineering background involved trig also. I am
qualified to tutor the math portion of the Praxis, or in Georgia the GACE.
22 Subjects: including calculus, geometry, GRE, ASVAB
...I have a passion for teaching. I love when the light bulb goes on in my students' minds. I hope to be that inspiration to all that I tutor.
7 Subjects: including calculus, algebra 1, algebra 2, trigonometry
I have tutored students at Georgia Perimeter College and Georgia Tech. I got my bachelor's and master's degrees from Georgia Tech in mechanical engineering and graduated with highest honors.
Currently, I am working as a mechanical engineer at a company in Atlanta, GA.
12 Subjects: including calculus, physics, statistics, geometry | {"url":"http://www.purplemath.com/Porterdale_calculus_tutors.php","timestamp":"2014-04-20T11:22:15Z","content_type":null,"content_length":"23811","record_id":"<urn:uuid:365c1842-61f0-4222-afd4-bc7ee9826865>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: May 1994 [00050]
[Date Index] [Thread Index] [Author Index]
long algebraic equations + simplification
• To: mathgroup at yoda.physics.unc.edu
• Subject: long algebraic equations + simplification
• From: "Dr A. Hayes" <hay at leicester.ac.uk>
• Date: Thu, 28 Apr 1994 09:55:54 +0100 (BST)
Mike Sonntag <MICHAEL at usys.informatik.uni-kassel.de>
writes (slightly edited):
> I have a problem concerning the simplification of long
> algebraic equations.
> E.g.
> fkt =x==a^2+b^3*Sqrt[a^3]
> What I want is to simplify this equation by giving combinations
> of parameters new names, e.g.
> A=a^2
> When I do something like
> fkt /. a^2->A
> Mma will only change the first a^2 and not the a^3 as it doesn't
> correspond to this pattern.
I attach a package, AlgebraicRulesExtended, designed to deal with
this sort of problem
fkt =x==a^2+b^3*Sqrt[a^3];
fkt/.AlgebraicRulesExtended[A == a^2]
x == A + Sqrt[a A] b
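The core trick, rewriting every power a^n with n >= 2 in terms of A = a^2 (so a^3 becomes a*A, a^4 becomes A^2, and so on), can be sketched outside Mathematica too. A toy Python version for plain integer powers of a single variable (purely illustrative; the actual package handles general equations and fractional powers like Sqrt[a^3]):

```python
def rewrite_powers(poly):
    """poly maps an exponent n of a to its coefficient, e.g. a**4 + a**3 + a + 1
    is {4: 1, 3: 1, 1: 1, 0: 1}.  Applies the rule A == a**2 as far as possible,
    rewriting a**n as A**(n // 2) * a**(n % 2).
    Returns a dict mapping (exponent of a, exponent of A) -> coefficient."""
    out = {}
    for n, coeff in poly.items():
        exp_A, exp_a = divmod(n, 2)   # a**n == A**(n // 2) * a**(n % 2)
        key = (exp_a, exp_A)
        out[key] = out.get(key, 0) + coeff
    return out

# a**4 + a**3 + a + 1  ->  A**2 + a*A + a + 1
print(rewrite_powers({4: 1, 3: 1, 1: 1, 0: 1}))
```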
Bug reports and suggestions for improvement will be very welcome.
Allan Hayes
Department of Mathematics
The University
Leicester LE1 7RH
hay at leicester.ac.uk
(* :Title: AlgebraicRulesExtended *)
(* :Author: Allan Hayes, hay at leicester.ac.uk *)
(* :Summary:
AlgebraicRulesExtended is a package containing the single function,
AlgebraicRulesExtended, an extension of the system
function AlgebraicRules.
It gives a single rule that can be used exactly like
AlgebraicRules (though the latter gives a list of rules).
The advantages are that:
(1) it does not require the user to list all the symbols
occurring in the expression to which the rule is applied
in order to avoid error messages;
(2)it uses Map so as to get at places that Replace alone
cannot reach.
(* :Context: Haypacks`AlgebraicRulesExtended ` *)
(* :Package Version: 1.3 *)
(* :Copyright: Copyright 1993,1994 Allan Hayes. *)
(* :History:
Version 1.1 by Allan Hayes, January 1993;
Version 1.2 by Allan Hayes, March 1994;
Version 1.3 by Allan Hayes, April 1994.
(* :Keywords: Algebra, Simplification, Manipulation *)
(* :Warning: uses OtherVariables as a special symbol.*)
(* :Mathematica Version: 2.2 *)
(**Begin Package**)
(**Usage Messages**)
AlgebraicRulesExtendedInfo::usage = "
AlgebraicRulesExtended is a package that contains one function,
AlgebraicRulesExtended, an extension of the system function
AlgebraicRules for replacing variables according to given equations.\n
Please see the separate entry for more information and examples."
AlgebraicRulesExtended::usage = "
AlgebraicRulesExtended[eqns], for a list of equations or a single equation eqns
gives a replacement rule for replacing earlier variables in the list of
variables in eqns (in default order) with later ones according to the equations
eqns. The special symbol OtherVariables must not occur in eqns.\n
The order of replacement may be modified as follows:\n
AlgebraicRulesExtended[eqns, vars] where eqns is a listof equations or a single
equation (which should not involve the special variable OtherVariables) and
vars is a list of variables or a single variable gives a rule for replacing
variables according to eqns. The preferences amongst the variables are
determined by vars as follows.\n
If vars includes the symbol OtherVariables then this is replaced by the
sequence of those variables in eqns which are not in vars (in default order),
and then a replacement rule is returned for replacing earlier symbols in the
resulting list by later ones.\n
If vars does not include OtherVariables then OtherVariables is first appended
to vars and the evaluation then proceeds as above.\n
AlgebraicRulesExtended[eqns, var] for a single variable var evaluates like
AlgebraicRulesExtended[eqns, {var}].\n\n
If rule is the rule returned then it is used in the usual way: expr/. rl.\n\n
AlgebraicRulesExtended has the combined options of AlgebraicRules
and Cases.\n
Changes involving the heads of expressions may be made by using the option
Heads -> True. The default, Heads -> False excludes heads from the replacement
eqns = { c1^2 + s1^2 == 1, c2^2 + s2^2 == 1, c3^2 + s3^2 == 1 };\n\n
expr = Tan[s1^2 + a^c + c1^2]/(b(s2^4 +k + c2^4));\n\n
arex = AlgebraicRulesExtended[eqns]\n
eqns = {M == n^4 + 4*k^2*p^2 - 2*n^2*p^2 + p^4, w^2 == n^2-k^2};\n\n
expr = (-(((-2*a*k*p - n^4*y0 - 4*k^2*p^2*y0 + 2*n^2*p^2*y0 - p^4*y0)*
Cos[(-k^2 + n^2)^(1/2)*t])/
(n^4 + 4*k^2*p^2 - 2*n^2*p^2 + p^4)) -
(((n^4 + 4*k^2*p^2 - 2*n^2*p^2 + p^4)*
(a*n^2*p - a*p^3 - n^4*yp0 - 4*k^2*p^2*yp0 +
2*n^2*p^2*yp0 - p^4*yp0) -
(-(k*n^4) - 4*k^3*p^2 + 2*k*n^2*p^2 - k*p^4)*
(-2*a*k*p - n^4*y0 - 4*k^2*p^2*y0 + 2*n^2*p^2*y0 -
p^4*y0))*Sin[(-k^2 + n^2)^(1/2)*t])/
((n^4 + 4*k^2*p^2 - 2*n^2*p^2 + p^4)*
(n^4*(-k^2 + n^2)^(1/2) + 4*k^2*(-k^2 + n^2)^(1/2)*p^2 -
2*n^2*(-k^2 + n^2)^(1/2)*p^2 + (-k^2 + n^2)^(1/2)*p^4)))/
E^(k*t) + (-2*a*k*p*Cos[p*t] + a*n^2*Sin[p*t] - a*p^2*Sin[p*t])/
(n^4 + 4*k^2*p^2 - 2*n^2*p^2 + p^4);
arex = AlgebraicRulesExtended[eqns,{OtherVariables,M,w}]\n
(a x c)[a x c]/.AlgebraicRulesExtended[ a x == b]\n
(a x c)[a x c]/.AlgebraicRulesExtended[ a x == b, Heads -> True]\n
x^3/.AlgebraicRulesExtended[x^2 ->z]\n
x^3/.AlgebraicRulesExtended[x^2 ->z, z]
OtherVariables::usage = "OtherVariables is a special symbol for
AlgebraicRulesExtended: the first step in the evaluation of
AlgebraicRulesExtended[eqns, vars] is to replace any occurrences of the symbol
OtherVariables in vars with the sequence of those variables in eqns that are
not in vars (arranged in default order).
(**Begin Private**)
(* Define a short format for the output.The name \"expression\" is included to
localize the format to this context - if it is changed then its other
occurrences must also be changed.
expression_ :> With[_List, _Off; _ = MapAt[(#/.AR_)&,_,_]; _On; _]
]:= AR;
Options[AlgebraicRulesExtended] =
AlgebraicRulesExtended[eqns_, opts___?OptionQ] :=
AlgebraicRulesExtended[eqns, {OtherVariables},opts];
AlgebraicRulesExtended[eqns_, var_Symbol, opts___?OptionQ] :=
AlgebraicRulesExtended[eqns, {var}, opts];
] := AlgebraicRulesExtended[eqns, {var, OtherVariables}, opts];
AlgebraicRulesExtended[eqns_, {var__Symbol}, opts___?OptionQ] :=
{fullvars, ar},
{filops =
( Off[First::first,Function::fpct];
fop = FilterOptions[##];
fullvars =
filops[Cases, opts]
ar =
filops[AlgebraicRules, opts]
{AM =
(#/. rules)&
lhs =
If[ ar[[-2]] === {{}},
expression_ :>
{posn =
filops[Position, opts],
ans = MapAt[AM,expression,posn]; | {"url":"http://forums.wolfram.com/mathgroup/archive/1994/Apr/msg00050.html","timestamp":"2014-04-19T17:20:31Z","content_type":null,"content_length":"41793","record_id":"<urn:uuid:446cdcf5-4bf7-497f-8bd0-133a26a34516>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
A bank contains 30 coins, consisting of nickels, dimes, and quarters. There are twice as many nickels as quarters, and the remaining coins are dimes. If the total value of the coins is $3.35, what is the number of each type of coin in the bank?
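The question can be settled with two equations (coins: n + d + q = 30 with n = 2q; value: 5n + 10d + 25q = 335 cents), or simply checked by brute force. A short Python sketch:

```python
# 30 coins total, nickels = 2 * quarters, the rest are dimes, value $3.35.
solutions = []
for quarters in range(31):
    nickels = 2 * quarters
    dimes = 30 - nickels - quarters
    if dimes < 0:
        continue
    if 5 * nickels + 10 * dimes + 25 * quarters == 335:
        solutions.append((nickels, dimes, quarters))

print(solutions)  # [(14, 9, 7)]: 14 nickels, 9 dimes, 7 quarters
```

So the bank holds 14 nickels (70 cents), 9 dimes (90 cents), and 7 quarters ($1.75).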
• one year ago
Partial Pressure
Full screen
Divers are smart and know all about partial pressures and gases going into solution and such, because it is just part of having fun and staying alive. But having had to work it out so I could explain it, I thought I'd type it up so I could keep a copy, and I wanted pictures, so that was HTML and so... So I stuffed it on the web site.

Picture a single gas molecule in a cylinder closed by a piston. At everyday temperatures (a few tens of °C) it is pounding along at about one thousand miles per hour and bouncing off the walls lots and lots of times a second.
This allows us to make some immediate deductions about one molecule situations.
1. When the molecule hits the piston it tries to push it back. A force is being applied to it. It is doing the same to the other walls but we can get our fingers on the piston and feel the force for
ourselves. This force is what we call pressure.
2. If the piston was half the size the molecule would only hit it half as often so we would only get half the force. Hence it is easy to measure pressure in force divided by area units (pound per
square inch, kilograms per square meter) so that we can work out what forces to expect if we know how big the piston is.
3. The faster the molecule is going the more often it will bounce off the piston and when it does so it does it more vigorously. Hence hotter (read faster) molecules will impart more force on the
piston hence more pressure.
4. If we give it more volume to rush around in, it will spend more time rushing around and less time piston hitting so more volume implies less pressure.
Yeah. Just one molecule. Bit of a special case eh?
Well no. Molecules are stupid. Deep down stupid. What makes you think that one molecule knows or cares if there is another molecule in this universe? Put two molecules in our cylinder and sometimes
they might bounce off one another snooker ball fashion but effectively they just carry on bouncing about. Put 20000000000000000000000 molecules in and you are getting realistic for a litre of air.
(That was 22 zeros by the way, count them.) Now they spend a lot more time bouncing off one another, but the piston just gets hit 2000... (etc.) times as often.
OK so what do we now know about pressure?
More gas = more pressure
Less volume = more pressure
Hotter = more pressure
This takes us to the Ideal Universal Gas Law which tends to be written in diving books as PV/T is constant. i.e. Pressure times Volume divided by Absolute Temperature is a constant for any sample of
gas so we can work out what happens if we heat, cool, compress or anything a constant amount of gas. So if you halve the volume something has to change to keep things constant so normally we assume
the pressure has doubled although what tends to happen is the temperature goes up a bit so the pressure more than doubles. The new PV/T continuing to equal the old PV/T. When using it never forget
that absolute temperature is degrees centigrade plus 273 or you get some very silly results.
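The bookkeeping is easy to mechanise. A small Python sketch of the P1*V1/T1 = P2*V2/T2 calculation (units are arbitrary as long as they are consistent; temperatures must be converted to absolute):

```python
def new_pressure(p1, v1, t1_c, v2, t2_c):
    """Pressure after a change of volume and temperature, from P1*V1/T1 = P2*V2/T2.
    Temperatures are taken in degrees centigrade and converted to absolute (+273)."""
    t1 = t1_c + 273.0
    t2 = t2_c + 273.0
    return p1 * (v1 / v2) * (t2 / t1)

# Halve the volume at constant temperature: the pressure doubles.
print(new_pressure(1.0, 2.0, 20.0, 1.0, 20.0))  # 2.0
# Halve the volume while the gas warms from 20 to 50 degrees: more than doubles.
print(new_pressure(1.0, 2.0, 20.0, 1.0, 50.0))  # about 2.2
```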
Read the article on Van der Waals' work to get a more detailed explanation and where it is too simplistic.
What about different gases?
Well the molecules of different gases are heavier or lighter but the same general rules apply. Remember that our molecules continue to be stupid so they don't know that they are now in a mixed gas
scenario. So if I have enough Oxygen molecules to give me 3 units of pressure on my piston if they were on their own and enough Nitrogen to give me 5 units of pressure again on their own and I stuff
the lot into the cylinder together I get (dramatic pause) 8 units of pressure.
Physicists like to keep things complicated so we speak of having a partial pressure of 3 for Oxygen and 5 for Nitrogen and a total pressure of 8. That tells us how much Oxygen is contributing to the
pressure and in effect how much oxygen there is in the mixed gas. Beware however. Having the same partial pressures does not mean you have the same quantity of gases, just that they are pressing as
hard. Heavier gases play rougher when it comes to bouncing off pistons.
If we introduce a mouse to our cylinder (no, not a computer mouse, the whiskers and tail variety) and wait we will begin to observe a decrease in the oxygen partial pressure and an increase in the
carbon dioxide partial pressure and if we leave it too long we will have an ex-mouse situation develop.
The wonderful discovery that partial pressures are independent and you can just add them up is called Dalton's Law. It may be pretty obvious now but poor old Mr. Dalton had to work it out from
scratch and that deserves serious credit.
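Dalton's law is what lets divers work out how much of each gas they are actually breathing at depth. A rough Python sketch (the 1 bar per 10 m of seawater figure is the usual approximation, used here purely for illustration):

```python
def absolute_pressure_bar(depth_m):
    """Roughly 1 bar of atmosphere plus about 1 bar per 10 m of seawater."""
    return 1.0 + depth_m / 10.0

def partial_pressures(fractions, depth_m):
    """Partial pressure of each gas: its fraction times the absolute pressure."""
    p = absolute_pressure_bar(depth_m)
    return {gas: frac * p for gas, frac in fractions.items()}

air = {"O2": 0.21, "N2": 0.79}
# At 30 m the absolute pressure is about 4 bar, so every partial pressure
# is four times its surface value.
print(partial_pressures(air, 30))  # O2 ~ 0.84 bar, N2 ~ 3.16 bar
```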
Dissolved gases
When a liquid and a gas are in contact two things happen. Some of the liquid molecules might become detached from the surface and rush about pretending to be a gas and some of the gas molecules steam
into the liquid, forget to bounce off, and stooge around pretending to be liquid.
The first is called evaporation and the second is called dissolving. Now evaporation is a complex process that involves a liquid molecule scraping together enough energy (called latent heat) to
actually break free of the embrace of all its fellow liquid molecules so I'm not going to worry about it here but dissolving of gases is a more interesting process (to divers).
Dissolving and undissolving is a two way process. Every time a molecule of gas meets the top of the liquid from either the inside or the outside there is a mathematically determined probability that
it will swap sides. It is a two way stream with molecules dissolving and undissolving all the time and things generally tend toward a balanced situation.
Let's do some simple sums (or you can beep them out if maths is a dirty word to you).
Imagine I know the probability that any nitrogen atom hitting a water surface will dissolve (I don't); say it is 5%, that is 0.05 as a fraction, or something. Call this r[1]. Hence the number of nitrogen atoms dissolving will be the partial pressure (p[1]), because that tells us how many atoms are hitting the surface, multiplied by r[1].
Inside the liquid we have the exact analogy of partial pressure for the dissolved gas molecules moving about, call this p[2] and some probability that they will undissolve, call it r[2].
If we leave it long enough to even out finally the number of molecules dissolving will equal the number undissolving so,
p[1] * r[1] = p[2] * r[2] (please excuse the computereese * for multiply as x just looks like a letter)
Now why have I gone to this length? Because you can now see that if I increase the partial pressure (p[1]) of the gas then, given time, the amount dissolved (represented by p[2]) will go up by
exactly the same fraction so our equation stays balanced.
This time the honours go to a Mr. Henry. Henry's law is that the amount of gas that will dissolve in a fluid given time is directly proportional to the partial pressure of that gas over the fluid.
Interesting. Remember that all the gases act independently so from air the Oxygen dissolves according to its own partial pressure and Nitrogen according to its own, quite separate, partial pressure.
What do we measure pressures and partial pressures in?
Well the old classics were to use a mercury barometer and measure it in millimetres or inches of mercury but you had to convert these to something useful before doing any sums. Now we tend to use
force per area methods so pounds per square inch or newtons per square meter (known as Pascals) or 10^6 dynes per sq. cm (known as Bar).
The unit Bar is rather handy as 1 bar is just about the pressure of the atmosphere at sea level so a tank at 200bar is roughly 200 times atmospheric pressure and contains about 200 times as much as
it would at 1 bar. | {"url":"http://www.combro.co.uk/nigelh/diver/pp.html","timestamp":"2014-04-20T15:50:38Z","content_type":null,"content_length":"9694","record_id":"<urn:uuid:706fe17b-ef7e-4d83-8e8d-63ae54ac42f6>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
Covariance and Contravariance in Scala - Atlassian Blogs
• By Michael Peyton Jones, Michael Peyton Jones
• On January 15, 2013
I spent some time trying to figure out co- and contra-variance in Scala, and it turns out to be both interesting enough to be worth blogging about, and subtle enough that doing so will test my understanding.
So, you’ve probably seen classes in Scala that look a bit like this:
sealed abstract class List[+A] {
  def head : A
  def ::[B >: A] (x:B) : List[B] = ...
  ...
}
And you’ve probably heard that the +A means that A is a “covariant type parameter”, whatever that means. And if you’ve tried to use classes with co- or contra-variant type parameters, you’ve probably
run into cryptic errors about “covariant positions” and other such gibberish. Hopefully, by the end of this post, you’ll have some idea what that all means.
The first thing that’s going on there is that List is a “generic” type. That is, you can have lots of List types. You can have List[Int], and List[MyClass] or whatever. To put this in another way,
List[_] is a type constructor; it’s like a function that takes another concrete type and produces a new one. So if you already have a type X, you can use the List type constructor to make a new type,
A little bit of category theory
To get the cool stuff in all its generality, we’re going to need to start thinking about things in terms of categories. Fortunately, it’s pretty non-scary categories stuff. Recall that a category C
is just some objects and some arrows (which we usually gloss as “functions”). Arrows go from one object to another, and the only requirements for being a category are that you have some binary
operation on arrows (usually glossed as “composition”), that makes new arrows that go from and to the right places; and that you have an “identity” arrow on every object that does just what you’d
expect.^1 The category we’re mostly interested in is the category of types: types like Int, Person, Map[Foo, Bar] are the objects, and arrows are precisely functions.
The other concept we’re going to need is that of a functor. A functor F: C -> D is a mapping between categories. However, there’s no reason you can’t have functors from categories to themselves
(“helpfully” called “endofunctors”), and those are the ones we’re going to be interested in. Functors have to turn objects in the source category into objects in the target category, and they also
have to turn arrows into new arrows. Again, functors have to obey certain laws, but don’t worry too much about that.^2
Okay, so who cares about functors? The answer is that type constructors are basically functors on the category of types. How is that? Well, they turn types (which are our objects) into other types:
check! But what about the arrows (i.e. functions). Don’t functors have to map those over as well? Yes, they do, but in Scala we don’t call the function that comes out of the List functor List[f], we
call it map(f).^3
One final concept and then I promise this will start to get relevant. Some mappings between categories look a lot like functors, except that they reverse the direction of arrows. So instead of
getting F(f): FX -> FY, you get F(f): FY -> FX. So these got a special name, they're called contravariant functors. To distinguish them, normal functors are called covariant functors.
Look at that, there are those funny words again. But what on earth do contravariant functors have to do with Scala?
Good question.
The key feature of Scala, for our purposes, is that it’s a language with subtyping. Classes (types) can be sub- or super- types of other classes. This gives us the familiar idea of a class hierarchy.
Looking at it mathematically, we can say that we have a relation <: between types that acts as a partial order. Here comes neat Category Theory Trick no. 1: we can view any partially ordered set as a
category! The objects are the objects, and we have an arrow A ->B iff A <: B. This is a bit weird, because we’re only ever going to have one arrow between objects, and they’re not really “functions”
any more, but all the formal machinery still works.^4
Now some type constructors on this category still look like functors. They map objects to other objects, and if one of those objects is a subtype of the other, then they may or may not impose a
relationship between the mapped objects.
This is where the Scala type annotations come in. When we declare List[+A], we are saying that List is covariant in the parameter A.^5 What that means is that it takes a type, say Parent, to a new
type List[Parent], and if Child is a subtype of Parent, then List[Child] will be a subtype of List[Parent]. If we’d declared List to be contravariant (List[-A]), then List[Child] would be a supertype
of List[Parent].
There’s one final possibility. Since subtyping is a partial order, we can have two types where neither one is a subtype of the other. There’s no reason in principle why a type constructor T couldn’t
take Parent and Child to new types which were completely unrelated. In Scala, this is the case when you don’t provide an annotation for the type in the declaration; such a constructor is said to be
invariant in that parameter. Arrays, for example, have this property.
And that, fundamentally, is it. That’s what those little +s and -s on type paramters mean. You can go home now.
class GParent
class Parent extends GParent
class Child extends Parent

class Box[+A]
class Box2[-A]

def foo(x : Box[Parent]) : Box[Parent] = identity(x)
def bar(x : Box2[Parent]) : Box2[Parent] = identity(x)

foo(new Box[Child])    // success
foo(new Box[GParent])  // type error
bar(new Box2[Child])   // type error
bar(new Box2[GParent]) // success
But what about those cryptic errors?
class Box[+A] {
  def set(x : A) : Box[A]
} // won't compile
You get these kinds of errors in Scala because of the subtleties of how variance relates to functions (and later, methods). We can see that there’s something weird going on if we look at the
declaration of the Function trait:
trait Function1[-T1, +R] {
  def apply(t : T1) : R
  ...
}
7 }
Whoa. That’s pretty strange. Not only does it have two type parameters, one of them is contravariant. Weird. Let’s work through this methodically.
We have Function1[A,B], which is a type of one-parameter functions that go from type A to type B. It can therefore be a sub- or super-type of other (function) types. For example,
Function1[GParent, Child] <: Function1[Parent, Parent]
How do I know this? Because of the variance annotations on Function1. The first parameter is contravariant, so can vary upwards, and the second parameter is covariant, so can vary downwards.
The reason why Function1 behaves in this way is a bit subtle, but makes sense if you think about the way substitution has to work when you have subtyping. If you have a function from A to B, what can
you substitute for it? Anything you put in its place must make fewer requirements on its input type; since the function can't, for example, get away with calling a method that only exists on subtypes
of A. On the other hand, it must return a type at least as specialised as B, since the caller of the function may be expecting all the methods on B to be available.
Function Functors
There’s actually a nice category theory justification for why things have to be this way. In general, for any category C we can also construct a category of the Hom-sets of C. Functions between these
sets will just be higher-order functions that turn functions into different functions. There is then an obvious functor, Hom(-, -) that takes two objects A and B and produces Hom(A, B). The
Hom-functor is a bit tricky because it's a bifunctor: it takes two arguments. The easiest way to deal with it is to sort of "partially apply" it and look at how it behaves on each of its arguments separately.
So Hom(A, -) takes an object B to the set of functions from A to B. How does it act on functions? If we have a morphism f: B -> B' we need a function Hom(A, f): Hom(A, B) -> Hom(A, B'). The obvious
definition is
Hom(A, f)(g) = f . g
That is, you do g first, to get from A to B, and then f to get from B to B’. So Hom(A, -) acts as a covariant functor.
On the other hand, if you try and make Hom(-, B) into a covariant functor, good luck! The types just don’t line up if you try and do composition. What does work is the following:
Hom(f, B)(g) = g . f
where g is in Hom(B’, B), rather than Hom(A, B). So Hom(-, B) acts as a contravariant functor.^6 Which makes Hom(A, B) contravariant in A, and covariant in B — just like Function1!^7
This is actually a more general result, since it applies in any category, and not just in the category of types with subtyping. Cool!
Back to Earth
Okay, so functions in Scala have these weird variance properties. But from a theoretical point of view, methods are just functions, and so they ought to have the same variance properties, even though
we can’t see them (methods don’t have a trait in Scala!).
So we can now see why we got that cryptic compile error. We declared that A was covariant in our class, and also that set takes a parameter of type A. But then, for some B <: A we could replace an instance of Box[A] with an instance of Box[B], and hence a call to Box[A].set(x) with a call to Box[B].set(x), where x : A. But set[A] can't be replaced by set[B] as an argument, for the reasons we discussed above; at best it can be contravariant. So this would allow us to do stuff we shouldn't be able to do. Likewise, if we declared A as contravariant then we would run into conflict with the return type of set. So it looks like we have to make A invariant.
As an aside, this is why it’s an absolutely terrible idea that Java’s arrays are covariant. That means that you can write code like the following:
Integer[] ints = {1, 2};
Object[] objs = ints;
objs[0] = "I'm an integer!";
Which will compile, but throw an ArrayStoreException at runtime. Nice.
Actually, we don’t have to make container types with an “append”-like method invariant. Scala also lets us put type bounds on things. So if we modify Box as follows:
class BoundedBox[+A] {
  def set[B >: A](x : B) : Box[B]
}
then it will compile. This ensures that the input type of the set method is properly contravariant.
And that’s about it. The thing to remember with Scala is that everything is a method. So if you’re getting surprising variance errors, it might be that you have a sneaky method somewhere that needs a
lower bound.
1. In full, the requirements are: a class of objects, C; for every pair of objects, a class of morphisms between them, Hom(A, B); and a binary operation . : Hom(B, C) x Hom(A, B) -> Hom(A, C), which is associative and has the identity morphism as its identity.
2. These are: F(id_X) = id_{FX}, and F(f.g) = F(f).F(g).
3. The astute reader will have noticed that not all type constructors come with a map function. This does indeed mean that not all type constructors are functors. But pretend that they are for now.
4. Crucially, we can use the relation to give us our arrows because it’s transitive, and hence composition will work properly.
5. Yes, there can be more than one parameter. Don’t worry about it for now.
6. If you’re wondering whether there couldn’t be some other way of mapping the functions that would work, it turns out that there can’t be one that also makes the functor laws work. You can try it
yourself if you don’t believe me!
7. We actually need to do a little bit more work to show that Hom(-, -) is a true bifunctor (functor on the product category), but it’s not terribly interesting.
Comments (3)
Thanks. This explains it really well.
• I started reading but it became too confusing after a while.. I even tried re-reading parts but got no luck.
I think it could be convenient to use examples to explain definitions.
Thanks anyway! | {"url":"http://blogs.atlassian.com/2013/01/covariance-and-contravariance-in-scala/","timestamp":"2014-04-17T00:52:27Z","content_type":null,"content_length":"59058","record_id":"<urn:uuid:92c154b1-2507-48c4-af63-151628a14a00>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with fractions
Can someone please tell me how to explain to a 3rd grader how to find fractions greater than 3/4??
To add to Plato's reply, we can take $\frac{4}{5}$, which is less than $\frac{5}{6}$, which is less than... ... $\frac{100}{101}$, which is less than... etc. Any fraction of the form $\frac{n}{n+1}$,
where n is a whole number, will get closer and closer to "1.000" as n gets larger. But you might want to leave that part out of the discussion with the 3rd grader...
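If it helps to see the pattern numerically, this short program (mine, not from the thread) prints n/(n+1) for growing n and shows the values creeping toward 1:

```java
public class FractionLadder {
    public static void main(String[] args) {
        for (int n : new int[]{3, 4, 9, 99, 999}) {
            // note the 1.0: dividing two ints in Java would throw away the remainder
            System.out.printf("%d/%d = %.4f%n", n, n + 1, n / (n + 1.0));
        }
        // prints 0.7500, 0.8000, 0.9000, 0.9900, 0.9990
    }
}
```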
She really understood it, thanks. For less than, would it work the same way but subtract one?
Yes, but you'll run out (at least if you stick to the pattern n/(n+1)) after 1/2. Or maybe 0/1! What you can do is to increase the denominator (bottom) while leaving the numerator (top) the same.
This would make a bunch of sense to a 3rd grader if you expressed the number of pizza each person got at a party with a fraction... 1/7 means 1 pizza for 7 people. 2/5 means 2 pizzas for 5 people. If
we only have 2 pizzas but keep inviting more and more people, we get... 2/20 --> 2 pizzas for 20 people. 2/100 --> 2 pizzas for 100 people... etc. Clearly you can tell that each person will only get
a bite after increasing the denominator sufficiently!
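The pizza picture can be checked the same way; with the pizzas fixed at 2, each share 2/n shrinks as the guest list n grows (a sketch with made-up party sizes):

```java
public class PizzaShare {
    public static void main(String[] args) {
        int pizzas = 2;
        for (int people : new int[]{5, 20, 100}) {
            System.out.printf("%d pizzas, %d people: %.3f pizza each%n",
                    pizzas, people, (double) pizzas / people);
        }
        // prints 0.400, then 0.100, then 0.020: more people, smaller share
    }
}
```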
That's awesome, thank you very much; she says that makes more sense!!! =^)
L and inaccessibles as models of ZFC
February 1st 2011, 06:27 AM #1
L and inaccessibles as models of ZFC
Three facts:
(1) The constructible universe L is the minimal model for ZFC;
(2) L is a model of "there exists an inaccessible cardinal $\kappa$", and
(3) if V=L, an inaccessible cardinal with the membership relation $\epsilon$ is a model of ZFC.
So, what is confusing me is: if the universe of L contains $\kappa$$^{L}$, then how can L be the minimal model? Wouldn't <$\kappa$, $\epsilon$> be a model that is smaller?
PS, how come, when I wrapped math brackets around ^{L}, it didn't go to superscript?
Last edited by nomadreid; February 1st 2011 at 06:30 AM. Reason: problems with Latex
[tex]\kappa^{L} [/tex] gives $\kappa^{L}$.
Thanks, Plato. I hope the mathematical solution is as simple as the technical one.
If I remember correctly, then L is minimal with respect to any universe which has the same ordinals as L. But $L_{\kappa}$ doesn't contain the ordinal $\kappa$.
Thanks, DrSteve. This is a key point: I did not know the bit about
which has the same ordinals as L
Also you implicitly pointed out that my question should not have been "isn't < k,epsilon> a smaller model?" but "isn't <L_k, epsilon> a smaller model?" Again, thanks.
Hm, in "Quick Reply" mode the option to use LaTeX seems to have disappeared. But DrSteve used LaTeX in his reply. What is going on?
If you hit the "Go Advanced" button after replying you will get the TeX button back.
Of course an ordinal can never be a model of set theory. For example the pairing axiom fails (most pairs of ordinals aren't ordinals).
Please just note my statement "if I remember correctly." I haven't studied the constructible universe in a while, so just make sure you confirm that what I said regarding "having the same
ordinals" is correct.
Thanks, DrSteve
Thanks, DrSteve. I got the LaTeX back. I will try it out on this post.
It was, of course, silly to put $\kappa$ instead of $L_{\kappa}$, you're right.
Rephrasing your suggestion about the minimal model, it does indeed make sense: it seems that L is the minimal inner model of ZFC, but there is an ordinal $\alpha$ smaller than $\kappa$ such that
$L_{\alpha}$ is a minimal model of ZFC. You have put me on the right track, so thanks again.
Last edited by nomadreid; February 2nd 2011 at 04:50 AM. Reason: erased something incorrect
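To summarize the two senses of "minimal" being juggled in this thread (my restatement, not from the posts):

```latex
% (1) Minimal inner model: for every transitive class model M of ZF
%     that contains all the ordinals,
L \subseteq M .
% (2) Minimal set model: assuming some L_\beta \models \mathrm{ZFC}
%     (which holds if an inaccessible exists), let
\alpha = \min\{\beta : L_\beta \models \mathrm{ZFC}\} .
% Then L_\alpha is the minimal transitive set model of ZFC, and if
% \kappa is inaccessible then L_\kappa \models \mathrm{ZFC}, so \alpha < \kappa.
```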
Another little tip: double-clicking on "Reply to Thread" takes you to the "Advanced Editing Mode" in one step.
Thanks, Ackbeet. Good tip to know.
magnetic flux density vs magnetic flux
Hi chanderjeet!
Magnetic flux, Φ, is a scalar, measured in webers (or volt-seconds), and is a total amount measured across a surface (ie, you don't have flux at a point).
Magnetic flux density, B, is a vector, measured in webers per square metre (or teslas), and exists at each point.
The flux across a surface S is the integral of the magnetic flux density over that surface:
Φ = ∫∫[S] B.dS
(and is zero for a closed surface)
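As a worked number (my own example, not from the thread): for a uniform field through a flat surface, the integral collapses to Φ = B A cos θ, where θ is the angle between B and the surface normal:

```java
public class FluxDemo {
    // Uniform B through a flat surface of area A, tilted theta from the normal:
    // the surface integral reduces to Phi = B * A * cos(theta), in webers.
    static double flux(double bTesla, double areaM2, double thetaRad) {
        return bTesla * areaM2 * Math.cos(thetaRad);
    }

    public static void main(String[] args) {
        System.out.println(flux(0.5, 0.2, 0));                  // 0.1 Wb, face-on
        System.out.println(flux(0.5, 0.2, Math.toRadians(90))); // ~0 Wb, edge-on
    }
}
```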
Magnetic flux density is what physicists more commonly call the magnetic field. It is a density per area, rather than the usual density per volume (and they can be used interchangeably).
Similarly, electric flux is a scalar, measured in volt-metres, and electric flux density (also a density per area), E, is a vector, measured in volts per metre (and is more commonly called the
electric field). | {"url":"http://www.physicsforums.com/showthread.php?t=382880","timestamp":"2014-04-19T09:38:13Z","content_type":null,"content_length":"33682","record_id":"<urn:uuid:e76226ee-7fda-4903-978a-e9e18db7daf7>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Class Summary
Fraction: Fraction is a Number implementation that stores fractions accurately.
IEEE754rUtils: Provides IEEE-754r variants of NumberUtils methods.
NumberUtils: Provides extra functionality for Java Number classes.
Package org.apache.commons.lang3.math Description
Extends java.math for business mathematical classes. This package is intended for business mathematical use, not scientific use. See Commons Math for a more complete set of mathematical classes.
These classes are immutable, and therefore thread-safe.
Although Commons Math also exists, some basic mathematical functions are contained within Lang. These include a Fraction class, various utilities for random numbers, and the flagship class, NumberUtils, which contains a handful of classic number functions.
There are two aspects of this package that should be highlighted. The first is NumberUtils.createNumber(String), a method which does its best to convert a String into a Number object. You have no
idea what type of Number it will return, so you should call the relevant xxxValue method when you reach the point of needing a number. NumberUtils also has a related isNumber(String) method.
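To illustrate why callers fall back on the xxxValue methods, the sketch below imitates the "narrowest type that fits" idea with plain JDK parsing; it is not the NumberUtils implementation, just the shape of the contract:

```java
public class CreateNumberSketch {
    // Return the narrowest convenient Number for s: Integer, then Long, then Double.
    static Number parse(String s) {
        try { return Integer.valueOf(s); } catch (NumberFormatException ignored) { }
        try { return Long.valueOf(s); } catch (NumberFormatException ignored) { }
        return Double.valueOf(s); // still throws if s is not numeric at all
    }

    public static void main(String[] args) {
        Number n = parse("9999999999");                   // too big for an int
        System.out.println(n.getClass().getSimpleName()); // Long
        System.out.println(n.doubleValue());              // normalize at the point of use
    }
}
```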
$Id: package-info.java 1559146 2014-01-17 15:23:19Z britter $ | {"url":"http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/math/package-summary.html","timestamp":"2014-04-16T19:00:15Z","content_type":null,"content_length":"8417","record_id":"<urn:uuid:1f4cd28d-07eb-4cbf-bf02-dd0a28b3b472>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |